In our region, you won't see much with less than 8 hours... moisture, sea air, and fog... and I shoot monochrome to gather signal more efficiently and get more...
Sorry, I didn't notice right away that the cameras are monochrome. Now I'm really confused. I guess I need to shoot with RGB filters? And how do you later combine the data from two different cameras?
I think if you shot one hour per channel for RGB, it would be much more effective. And why do you need a UHC filter with monochrome?
You need completely different filters, in my opinion.
It's simple: in Siril it's all just pixels... it doesn't matter which cameras or how many... you crop to the smallest sensor size, and StarAlignment in PixInsight aligns everything precisely and evenly.
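If it helps, here's a minimal sketch of that crop-to-the-smallest-sensor step, assuming the per-camera masters are already registered and loaded as 2-D NumPy arrays (the sensor sizes, synthetic data, and simple averaging here are illustrative only, not what Siril or PixInsight actually do internally):

```python
import numpy as np

def center_crop(img: np.ndarray, height: int, width: int) -> np.ndarray:
    """Crop a 2-D frame to (height, width) around its center."""
    top = (img.shape[0] - height) // 2
    left = (img.shape[1] - width) // 2
    return img[top:top + height, left:left + width]

# Hypothetical masters from two different cameras: a square
# 3008x3008 frame (an IMX533-class sensor) and a rectangular one.
master_a = np.random.rand(3008, 3008).astype(np.float32)
master_b = np.random.rand(2822, 4144).astype(np.float32)

# Crop both to the common overlap size before combining.
h = min(master_a.shape[0], master_b.shape[0])
w = min(master_a.shape[1], master_b.shape[1])
combined = np.mean([center_crop(master_a, h, w),
                    center_crop(master_b, h, w)], axis=0)
print(combined.shape)  # (2822, 3008)
```

In practice you'd let the registration tool pick the reference frame and do sub-pixel alignment first; the crop only defines the common field.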
I know, I'm just asking because the result isn't always worth combining such factors with pixel size. I only have color sensors myself. Probably because with monochrome you don't get a finished color frame after stacking. If I stacked from two cameras like that, I'd get a square frame from the 533 or a rectangular one, depending on which frame is the reference, and I'd definitely have to crop. I'm interested in the nuances, that's why I'm asking. Thanks for the explanations; my goals are purely educational.
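And for the "how do you combine the channels" question above: the combination itself is just stacking three aligned mono masters into one RGB cube. A rough sketch, assuming the R, G, and B masters are already aligned and cropped to the same size (synthetic arrays stand in for real FITS data, and the min-max normalization is a crude stand-in for proper color calibration):

```python
import numpy as np

def combine_rgb(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Stack three aligned mono channel masters into an H x W x 3 RGB cube.
    Each channel is normalized to [0, 1] so unequal per-filter exposure
    doesn't skew the balance."""
    channels = []
    for ch in (r, g, b):
        ch = ch.astype(np.float64)
        lo, hi = ch.min(), ch.max()
        channels.append((ch - lo) / (hi - lo) if hi > lo else np.zeros_like(ch))
    return np.dstack(channels)

# Synthetic stand-ins for three stacked channel masters
# (in a real workflow these would be loaded from FITS files).
rng = np.random.default_rng(0)
r, g, b = (rng.random((2822, 3008)) for _ in range(3))
rgb = combine_rgb(r, g, b)
print(rgb.shape)  # (2822, 3008, 3)
```

Real workflows replace the per-channel normalization with photometric color calibration, but the shape of the operation is the same.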