Monochrome vs. OSC CCD cameras: which is right for you?

So you have decided to move up from a DSLR or other regular photography camera to a CCD, but now face the question of which variety to get. You have heard that some CCDs are just like a DSLR in that they shoot a color image, while others shoot in monochrome and require color filters to get a color image. What to do?

Start by understanding that every camera records in monochrome, yes, even your DSLR and one shot color CCDs, and yes, even a film camera!

Above you see a representation of a Bayer matrix, used in many one shot color CCDs. The gray squares are the actual sensors in the camera, called photosites; each colored square (marked with an R for red, G for green, or B for blue) is a filter on top of a photosite. (Note that all photosites are covered by colored filters; some are removed in this image for demonstration purposes.) The camera combines these to form a single color, so the output you see is in full color. Every four-pixel square (one red, one blue and two green, because the human eye is most sensitive to green in normal daylight) is combined using a complex math formula to create one colored area with four pixels of detail.
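To make that 2x2 combination concrete, here is a minimal sketch in NumPy. This is the simple "superpixel" version of the idea, not the fancier interpolation real cameras actually use, and the RGGB layout and function name are my own illustration:

```python
import numpy as np

def superpixel_demosaic(raw):
    """Collapse each 2x2 RGGB Bayer cell into one color pixel.

    `raw` is a 2D array of photosite values laid out in repeating cells:
        R G
        G B
    Real cameras interpolate (debayer) to keep full resolution; this
    'superpixel' method just shows the underlying combination.
    """
    r = raw[0::2, 0::2]                            # top-left of each cell
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average the two greens
    b = raw[1::2, 1::2]                            # bottom-right of each cell
    return np.dstack([r, g, b])                    # (H/2, W/2, 3) color image

# A 4x4 patch of photosites becomes a 2x2 color image: a 4x drop.
raw = np.arange(16, dtype=float).reshape(4, 4)
rgb = superpixel_demosaic(raw)
print(rgb.shape)  # (2, 2, 3)
```

Note how the output has one quarter as many pixels as the input; that trade-off comes up again below when we talk about resolution.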

You can manually do the exact same thing with a monochrome camera and three colored filters:

The four images above are the three color channels and then the final combined image. This is how monochrome imagers create color images and, of course, how your one shot color camera works internally. This is called RGB (easy!).
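The combining step itself is nothing more than stacking the three monochrome frames as channels. A minimal sketch in NumPy, where tiny synthetic frames stand in for real calibrated exposures:

```python
import numpy as np

# Three monochrome exposures shot through red, green and blue filters.
# These tiny uniform frames stand in for real calibrated images.
red   = np.full((2, 2), 0.9)
green = np.full((2, 2), 0.5)
blue  = np.full((2, 2), 0.2)

# Stacking the three monochrome channels yields one full-color image.
rgb = np.dstack([red, green, blue])
print(rgb.shape)  # (2, 2, 3)
```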

You do not necessarily have to shoot red, green and blue filters to use this technique. This is also how people shoot "narrowband" using Ha, SII and OIII filters, among others. They use a monochrome camera (or in my hard-headed case, a DSLR) and shoot one set using the Ha filter, one set using the SII and another set using the OIII, then combine them on the green, red and blue channels respectively (for "Hubble palette" images). You can mix and match colors: shoot one through a regular colored filter, another through a narrowband filter, and a third through no filter at all, then combine them. You can even combine MORE than three colors by adding new channels! While there are no rules, I suggest you start with standard RGB and/or Hubble palette narrowband to get a feel for things and then move on.
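The Hubble palette mapping described above is just a different channel order. A sketch, assuming three already-aligned narrowband frames (the function name and synthetic data are mine):

```python
import numpy as np

def hubble_palette(ha, sii, oiii):
    """Classic 'SHO' mapping: SII -> red, Ha -> green, OIII -> blue."""
    return np.dstack([sii, ha, oiii])

# Synthetic stand-ins for real narrowband frames.
ha   = np.full((2, 2), 0.8)   # hydrogen-alpha
sii  = np.full((2, 2), 0.3)   # sulfur II
oiii = np.full((2, 2), 0.5)   # oxygen III

sho = hubble_palette(ha, sii, oiii)
print(sho[0, 0])  # SII in red, Ha in green, OIII in blue
```

Swapping which frame lands on which channel is exactly the "mix and match" freedom mentioned above.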

One reason all this is important is resolution. If you take a look back at the first figure in this post you will notice that each photosite, or pixel, records one color. To make a real color image, we just learned, you need three colors: red, green and blue. So how does that relate to resolution in the camera?

A color camera, whether CCD, DSLR or point-and-shoot, works the exact same way. The camera takes a square of one red pixel, one blue pixel and two green pixels and creates one color pixel from them. Basically this turns your 10MP camera into a 2.5MP camera (10 divided by 4) when it comes to color, yet it retains the 10MP luminosity. Stripping away the techno-babble, this means your image has the black-and-white resolution of 10MP (luminosity) but the color resolution of 2.5MP. Said another way, the camera takes a 2.5MP color image and overlays that color (not the detail, just the colors) on top of a 10MP image.

I know this is a hard concept to visualize, so let's do one more analogy. Take two images, one 2.5MP in size and one 10MP in size. Convert the 10MP to grayscale (sometimes called black and white, though it actually has all the gray shades as well) and print them out at the same size: the 2.5MP on tracing paper in full color, the 10MP on regular paper in monochrome. Now overlay the 10MP with the 2.5MP and see the results. Note that the edges in the 2.5MP image will be very jagged compared to the 10MP, so the color will not line up quite right with all the edges. This causes some blurring along the edges, and your objects will not be nearly as sharp and well defined.
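The tracing-paper experiment can be simulated directly. A sketch, assuming a full-resolution luminance frame and a quarter-resolution color layer; the blocky `np.repeat` upscale is what produces the jagged color edges described above:

```python
import numpy as np

# "10MP" full-resolution luminance and "2.5MP" quarter-resolution color,
# scaled down to toy sizes (4x4 and 2x2) so the math is easy to follow.
luma  = np.linspace(0.0, 1.0, 16).reshape(4, 4)
color = np.arange(12, dtype=float).reshape(2, 2, 3) / 11.0

# Upscale the color by pixel repetition: every coarse color pixel now
# covers a 2x2 block of fine luminance pixels, so color edges stay blocky.
color_up = np.repeat(np.repeat(color, 2, axis=0), 2, axis=1)

# Overlay: color comes from the coarse layer, detail from the fine one
# (roughly what LRGB combining does).
overlay = color_up * luma[:, :, None]
print(overlay.shape)  # (4, 4, 3)
```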

Enough with analogies, let’s see what that looks like:

The image on the left is a 300-pixel-wide crop of an image of NGC 2244 in monochrome; the image on the right is a 75-pixel color crop stretched over the 300-pixel monochrome image at 50% opacity.

This fairly accurately simulates the difference between two cameras, one monochrome and one color, with the same megapixel sensor. Notice how much sharper and clearer the monochrome image is.

So what the heck does this mean? Simply stated this means that a monochrome camera will always have better detail than a color camera if they are both rated at the same number of pixels or resolution.

If it sounds like all the advantages are with the monochrome, you would not be far from the truth. You will always be able to get better images with a monochrome CCD, period. The advantage of a one shot color camera, and it is a big one, is time.

With a monochrome CCD, if you want to capture a color image and you need about one hour of capture time per channel, you need at least three sets of exposures and preferably four (red, green, blue, and luminance). This means four hours of capturing. If you want to do the same thing with a one shot color, one hour is all you need, assuming it has the same sensitivity. For four hours that may not be that big of a deal, but some images I have are made up of twenty or more hours using a one shot color!

Now add processing time to your shooting time. Images from a one shot color are generally faster to process because they come out already combined into a color image. Monochrome images require you to calibrate and combine the separate frames to create a color image you can then work with; the one shot color takes all of that work out of the equation.

So it basically comes down to this: if you are short on time or want an easier time of it, get a one shot color CCD; if you have plenty of time and don't mind working harder to get a superior image, go for the monochrome CCD.

 

