Data Recovery For Normal People is my new book, which has just been released on Amazon in both print and Kindle editions.
Information technology is a field that is constantly on the move, sometimes at a speed that is dizzying and difficult to keep pace with. Data recovery in particular can be one of the more complex problems you might encounter, and keeping up with the latest trends and information could mean reading endless tomes, watching hours of videos or trawling through myriad blogs. The sheer amount of information is often overwhelming and confusing.
Data Recovery for Normal People is a new book which aims to make this process a lot simpler. Designed both for beginners who have little knowledge of technical issues and for those who may own their own computing business and want to learn more, it delivers what you need to know in 9 parts, covering all the essentials, such as:
How data is lost
Understanding data storage
Recovering deleted or corrupted files
How to prevent future data losses
Encryption and data destruction
And a whole lot more
You will also find explanations behind the hardware and software, helping you to understand why some data can be recovered and some cannot.
Data Recovery for Normal People also provides a foundation in file storage in general. It is a comprehensive and important read for anyone who wants to understand a bit more about the problems that data loss can bring, and whether you are a novice or a budding enthusiast this is one book you cannot afford to miss.
Get your copy today. Use it as a how-to manual for everyday issues, or as a handy guide to be picked up and studied when problems occur.
Available from Amazon in both Kindle and print editions.
Last night was the brightest supermoon in seventy years, or so they say. Unfortunately I was not doing anything special for the occasion, although I did get to spend a few minutes out at the observatory and take a few pictures.
So what exactly is a supermoon? Let’s start answering that question with the fact that the moon’s orbit around the Earth is not a perfect circle but more of an ellipse. That means the moon is closer to the Earth at some times than at others. A supermoon is when you have a full moon (or a new moon, although you can’t really see it then) at the same time as the moon is at its closest point in its orbit of the Earth.
Put a little simpler, it is when the moon is full and close at the same time.
What this means to us astronomers and astrophotographers is that the moon appears bigger and brighter than at any other time.
This image is the moon rising above the observatory dome. Unfortunately, unless you are familiar with the SHSU observatory and what the moon typically looks like out there, you may not see that the moon does indeed look pretty big here. It was an impressive sight.
This next picture should get your attention however:
Most people would guess that this is the observatory dome right before sunrise, or sunset. They would be wrong. This was taken at 7:09pm CST facing east (the sun sets in the west, so behind me, not behind the dome). The light you see is the moon about 20 degrees or so above the horizon.
Yes, it was that bright. How bright? Reading a printed book with nothing but moonlight was not only possible, but quite easy.
When is the next supermoon?
If you missed it never fear, the next supermoon is scheduled to appear on December 3rd, 2017. It will not be quite as spectacular as this one however. If you want something this amazing you will have to wait until November 25th, 2034!
If you are even a little into astronomy or astrophotography you will hear people extol the virtues of “really dark skies”, but do dark skies really make that much of a difference? The short answer is yes, dark skies are crucial; what follows is a somewhat longer answer 🙂
Dark skies map of the US
I planned a trip from where I live in Huntsville to a little town in southwest Texas named Terlingua. Terlingua sits pretty much right between Big Bend Ranch State Park and Big Bend National Park, just a few miles from the border with Mexico. It is home to a real 1800s mining ghost town and 58 residents as of the last census. That is not an error: fifty-eight people live there. There isn’t even a gas station in town; you have to drive five miles up the road to Study Butte for that, and you’d better do it before 9PM or they will be closed. This area is home to some of the darkest skies in the nation, as you can see from the dark sky map above. It is one of the few places in Texas with no light pollution.
I picked Terlingua because I had seen some amazing pictures from photographer Lance Keimig from this area which I wanted to try my hand at. His night photography with light painting (where you add light to an object in the scene and have that mix with the natural light, such as from the stars or moon) was just fascinating. There was just no way I could do that kind of work with the light pollution around here, much less without that kind of cool scenery.
In picking the date I needed to maximize the dark skies and that meant a new moon, the Milky Way up somewhere in the middle of the night, good weather and if I could get other objects in the picture as well, that would be a bonus. The first weekend in June looked good as it had the new moon, the Milky Way would be up high at about 1am, and both Mars and Saturn would be close enough to the Milky Way to be in the shot. The weather, as anyone from Texas will tell you, could be anything.
The ten hour drive was typical until we got a hundred miles or so past Austin, when things began to change. Trees started to get much smaller, grass started to disappear, larger and larger hills appeared and everything started to get rocky and sandy. Slowly, cacti started to replace shrubs.
We arrived on a Friday afternoon, checked into our motel, ate dinner in the motel’s restaurant (where they had amazing Mexican food cooked by a guy from Ireland) and took a little driving tour of the ghost town (which was less than a quarter mile down the road from the motel) to find suitable places to shoot from. Finally we took a nap. Getting up at midnight would have been hard any other time, but I was truly excited to try my hand at this. I had never been in these kinds of dark skies, had rarely shot anything at night other than astrophotography, had never tried light painting, and had never gotten anything resembling a good Milky Way shot.
One of my shots the first night out, cropped but otherwise the JPG right out of the camera.
That first night I did a lot of shooting, learned a lot and made a lot of mistakes, and as I was heading back to the motel I noticed an old rusty car with a light in front of it. It looked interesting, so I stopped and took a closer look. The light was one of those you stick in the ground that charges from a solar cell during the day and turns on automatically at night. This light was just lying there under a piece of plastic pipe, not stuck in the ground as it normally would be. I thought about moving the light or covering it with a blanket, but my test shot showed me something interesting: the light almost made it look like the headlights on the old car were on and shining on the ground.
I had already determined that I could shoot about 30 seconds without getting too much star trailing using my 10mm lens and D7000 camera at ISO 3200. Balancing the existing light and adding just the right amount of light painting with my headlamp on low to the passenger side of the car was the trick. After a few test shots to get the lighting right, and the focus (you have to focus manually for these types of shots), I was happy enough to start clicking off real frames. This was about the third try, and as soon as I saw it on the screen I knew that I wasn’t going to get any better, so I packed up and went in for the night.
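That 30 second figure lines up with a common rule of thumb for untracked star shots, often called the “500 rule”. This is a minimal sketch of it, assuming the D7000’s 1.5x APS-C crop factor (the rule itself is an approximation, not something exact):

```python
def max_exposure_seconds(focal_length_mm, crop_factor=1.0, rule=500):
    """Approximate longest exposure (seconds) before stars visibly trail."""
    return rule / (focal_length_mm * crop_factor)

# A 10mm lens on a 1.5x crop body: 500 / (10 * 1.5) = ~33 seconds,
# which matches the roughly 30 second exposures described above.
print(max_exposure_seconds(10, crop_factor=1.5))
```

Some photographers use 400 or 600 in place of 500 depending on how much trailing they tolerate; the structure of the calculation is the same.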
The image you see above is completely unedited other than a crop and resize. In fact I have my camera set to take “RAW + JPG Basic” and this is the basic JPG file, not a RAW conversion. I can’t wait until I have time to work with the RAW file. This will probably be the first print I ever do as a 20″x24″ metal print. Note the colors in the sky, not just the single stars, but in the Milky Way. How about the amount of structure and detail? All of this without editing at all, amazingly dark skies!
A shot from the second night out, unedited.
The second night out was just as amazing. I could walk out of my brightly lit motel room, look up and almost immediately see the Milky Way, even with a porch light three feet from my head on the right. Simply amazing. That really tells you how dark skies affect your view of celestial objects.
The image above was taken next to the ruins of one of the miner’s homes from the late 1800s. I did not light paint this one as the red glow from a nearby cabin light gave it exactly the right amount of ambient light. This was another 30 second exposure at ISO 3200.
If you can do this with nothing but a tripod and a consumer DSLR for the Milky Way, imagine what you can do with your astrophotography equipment and deep sky targets. I absolutely want to go back and see. Dark skies do make a huge difference.
If you want to see more images shot in these dark skies, and even some in daylight, I will soon be posting them on my photography website over at www.paperbirdimages.com so go take a look.
If you are into astrophotography for any length of time you will run across people cooling a camera sensor or read camera specifications for CCD cameras that include cooling information. Why would anyone want to cool a camera? What does it accomplish? Is it important?
Let’s start with what cooling a camera (actually the sensor) does.
Camera sensors are a grid of light detectors called photosites. These photosites react to light, producing a signal that increases with the amount of light that hits them. The resulting signal from the photosite to the computer in the camera can be the same for a bright light over a short exposure as for a dim light over a long exposure. There are, however, other differences.
One of the biggest enemies of astrophotography is image noise. Noise in a photograph is basically variations in the light and dark readings of the photosites. This causes artifacts to appear in the image that are not actually there. In astrophotography we tend to take lots of images, even different kinds of images, and then use sophisticated software to merge all of these together removing the noise and increasing the brightness and detail of the target we are shooting.
Many types of noise can be removed because certain types of noise change from image to image while the signal (the part of the image we want to keep) remains the same. The software then removes the parts of the image that vary and are considered noise (using complex mathematical algorithms that help average those parts of the image out).
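As a rough illustration of why averaging frames suppresses random noise while preserving the signal, here is a small simulation. The image, noise level and frame count are all made-up values, and real stacking software does far more than a plain mean:

```python
import numpy as np

rng = np.random.default_rng(42)

# A made-up constant "true" image; each simulated frame adds fresh random noise.
signal = np.full((100, 100), 50.0)

def noisy_frame():
    return signal + rng.normal(0, 10, signal.shape)

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(64)], axis=0)

# The noise (spread around the true value) drops by roughly sqrt(64) = 8x,
# while the signal (the mean level of 50) stays the same.
print(np.std(single - signal), np.std(stacked - signal))
```

The varying part averages toward zero as more frames are stacked; the constant part (the target) survives, which is exactly the separation described above.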
Noise can be caused by a variety of factors. Shot noise is the noise created every time the shutter is tripped. The act of powering up the sensor, opening the shutter and reading the sensor injects noise into the data stream. It stands to reason that the fewer shots, the less shot noise.
Thermal noise is caused by the sensor heating up and releasing electrons (as any heat source does). Unfortunately the photosites cannot distinguish between electrons released by light and electrons released by heat, so they measure both. The longer the exposure, the hotter the sensor gets (to a point, of course) and the higher the noise-to-data ratio, making it harder to extract the data from the noise.
When the data is read from a photosite it is then multiplied by a factor dependent on the ISO/ASA set on that camera. This multiplication includes all data the camera reads from the photosites including any shot noise and thermal noise.
Like all things, this is a balancing act. Longer exposures reduce shot and read noise, but substantially increase thermal noise. Shorter exposures with more cool down time between them can substantially decrease thermal noise, but greatly increase shot and read noise. So what to do?
What if we could eliminate or at least substantially reduce thermal noise from our equation? We can, by cooling a camera.
Virtually all modern CCD astronomy cameras include a cooler for the sensor chip. This allows you to cool the sensor by 10, 15 or even substantially more degrees below ambient. Generally, the cooler the sensor the better (like everything, of course, to a point). By cooling the chip in this manner you can use longer exposures without worrying about the chip heating up, thereby reducing thermal, shot and read noise all at the same time. With a DSLR we don’t have access to the chip, so we will try the next best thing: cooling the camera.
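A common rule of thumb (my assumption here, not a figure from this article) is that a sensor’s dark current roughly doubles for every ~6 °C of temperature rise, which is why even modest cooling pays off. A quick sketch:

```python
def dark_current_ratio(delta_temp_c, doubling_interval_c=6.0):
    """Rule-of-thumb dark current multiplier for a temperature change.

    Positive delta means warmer (more thermal noise), negative means cooler.
    The ~6 C doubling interval is a rough convention, not a measured value.
    """
    return 2 ** (delta_temp_c / doubling_interval_c)

# Cooling a sensor 18 C below its uncooled temperature (hypothetical numbers)
# cuts the thermal signal to about 1/8 of its original level.
print(1 / dark_current_ratio(18))
```

The exact doubling interval varies by sensor, but the exponential shape is the point: each additional step of cooling buys the same proportional reduction.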
Reducing these noise sources lowers what is called the noise floor. The noise floor is the level above which your data sits and below which the black of space falls. You need the noise floor to be lower than the level of the target you are shooting. The dimmer the object, the lower you need the noise floor.
There are ways of processing lots of images such that it lowers your noise floor, but if you start with a noise floor below the level of the target to begin with your data will look even better after processing.
So what if you shoot with a camera that isn’t cooled, such as a DSLR? Your first option is to shoot in the winter when the ambient temperature is low enough to help cool the sensor. If you shoot outside in a non-climate controlled area, this can substantially lower the temperature of the camera sensor, provided you live in an area where it gets cold at night. There can be a big difference between shooting at 80 degrees and shooting at 20 degrees. Do watch out for batteries, though: they do not last nearly as long at 20 degrees as at 80, so you are likely to run out of battery power several times on an all night shoot.
Another option is to have the camera modified to put a cooler inside. Yes, you can have this done to many off the shelf DSLRs, but normally you would not, as it makes the camera difficult to use for normal daytime photography. It is also not inexpensive. I would be hesitant to recommend this option and would instead steer you towards spending the money on a dedicated astronomy CCD with a cooler already built in.
Lastly you could build a contraption that covers the camera with what amounts to an ice cooler with a built in refrigerator. This method of cooling a camera has the advantage of not altering your camera, so you can continue to use it for other things. It is also fairly inexpensive, so you can try it and see what you think without having to devote a lot of money to something you may wind up not liking.
One problem with the last idea of cooling a camera is the added weight and power requirements. Adding a bunch of weight to your astrophotography rig, especially right on the end with the camera, can cause huge tracking problems. Even if you have a massive mount and balance the setup with the camera and cooler installed, the resulting wind surface can destroy images with even a minor amount of wind hitting it. It basically adds a huge sail to the tail end of your setup.
Another problem with cooling a camera such as a DSLR is how well it can be cooled. Since the cooling will be either external (through shooting at night, or through placing the camera in a cooler) or by means of an aftermarket solution that is, at best, a kludge never intended by the manufacturer, how good will the results be? Are those results worth the time, effort and/or money involved?
As we discussed earlier, cold batteries do not last long. When cooling a camera that uses a battery inside the body (as most DSLRs do), remember that your battery will not last nearly as long. You might opt for an external power adapter. Even if your camera does not normally accept an external power adapter, there are devices made for some cameras that replace the battery with an adapter that plugs into an outlet.
I have wanted for years to know whether cooling a camera makes a real world difference, so let’s actually run some tests and see what happens…
Stay tuned for future parts of this article on cooling a camera.
Pixel size, sensor size and many other factors seem to complicate our choices for cameras these days. Just when you thought cameras could not get any more complex with ISO range, well depth, and active/passive cooling I’m here to throw another wrench or several into the mix.
Let’s start with pixel size, or pixel measurement, which should really be called photosite size. The actual sensor on a digital camera is made up of light detectors called photosites. These photosites are what create the pixels in the image. Each photosite measures the light hitting it and generates a signal, in proportion to the amount of light, that is sent to the processor inside your camera.
fig 1: Illustration of how photosite size affects light collecting ability
As a general rule, the larger the photosite size the more light it can gather simply because the larger area will be struck by more photons. More light striking the sensor means a higher signal output by each photosite. This in turn means it will require less amplification (a lower ISO) to achieve the same results as a camera with smaller photosites. You could also say that at the same ISO the camera with the larger photosites could use a faster exposure.
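To put numbers on that rule, the light gathered scales with photosite area, so it grows with the square of the photosite pitch. A quick sketch with hypothetical sizes:

```python
def relative_signal(pitch_um_a, pitch_um_b):
    """How much more light photosite A collects than photosite B,
    all else (exposure, lens, scene) being equal. Area goes as pitch squared."""
    return (pitch_um_a / pitch_um_b) ** 2

# Hypothetical example: a 6 um photosite vs a 4 um photosite.
# The larger one collects (6/4)^2 = 2.25x the light per exposure,
# so it needs less amplification (lower ISO) for the same output.
print(relative_signal(6, 4))
```

The specific pitches are invented for illustration; the squared relationship is the takeaway.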
Since you can collect more light with a larger photosite size that also means that you have a higher signal to noise ratio (SNR). This is particularly important in astrophotography because we are always shooting an extremely dark object (nebula etc) against a totally dark background (black of space). Since there is so little contrast or difference between the object and the background, it is important to have the highest SNR possible. The reasoning is that when there is a lot of noise, it is much more difficult to extract the signal.
Think of it as audio. When you are at a live concert the band is the noise, and trying to talk to the person next to you is the signal. This is very difficult to do at a heavy metal concert (high noise) but far easier at a concert featuring an unplugged classical guitarist (low noise). Since in astrophotography you are always shooting long exposures (compared to normal daylight photography) and using high ISO values when you can, there is a lot of noise injected into the images.
A larger photosite will have lower noise primarily because the accuracy of the measurement from a light sensor is proportional to the amount of light it collects. In other words, if a sensor collects one photon over a one second exposure it will be dramatically less accurate than if it collects one hundred photons. This occurs because every photosite has an amount of noise that happens when the sensor is read (read noise) and a certain amount of noise per exposure (shot noise). This amount of noise does not substantially change from a one second exposure to a two second exposure, whereas the number of photons captured doubles. More photons collected means a lower amount of noise in relation to the number of photons.
Now to be technically correct, the amount of noise does change as the exposure time changes, but it does so far less than the increase in the number of photons collected. In fact, the noise is the square root of the signal (or the signal is the noise squared, whichever is easier for you to remember). If the signal is 900 photons, then the noise is 30, which gives you an SNR of 30. Double the incoming light to get 1800 photons and you get a noise of roughly 42.4 and an SNR of about 42.4. By doubling the light collected in this example, you increased the SNR, which makes it easier to pull really dark objects out of the muck.
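The arithmetic in that paragraph can be sketched directly:

```python
import math

def snr(photons):
    """Photon-limited SNR: signal divided by its square-root noise.
    Mathematically this simplifies to sqrt(photons)."""
    return photons / math.sqrt(photons)

# 900 photons -> noise 30, SNR 30; doubling to 1800 photons
# raises the SNR by sqrt(2) to roughly 42.4.
print(snr(900))   # 30.0
print(snr(1800))
```

Note that doubling the light does not double the SNR; it multiplies it by the square root of two, which is why gathering more light gives diminishing (but still real) returns.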
The next effect of a larger photosite is on dynamic range. Dynamic range is basically the difference between the darkest a sensor can record and the lightest. In a previous article I discussed full well depth as the maximum amount of light a photosite can store, and that is an important player in dynamic range.
fig 2: A six part gray scale representing a low dynamic range
In figure 2 above the numbers across the top represent percentages of saturation of a photosite. Since this has such a small dynamic range everything from about 19% through 33% all reads as the same color. This is not what you want in astrophotography where the nebula is almost as dark as empty space.
fig 3: A twelve part gray scale representing a higher dynamic range
In figure 3 we see that there is far more definition so there are two different shades for objects in the same range of 19% through 33%. The higher the number of shades on this chart, the higher the dynamic range. More dynamic range makes it easier to separate nebulas and empty space.
Two primary things affect dynamic range: the ISO and the well depth. Since larger photosites and larger sensors typically have higher full well capacities and also require lower ISO values for a given exposure, they tend to have far superior dynamic range.
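One common engineering approximation (an assumption on my part, not something from this article) expresses dynamic range in stops as the base-2 logarithm of full well capacity divided by read noise. A sketch with hypothetical sensor values:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Approximate dynamic range in stops: log2(full well / read noise).
    Both values are in electrons; real sensor DR also depends on other factors."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensors: a smaller photosite vs a larger one,
# both with the same read noise. The deeper well wins more stops.
print(dynamic_range_stops(25000, 8))   # ~11.6 stops
print(dynamic_range_stops(60000, 8))   # ~12.9 stops
```

This is why the well depth and the ISO (which effectively trades away well capacity through amplification) dominate the dynamic range figures shown in charts like figure 4.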
fig 4: Dynamic range and SNR by ISO for a Nikon D7000 DSLR
In figure 4 we see how the dynamic range (DR) and SNR both drop as the ISO increases. The D7000 camera used above is what is typically called a crop sensor (APS C sensor size) camera which typically has substantially smaller photosites compared to a full frame sensor camera. To get a general idea you could say that the dynamic range of a crop sensor camera at ISO 800 is 10.75 and for a full frame camera is 11.75 at the same ISO. While this is a massive generalization (and really wrong 99.9% of the time) it does give you the right idea.
This sounds great! Are there any down sides? Maybe.
One argument for smaller photosites is that they capture more detail. This stands to reason, since the same amount of light would be spread across more photosites, and therefore more pixels. It would simply be a matter of more pixels in the same image meaning more detail. True enough.
The opposing view is that in most cases people do very little enlargement by cropping an existing photo. This means that a photo taken with a 20MP (megapixel, or million pixels) full frame camera would have the same detail as a photo taken with a 20MP crop sensor camera. Also true.
Another concern is that crop sensor cameras have a crop factor built in. This means if you shoot a full frame image with a 50mm lens and then put that same lens on a crop sensor camera what you wind up with is the photograph looking like it was shot with up to a 75mm lens. This is because the lens puts the same size projection of light at the sensor plane (where the sensor is) and if you have a smaller sensor, less of the image appears on that sensor. This makes the image appear to be zoomed in.
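The 50mm-to-75mm example above works out as a simple multiplication:

```python
def equivalent_focal_length(focal_length_mm, crop_factor):
    """Full-frame-equivalent field of view for a lens on a crop sensor.
    The lens itself is unchanged; the smaller sensor just sees less
    of the projected image circle, which looks like a longer lens."""
    return focal_length_mm * crop_factor

# The example from the text: a 50mm lens on a typical 1.5x APS-C body.
print(equivalent_focal_length(50, 1.5))  # 75.0
```

The 1.5x factor here is the common APS-C value (some systems use 1.6x, hence the "up to" 75mm in the text); the same multiplication tells you how an object framed on a full frame sensor will overflow a crop sensor at the telescope.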
While the image being zoomed in has no real bearing on the image quality, it can really throw a wrench into things when you bolt that camera onto a telescope. An object that fit perfectly on your full frame camera’s sensor now spills over the edges on the new crop sensor camera you just bought.
The last and probably most important factor is that larger photosites or a larger sensor typically cost more money. That money could instead be spent on something with a better cooling system or some other feature. Only you can decide where to spend your finite resources and which features are more important than others.
So what does sensor size have to do with anything other than maybe having a crop factor? A larger sensor simply has more room for larger photosites, or a higher number of smaller ones as compared to smaller sensors.
I hope you enjoyed my article on pixel size and sensor size!
This morning I viewed five planets aligned, plus the moon, in the morning sky. It was a simply amazing sight. I had to get up really early to get out to the dark site so that I could spend a little time imaging, and a lot of time just admiring the view, and still make it to work in the morning.
Screenshot from Stellarium showing the position of the five planets at approximately the time I was viewing them
Visible, in order from west to east along the ecliptic, were Mercury, Venus, the Moon (yes, not a planet, but still a wonderful addition to the lineup), Saturn, Mars and Jupiter. What you don’t hear about is that Pluto is actually there as well, just to the left of Venus in the screenshot above. I was not into astronomy the last time there was a five planet alignment, back in late 2004 and early 2005, and that one was a little different, with the order being Mercury, Venus, Mars, Jupiter and then Saturn. Depending on when you observed back then, you could get the moon in there as well. I was not about to miss the 2016 planet alignment!
The moon, Venus and Mercury on the morning of the 5th.
It was a cold morning, just below freezing when I arrived at the dark site. The air was calm and clear. Once I set up my equipment and let my eyes adjust to the darkness the planets just jumped out of the sky. The moon, Arcturus and Vega also begged for attention. Even with five planets in the sky the real action for me was in the rising Sagittarius which contained Mercury, Venus and the moon.
I had of course seen all five of these planets before, but Mercury only once, and I had never imaged it. It is far too small and bright for my equipment to do anything but render Mercury as a bright point of light just like Vega, but in a wider field with its neighbors, it was spectacular.
Venus is the most difficult object I have imaged, and for my equipment I think I got a pretty amazing image. After many sessions, tons of attempts, and more hours than I care to admit, I finally got an image of Venus which showed something besides a bright dot. In the image below you can clearly see the shading of the clouds that cover the planet. Amazing.
My attempt at Venus, click to enlarge and see the cloud shading
This image required the use of a video camera instead of my typical DSLR or CCD cameras. Stacking hundreds of images is the only way I could get something this clear of something this small. Even with this setup, Mercury is far too small to pull this off.
Mercury is the most difficult of these five planets to image and my next chance to image it with any real meaning will be the transit on May 9th, 2016. May is a terrible weather month but I will be keeping my fingers crossed. I got lucky enough with the Venus transit so I guess it could happen again.
I hope you enjoyed my article on the five planets!
What is full well capacity? The full well capacity of a camera (sometimes called pixel well depth or just well depth) is a measurement of the amount of light a photosite can record before becoming saturated, that is, no longer able to collect any more. (A photosite is the part of the sensor that collects the light for a single pixel on a monochrome camera, or that provides the luminance value for a single pixel on a color camera.)
Let’s explain this a little more in depth before we move on. This subject can get a little overwhelming for those who are not familiar with the parts and terminology, so I want to take things slow. We will start with a discussion of what a camera sensor is and how it works before we get into well depth or full well capacity.
The sensor in a digital camera is a collection of photosites arranged in a grid. These are sensors that measure the amount of light that strikes them. In simple terms, the more light that strikes the photosite, the higher the voltage, or longer the pulse width the photosite will output. The basic thing to remember is that more light means more response from the photosite.
At some point, however, the photosite becomes saturated, which means it has had all it can take. At this point you can continue applying light but the photosite will not register it any more. Think of this as charging a battery: once it is fully charged, continuing to charge it will not add any additional capacity. The photosite has reached its full well capacity, or its maximum well depth.
For monochrome cameras one photosite equals one pixel and you are done. For color cameras it is a little more complicated. (Color cameras are called one shot color in astrophotography, because you can also shoot color with a monochrome camera by taking three images, one each through a red, a green and a blue filter, and then combining them.) A color camera uses each photosite for a pixel’s luminance value (luminance is how bright a pixel is) and then takes a group of four photosites covered by a Bayer matrix to work out what each pixel’s color is.
A representation of a Bayer matrix or Bayer filter
The Bayer matrix is a filter, or set of filters, that covers the photosites with colors: two green, one red and one blue for each block. It uses two greens because human vision is most sensitive to green, so images should be weighted that way as well. Each of the photosites in the block has its own luminance, but the colors from all four are mathematically combined to give a color to each one.
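Here is a toy sketch of one RGGB block. The raw values are hypothetical, and real demosaicing algorithms interpolate across neighboring blocks and are far more sophisticated than this per-block average:

```python
# One 2x2 RGGB Bayer block. Each photosite records only luminance
# through its color filter; readings below are made up (0-255 scale).
#   R   G
#   G   B
block = {"R": 120, "G1": 200, "G2": 196, "B": 80}

# A deliberately simple per-block color: average the two greens to
# match the 2-green / 1-red / 1-blue layout of the filter.
color = (block["R"], (block["G1"] + block["G2"]) / 2, block["B"])
print(color)  # (120, 198.0, 80)
```

The point of the sketch is the ratio: four luminance samples go in, but only one full color comes out, which is exactly the 10MP-luminance versus 2.5MP-color trade discussed next.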
This is why a 10MP monochrome camera is more accurate than a 10MP color camera: the monochrome camera actually captures 10MP of data for both luminance and color (when shooting with red, green and blue filters), whereas a one shot color camera captures 10MP of luminance data and only 2.5MP of color data. The monochrome camera does require three times the number of images to make a color image, however.
The capacity of the photosite to record light is called the full well capacity. The higher the full well capacity, the brighter the light you can record. You are probably wondering why we need to worry about bright lights and full well capacity if we are recording dim objects like nebulae. Glad you asked!
If all you cared about was the really dim nebula showing up in your image, full well capacity would not be that important at all. Unfortunately, the majority of the time we have very bright objects (comparatively) in the image too; they are called stars. The brightness difference between the stars and the nebula is huge, so we need the capability to record both without the star appearing too bright or the nebula too dim.
At this point you may say you don’t care how bright the star appears; once it turns white in the image it is white and can’t get any whiter. This of course assumes you do not care about having star colors in your color image, or that you are shooting monochrome. In those cases full well capacity would seem to matter less.
There is another problem that happens once the full well capacity is exceeded, called bleed or bloat. Think about filling a glass with water. What happens when you overfill the glass? That’s right, it spills over onto the counter. The same thing happens with photosites: the excess light can in effect (not literally) spill over, or bleed, into the surrounding photosites. This causes the stars to bloat and seem to get larger.
We have all seen this in images. All stars are point objects; they appear as a single point of light. Sirius appears exactly the same size as Polaris, just much brighter. In many images, however, some stars appear much larger than others. This is where the full well capacity has been exceeded.
What is happening when you pass the full well capacity is that the saturated photosite’s excess charge spills over, raising the response of the photosites surrounding it as well. This spreads the response out across multiple photosites, which makes the star look larger than it should.
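Here is a toy simulation of that spill-over: one photosite is driven far past a hypothetical full well and the excess is spread to its neighbors, making the “star” cover more photosites. The spreading model (excess shared evenly among eight neighbors) is invented for illustration; real sensors do not distribute charge this neatly:

```python
import numpy as np

FULL_WELL = 100.0  # hypothetical capacity, arbitrary units

# Incoming light: one very bright "star" photosite among dim neighbors.
light = np.array([[5.0,   5.0, 5.0],
                  [5.0, 400.0, 5.0],
                  [5.0,   5.0, 5.0]])

recorded = np.minimum(light, FULL_WELL)   # each well saturates at 100
excess = max(light[1, 1] - FULL_WELL, 0)  # 300 units of overflow

# Toy bloom model: spread the excess evenly over the 8 unsaturated
# neighbors, then clip again. The neighbors brighten from 5 to 42.5,
# so the single-point star now spans a 3x3 patch.
bloomed = recorded.copy()
bloomed += (excess / 8) * (light < FULL_WELL)
bloomed = np.minimum(bloomed, FULL_WELL)
print(bloomed)
```

Even this crude model reproduces the visible symptom: the star’s footprint grows, which is exactly the bloat described above.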
In astrophotography you want the black of space to be black, the star to be bright, colorful and appear as a point of light, and the nebula to appear bright and defined. The way this works is that we expose long enough to get the nebula to be nice and bright, while hopefully having enough well capacity to store all the light from the stars without exceeding the full well capacity and bleeding over.
The black of space is handled by the noise floor of the camera. This is outside the scope of this particular article but basically the lower the noise, the easier it is to capture smooth black space and separate that from the slightly less black nebula. The longer the exposure, the more light you get off the nebula without capturing anything from empty space. The noise floor is a mixture of several factors including things like camera read noise and shot noise.
So more exposure means more data from the nebula, but that can cause the stars to saturate the photosites exceeding their full well capacity. Where is the balance?
Setting your camera exposure is basically a matter of shooting as long as you can without exceeding the full well capacity of the sensor. If that does not give you enough exposure to capture any data from the nebula, then you will be forced to increase the exposure even while causing the stars to bloat.
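As a back-of-the-envelope sketch of that balance, the longest “safe” exposure is just the full well capacity divided by the rate at which the brightest star fills it. Both numbers below are hypothetical:

```python
def longest_safe_exposure(full_well_e, brightest_star_e_per_s):
    """Longest exposure (seconds) before the brightest star in the frame
    saturates its photosite, assuming a constant arrival rate."""
    return full_well_e / brightest_star_e_per_s

# Hypothetical values: a 40,000 electron well and a star delivering
# 2,000 electrons per second to its photosite.
print(longest_safe_exposure(40000, 2000))  # 20.0 seconds
```

If that exposure still records nothing of the nebula, you are in the situation the paragraph describes: forced to expose longer and accept bloated stars, or to buy a deeper well.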
This is why a higher full well capacity is something astrophotographers strive to have. Unfortunately cameras with a low noise floor, high resolution, and high full well capacity are very expensive. This is where the balance of money versus capabilities comes in. Get the best you can afford and do the best you can.
Remember when we discussed color cameras and the Bayer matrix? Remember it seemed a little out of place and not really relevant to the discussion? It was, and here is why it is important to full well capacity.
When shooting monochrome, since each photosite creates one pixel, bloating only really affects one thing, luminance. Color stands on its own because you are just measuring the luminance of that one photosite through a colored filter.
On the other hand, if a color camera has a photosite driven over its full well capacity, it too bleeds luminance data to the surrounding photosites, but the color data changes too. For example, if you measure the color using the scale of 0 being no light and 255 being full, and then the three colors being broken out as red, blue and green you come out with something that could look like 0-255-0 for an object that is pure blue.
If the star is a point light source and it is over a blue photosite on a one shot color camera, once the photosite reaches its full well capacity and a reading of 0-255-0 and is then still exposed to light, the reading could spread to look more like 10-255-20 (remember there are two greens for every one blue and one red). This can continue on to 100-255-200 and beyond. As you probably guessed, this dramatically changes the color that is presented in the final image.
All of this so far has been theoretical as real life presents unique challenges. For example a star is never a point light source on Earth. Even on the highest mountains in the clearest weather there is always pollution, water vapor and more in the atmosphere distorting your vision. This makes it appear as if Sirius is indeed larger than Polaris. If you were in orbit, you would have a much harder time seeing a size difference between the two.
So is this all academic? Not really, we still strive for smaller colored stars, brighter nebula and blacker smoother space. The full well capacity of your camera is how you get there.