22

About two years ago I was at a spot with a beautiful, moonless night sky, so I tried to take some pictures of it.

I wasn't well equipped back then: I had a 35mm f/1.8 lens on a Nikon D50, and I didn't shoot RAW. I had to find infinity focus manually, as the camera could not do it.

These were shot at f/2.5 and ISO 1600, and I cropped them using two stars as a guide.

You can see that the colors of the same stars vary a lot between the two shots. At the bottom left, a star goes from magenta to purple. I think this goes well beyond what ISO noise can explain.

What causes the randomness in colors in those pictures?

[Image: first shot of the night sky]


[Image: second shot of the night sky]

Individual star close-ups. Remember that these are taken from a JPEG file:

[Images: three close-up crops of individual stars]

Tomáš Zato
  • 4
    Are you sure they are "fake"? The farther a star is from you, the more its light shifts toward the red side of the spectrum. The differences in colors may or may not be caused by one factor; it could be physics, or the camera, or both. Or it may not be a star at all. – Alaska Man Aug 16 '20 at 17:07
  • 9
    @AlaskaMan The OP's point is that some stars have a very different color between the two shots. – xenoid Aug 16 '20 at 17:17
  • 3
    @xenoid OK, but that does not discount my point; my comment is still food for thought. The characteristics of the light may change depending on factors other than the camera, i.e., the color of the light may be affected by the physics of space from one moment to another. "Twinkle, twinkle, little star, How I wonder what you are!" – Alaska Man Aug 16 '20 at 18:31
  • 8
    @AlaskaMan, Sounds like you are talking about Hubble's Law, which says that the light from distant galaxies is red-shifted, on average, by an amount proportional to their distance from our own galaxy. It doesn't apply to stars within our own galaxy. (i.e., does not apply to any of the stars that you can see in the sky with your naked eye.) But if you stare at the night sky, you probably will notice that some stars actually are different colors from others. https://en.wikipedia.org/wiki/Stellar_classification – Solomon Slow Aug 16 '20 at 22:25
  • 12
    Red-shift doesn't really come into play with stars that are visible in the night sky. The shift occurs when an object is receding at a noticeable percentage of the speed of light. Spectral shift can be detected using very sensitive instruments ... but not with a typical camera. Stars in our own galaxy aren't approaching or receding at high enough speeds to be noticed by a typical camera. – Tim Campbell Aug 17 '20 at 02:11
  • 3
    "Twinkling" of stars is an effect named "atmospheric scintillation" ... this mostly results in distortion. If the stars were very low in the sky (e.g. perhaps less than 20* above the horizon but on extreme nights it could be more like 30° and rarely 40°) you might notice color-shift as they twinkle. – Tim Campbell Aug 17 '20 at 02:15
  • 6
    @AlaskaMan: Green stars do not occur. – Joshua Aug 17 '20 at 18:35
  • 1
    @Joshua (Not even an) exception that proves your rule: Was the "green star" event in NGC 3314 ever figured out or named? :-) (Also see "Why don't we see purple stars?") Further supporting there not being any "green" blackbodies: "Why do metals only glow red, yellow and white and not through the full range of the spectrum?" But there are green comets! – uhoh Aug 18 '20 at 01:30
  • 1
    Wikipedia has a pretty graph of the colors reached by black-body radiation: https://en.wikipedia.org/wiki/Black-body_radiation
    Should a star deviate from this graph, it's caused by errors or by some really exotic surrounding matter, like a cloud of green gas. – Tomáš Zato Aug 18 '20 at 10:23
  • @TomášZato-ReinstateMonica Just for research purposes, could you take an exposure long enough to intentionally blur the stars? That would allow us the opportunity to tell the difference between stars and hot pixels, which do not move with the stars when the camera remains stationary during exposure. – Michael C Aug 19 '20 at 00:50
  • Sadly, this was two years ago, up in the mountains. Right now I'm in the capital, where light pollution makes it impossible to see the stars. – Tomáš Zato Aug 19 '20 at 09:58

4 Answers

33

If you are focusing well, a star is likely not to occupy much more than a single pixel. But the pixels are covered with a regular grid of color filters: the Bayer filter, typically an RGGB arrangement repeated in 2×2 cells. A demosaicing algorithm then tries to reconstruct full RGB information for every pixel by exploiting the redundancy/correlation of luminosity between neighboring pixels. If a star lights only a single pixel, there is no such redundancy or correlation to work with for estimating its color.
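
To make the single-pixel problem concrete, here is a minimal toy sketch (plain NumPy/SciPy with naive bilinear interpolation, not the D50's actual demosaicing pipeline) of what happens when a "white" star that hits exactly one photosite is demosaiced:

    import numpy as np
    from scipy.signal import convolve2d

    def toy_bilinear_demosaic(raw):
        """Toy bilinear demosaic of an RGGB Bayer mosaic (not a camera's real algorithm)."""
        h, w = raw.shape
        # Which photosite samples which channel, for an RGGB 2x2 tile.
        masks = np.zeros((h, w, 3), dtype=bool)
        masks[0::2, 0::2, 0] = True   # red
        masks[0::2, 1::2, 1] = True   # green
        masks[1::2, 0::2, 1] = True   # green
        masks[1::2, 1::2, 2] = True   # blue
        # Weighted average of the available same-color samples around each pixel.
        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.50, 1.0, 0.50],
                           [0.25, 0.5, 0.25]])
        rgb = np.zeros((h, w, 3))
        for c in range(3):
            samples = np.where(masks[..., c], raw, 0.0)
            weights = masks[..., c].astype(float)
            num = convolve2d(samples, kernel, mode="same")
            den = convolve2d(weights, kernel, mode="same")
            rgb[..., c] = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        return rgb

    # The same "white" one-photosite star lands on a red, a green and a blue site:
    for r, c in [(4, 4), (4, 5), (5, 5)]:
        raw = np.zeros((8, 8))
        raw[r, c] = 1.0
        print((r, c), toy_bilinear_demosaic(raw)[r, c].round(2))
    # The output pixel is strongly red, green or blue respectively, even though the
    # star itself is "white": the color is decided by the filter it happened to hit.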

So if you want a good estimate of star colors, you need to defocus a bit so that each star gets a chance to touch more than a single pixel. You can also do that systematically by using diffraction, namely very small apertures. Ironically, it may also help if your sensor resolution is better than what your camera's optics can deliver.
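
To put rough numbers on the diffraction suggestion (the assumptions are mine: green light at 550 nm and the D50's roughly 7.9 µm pixel pitch), a quick back-of-the-envelope calculation:

    # Back-of-the-envelope Airy disk size vs. pixel pitch (approximate values only:
    # green light at 550 nm, and a ~7.9 um pixel pitch from a 23.7 mm wide,
    # 3008-pixel-wide sensor).
    WAVELENGTH_UM = 0.55
    PIXEL_PITCH_UM = 23.7e3 / 3008          # ~7.9 um

    for f_number in (2.5, 5.6, 8.0, 11.0, 16.0):
        airy_um = 2.44 * WAVELENGTH_UM * f_number   # diameter to the first minimum
        print(f"f/{f_number:>4}: Airy disk ~{airy_um:4.1f} um "
              f"({airy_um / PIXEL_PITCH_UM:.1f} px)")
    # At f/2.5 the diffraction spot is well under one pixel; only around f/8 and
    # beyond does it start to spread a star over neighboring photosites.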

You may also try recording raw images and then playing with various demosaicing algorithms: some may cope better with this inherently problematic situation (possibly by leaning toward a stronger default of guessing "white" in the absence of better information).
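
If you do have raw files, something along these lines lets you compare demosaicing algorithms on the same frame. This is only a sketch: it assumes the rawpy and Pillow packages, the file name is a placeholder, and the enum members should be checked against your installed rawpy version.

    import rawpy
    from PIL import Image

    # Render the same raw frame with a few different demosaicing algorithms,
    # keeping everything else fixed so only the demosaicing differs.
    for name in ("LINEAR", "VNG", "AHD"):
        with rawpy.imread("stars.nef") as raw:          # placeholder file name
            rgb = raw.postprocess(
                demosaic_algorithm=getattr(rawpy.DemosaicAlgorithm, name),
                use_camera_wb=True,     # identical white balance across runs
                no_auto_bright=True,    # avoid per-run brightness differences
            )
        Image.fromarray(rgb).save(f"stars_{name.lower()}.png")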

One thing also worth noting is that the color filters have broad passbands, and demosaicing algorithms tend to assume correlations between color channels because typical image content is reflective and shares a common illuminant. That assumption does not hold for a star picture, where every star has its own independent spectrum. This is one reason why more complex demosaicing algorithms, usually considered superior, can actually work worse in this situation: small differences in how a star aligns with the pixel grid can produce larger color variation than a simpler algorithm would.

  • 2
    Stars always occupy more than a single pixel (you would have to have an exceptionally low-resolution camera with massive pixels, so that they look more like tiles). This is because, while stars are so distant they technically qualify as a "point source" of light, that light travels as a wave, and the wave nature of light causes the stars to focus as an Airy Disk (named for Sir George Airy, the astronomer who discovered the effect). They always occupy several pixels on any modern camera with even modest sensor resolution. A Bayer matrix will not cause a color shift in stars. – Tim Campbell Aug 17 '20 at 02:18
  • 3
    @Tim Campbell: you are confusing the effects of light travel with the diffraction limit (the latter is what is creating the Airy Disk image on the sensor). At F2.5 (as the OP wrote), you are nowhere near the diffraction limit of your optical path so the Airy Disk is really small. The antialiasing filter will likely make more spread, but it's still such a low amount that the demosaicing does not get enough point spread to work well. And with regard to "modern camera with modest sensor resolution": the OP specifies a 6MP DX sensor camera. –  Aug 17 '20 at 06:30
  • 1
    @Tim Campbell: by the way, the 6MP DX size sensor is a CCD, so yes, the light sensitive areas do look like tiles. The camera's review on the dpreview site complains about a significant tendency to Moiré patterns, indicating that the anti-aliasing filter (if any) is likely to be not particularly strong in its effect. –  Aug 17 '20 at 10:15
  • 1
    The stars do not occupy a single pixel. Assuming the camera doesn't upscale too much, there are, in my opinion, enough raw pixels not to ruin the color too much. If this were a single-pixel thing, then statistically most stars would be green, wouldn't they? I added a close-up of one of the stars from the JPEG. – Tomáš Zato Aug 17 '20 at 10:49
  • 1
    @TomášZato-ReinstateMonica "statistically most stars would be green" would be before demosaicing. Now your closeups also indicate some non-trivial amount of chromatic aberration. Where its effects synchronize with the Bayer filter grid, demosaicing will get particularly messed up. That would be worst in red/blue (least represented and largest wavelength difference). The sampling grid for red/blue is actually half the density in X and Y orientation than the final results are. –  Aug 17 '20 at 11:09
  • 1
    I really wish I used RAW back then, that would make it easier for me to understand what happened. – Tomáš Zato Aug 17 '20 at 11:20
  • 1
    @TomášZato-ReinstateMonica Indeed, picking raw is a good investment in the future of your pictures, since all future developments in offline processing (including better sharpening and denoising algorithms) will become applicable to what your sensor captured. My current go-to camera is 15 years old with excellent optics, and there is rarely a point in using a JPEG image because of the quality difference (even though storage speed for raw is painful). With an 8-year-old camera I also use, the difference is much less pronounced. Yet. –  Aug 17 '20 at 11:35
  • 1
    @user94588: Light travels and behaves as a wave. Because of this, when you try to focus it, you'll never get it to land on just one single pixel. Diffraction only occurs because light travels as a wave. Even the faintest/tiniest stars in my AP images occupy quite a few pixels. Auto-guider cameras rely on this as this is how they can detect even sub-pixel tracking errors of the telescope mount. – Tim Campbell Aug 17 '20 at 14:09
  • 2
    @TimCampbell In a camera, the most relevant contributor to diffraction (which of course is due to the wave nature of the light) is the aperture. For a DX-size sensor at aperture f/2.5 on a 6MP sensor, diffraction makes for an Airy Disk with the main lobe in subpixel size. Since that is likely to lead to Moiré patterns, sensors with anything but minuscule pixels are typically preceded by antialiasing filters causing a larger spread. But again, the main lobe tends to be somewhat less than pixel size, and this particular camera has tested as not suppressing Moiré patterns well. –  Aug 17 '20 at 15:42
  • 1
    @TimCampbell The D50 is an old design that has photosites almost 8 micrometers wide, which is twice as wide and four times as large in terms of area as a typical "modern" camera. – Michael C Aug 17 '20 at 18:01
  • 1
    @TimCampbell You cannot treat the energy of a classical light wave the same way as the energy of a photon. The energy of a photon is E = hf. The energy of a light wave is proportional to the square of the amplitude of oscillation of the electromagnetic wave. These are two completely different models of light, classical vs quantum mechanical. You cannot combine the photon view with the wave view. The photons are particles. They don't oscillate. – Michael C Aug 17 '20 at 18:03
  • 1
    But none of that really matters when talking about demosaicing algorithms. You can have two point sources of light with small Airy discs the same size, but if one is centered in such a way that more of the energy falls on photosites filtered with a green filter, and the other is centered in such a way that a red-filtered photosite is in the center and thus more red photosites within the area of the Airy disc receive a greater percentage of the total energy than in the first case, then they will wind up different colors after demosaicing. – Michael C Aug 17 '20 at 18:11
  • 3
    Yep. This is why astronomers almost never use sensors with mosaic filters. If we want a color image, we use a monochrome sensor and combine exposures taken through multiple filters. Or we use multiple sensors. – John Doty Aug 17 '20 at 18:55
  • 1
    @MichaelC Light behaves like waves when it diffracts, and particles when it decoheres in a detector. In a diffraction spectrometer, we diffract waves but detect photons. It isn't that light is either waves or photons: it's both. You not only can combine the wave and particle viewpoints, you must do so to understand optics. – John Doty Aug 17 '20 at 18:59
  • 1
    It's both at all times, but not at the same time. Just like the cat is both in the box and not in the box until we look inside the box. – Michael C Aug 17 '20 at 20:15
  • 1
    @MichaelC E=hf and f = c/λ. c is the speed of light. λ is the wavelength. The definition is based on wavelength. But this is starting to wander off-topic from the original question. – Tim Campbell Aug 17 '20 at 20:38
  • 1
    @MichaelC regarding debayer w.r.t. the OP's question: I think the point may now be moot because the OP posted extreme crops of the stars in question and they appear to occupy a fair number of pixels (the smallest one I counted was 7 pixels across in the narrowest direction). Based on this information, it seems like debayering issues may be a dead end. – Tim Campbell Aug 17 '20 at 20:41
  • 6
    @TimCampbell "Occupying a fair number of pixels" does not help a lot when the main weight is focused on very few. The "individual star closeups" don't appear to deal with color-changing stars but instead all show a white core. Without any further indication from where they have been taken (both picture and location in it), they don't show much more than a color gradient consistent with chromatic aberration that an old camera does not correct. "I cropped them using two stars as a guide" does not sound like identical framing, so interactions of CA with demosaicing would vary. –  Aug 17 '20 at 21:16
  • 1
    @user94588 I'm not sure what you mean by "the main weight is focused on very few". When debayering, there is no "main weight" per se. There's the single pixel being debayered... and the nearby pixels used to determine the missing colors. There are numerous possible algorithms ... this paper describes a handful of them: http://stark-labs.com/craig/resources/Articles-&-Reviews/Debayering_API.pdf Keep in mind the OP mentioned the samples above are JPEG -- we don't have original RAW data (which would make things easier.) – Tim Campbell Aug 17 '20 at 22:04
  • 1
    @TimCampbell Without some continuity in luminosity (like when multiple red pixels show comparable luminosity), determination of colors and edges is not reliable. Demosaicing algorithms are designed to minimize patterns on homogenous areas and edges, but a star image does not have large and consistent enough point clusters to provide homogenous areas and edges. Demosaicing is an underdetermined problem: it requires sufficiently codependent image data to produce convincing results. –  Aug 17 '20 at 22:20
  • The fact that there are seven pixels with signal above the noise floor after demosaicing does not mean that all seven of the photosites that correspond to those pixels received signal. Just look at a "hot pixel" at such magnification. A single photosite can cause a surrounding number of pixels to show signal due to demosaicing. – Michael C Aug 18 '20 at 00:39
  • What a single hot pixel can look like after demosaicing. Notice that different processing settings can affect the extent of the influence a single hot pixel has on surrounding pixels. This example is from a camera with a CCD sensor, like the one in this question, rather than a camera with a CMOS sensor. – Michael C Aug 18 '20 at 00:51
  • @user94588 Did you read the article I linked (by Craig Stark)? There are algorithms that work extremely well with stars (e.g. VNG). Debayering could require as little as 2 extra pixels... or it could use 25. Regardless of algorithm, only one pixel is debayered at a time. The other pixels are used to determine the missing color channel values. – Tim Campbell Aug 18 '20 at 19:04
  • 1
    @TimCampbell "only one pixel is debayered at a time" does not keep a single lit pixel from spreading out through demosaicing since its surrounding pixels, "one at a time", consider what the lit pixel next to them implies concerning their own color. Also, there are iterative demosaicing algorithms that just don't fit the "one at a time" description sensibly. –  Aug 18 '20 at 19:25
  • @MichaelC A single hot-pixel could influence the color data. A great way to see this is to inspect your dark frames. The de-bayered hot pixels don't resemble what the OP posted. This is why I previously stated ... I think this is chasing a dead-end. "These are not the droids you're looking for." – Tim Campbell Aug 18 '20 at 19:29
  • @user94588 its spread is extremely limited. Keep in mind that when debayering a pixel (let's assume a very simple bi-linear method), a 'green' pixel being debayered is missing red data and blue data. It will sample its adjacent red pixels to establish a mean red value ... and similarly for the blue. It is now a 3-channel pixel. When the debayer process moves to the next un-debayered pixel, it will only sample the non-debayered color channels from its neighbors. I am not aware of any algorithm in common use that iteratively uses debayered channel data. For AP work, VNG is preferred. – Tim Campbell Aug 18 '20 at 19:46
  • 1
    @TimCampbell The usual debayering algorithm in cameras is AHD. As presented in its paper, it operates on 5×5 environments (implemented separably as a 1×5 and a 5×1 filtering step) for its initial implementation before its adaptive homogenisation step (that is not likely to work well with single-dot information). This question was about the results produced by a digital camera's algorithms, not about raw processing demosaicing algorithms preferred for offline raw processing of astrophotography images. –  Aug 18 '20 at 20:37
  • @TimCampbell Except debayering is never that simple because the "red", "green", and "blue" filters in a Bayer mask do not correspond to Red, Green, and Blue used for RGB, all of the incorrect illustrations on the internet that show Bayer masks the same colors as subpixels in RGB monitors notwithstanding. Particularly for the "Red" channel, the filters mimic the human retina and the "red" filters are most transmissive at around 590-600nm (yellow-orange) instead of 645nm (red). – Michael C Aug 19 '20 at 00:42
  • @TimCampbell The point about hot pixels is that a value measured in a single sensel can influence the demosaiced values of many surrounding pixels. Likewise, strong signal to a few sensels that are surrounded by many sensels with very low signal will influence the demosaiced results for the pixels that represent those sensels. With astro and Bayer masks, just because a pixel shows a higher value than the noise floor is not proof that the sensel that pixel represents received light. – Michael C Aug 19 '20 at 00:46
  • @MichaelC The filters aren't a problem. The filter trims the red transmission ... but that results in less "red" in the image. My color AP cameras are full-spectrum (all but one). The one camera that isn't full-spectrum allows a mostly full visible spectrum and more of a hard-cut on the UV and IR (instead of a gradual ramp-down like a traditional color camera). The mono cameras use filter wheels so I control the filtering. The debayering should not attempt to make any adjustment/compensation ... that can only mess things up. The OP didn't mention using an AP modded camera. – Tim Campbell Aug 20 '20 at 02:17
  • 1
    @TimCampbell "The debayering should not attempt to make any adjustment/compensation" Uh, what do you even think demosaicing is? –  Aug 20 '20 at 08:53
  • @TimCampbell The camera in the question, which produced the example images in the question, is a Nikon D90, not your astro camera. – Michael C Aug 20 '20 at 09:12
  • @Tim, a red filter does not trim transmission of red wavelengths. It trims transmission of non-red wavelengths; wavelengths further from the filter's peak transmission are trimmed the most, and those closer to the peak are trimmed less. The filters in a Bayer mask are no different than the color filters we used with B&W film back in the day. An orange filter darkens everything except orange. A blue filter darkens everything except blue. – Michael C Aug 20 '20 at 09:19
  • Here's a graphic that shows the actual colors to which our short-wavelength, medium-wavelength, and long-wavelength retinal cones are most sensitive. Our Bayer mask filters mimic that. – Michael C Aug 20 '20 at 09:22
  • @MichaelC Red-blocking ... not red-passing. This isn't the same as the 'red' components of the Bayer CFA. At the Ha wavelength (~656nm -- which is what astronomers are usually interested in) a typical color camera is blocking around 75-80% of the light. This is the IR filter in your camera -- not an attachable filter that you can remove (well... not easily, some people do mod their cameras) but the same filter trims red wavelengths well into the visible spectrum. – Tim Campbell Aug 20 '20 at 13:49
  • @MichaelC Ahh, I just noticed your comment on the Bayer filter and this may be the confusion. No, the Bayer filter does not mimic the sensitivity of the eye. There's a UV/IR blocking filter in the camera. It's in front of the sensor, not on the sensor itself. Here's a site with instructions on how to mod the camera by removing and replacing that filter http://dslrmodifications.com/. The Canon 60Da and 20Da as well as the Nikon D810a are pre-modded by the manufacturer. These cameras have the same sensor (the sensor itself is not special) as their non-'a' counterparts. – Tim Campbell Aug 20 '20 at 13:56
  • 1
    @TimCampbell Your understanding of the asked question and answer differs so much from that given in the answer that it does not appear to make sense for you to present a contrary viewpoint in comments rather than creating your own answer from scratch. An answer is not usefully amended by having 20+ warring comments to it. Just provide your own more correct answer and watch it get voted to the top eventually. –  Aug 20 '20 at 14:12
  • @TimCampbell Why do you keep arguing with things no one else has said as if that is what you are responding to? The camera in question that produced the examples above was a Nikon D90, not a Canon 60Da or Nikon D810a, etc. The peak sensitivities of the filters in Bayer masks fairly closely mimic the peak sensitivities of the cones in our human retinas. Human retina: 420, 535, and 565 nanometers with broad sensitivities that overlap. Typical Bayer mask: 455, 545, and 590 nm with broad sensitivities that overlap. RGB: 480, 530, and 640 nm with fairly narrow bands of output that "fool" us... – Michael C Aug 20 '20 at 17:56
  • ... into perceiving colors throughout the spectrum. – Michael C Aug 20 '20 at 17:56
  • See "Introducing color spaces, physics and biology behind them". Pay special attention to sections 6, 7, 15, and 16. – Michael C Aug 20 '20 at 18:10
  • Pardon my short-term memory. The camera in question above is a Nikon D50, not a D90. Still a CCD camera from the mid 2000s with a typical Bayer mask, IR-cut filter, and weak low-pass filter in front of the sensor. It is not an astro camera by any stretch of the imagination; its demosaicing algorithms are designed for more typical scenes lit by bright daylight or dispersed artificial lighting, not to provide accurate color for point sources of light. – Michael C Aug 20 '20 at 18:20
19

I can think of three common reasons for weird/fake colors in astrophotography:

  • Chromatic aberration makes some stars appear white in the center but blue or red at their borders, depending on which of those two is out of focus.
  • Demosaicing algorithms tend to fail for bright white objects against a dark background, so you see red or blue on one edge of some stars. Noise makes it worse. See these examples.
  • Automatic white balance: if you are using no filter, just set the WB to daylight (see the sketch below).

Of course, stars may genuinely be red or blue, and in most cases it is fine to get those colors.
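
As a rough illustration of the white-balance point, here is a toy sketch with made-up per-channel multipliers (not the D50's actual values): the same recorded star data renders with a visibly different hue under different gains.

    import numpy as np

    # A faint, roughly neutral star signal straight off the sensor (made-up linear RGB).
    star = np.array([0.30, 0.32, 0.28])

    # Hypothetical per-channel gains: a fixed "daylight" setting vs. an auto-WB guess
    # that over-corrects toward blue on a nearly black frame. Values are illustrative.
    gains = {
        "daylight": np.array([2.0, 1.0, 1.5]),
        "auto":     np.array([1.6, 1.0, 2.2]),
    }

    for label, g in gains.items():
        r, gr, b = np.clip(star * g, 0.0, 1.0)
        print(f"{label:>8}: R={r:.2f} G={gr:.2f} B={b:.2f}")
    # Same sensor data, different gains: the "auto" rendering is clearly bluer,
    # which is why locking the WB (e.g. to daylight) keeps the two frames comparable.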

f/2.5 is fairly wide open, and I'd expect some optical aberrations in the corners of the image. If possible, shoot at f/5.6 or f/8.0.

vsis
3

In the second shot the stars all seem to have shifted towards blue, so I would suspect the camera's auto white balance trying to make sense of a mostly black picture, which makes it very sensitive to minor changes. Is the color temperature recorded in the EXIF data?
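
A quick way to check, assuming the original JPEGs and the Pillow library are at hand (the file name below is a placeholder, and exact tag contents vary by camera):

    from PIL import Image, ExifTags

    # Dump the white-balance-related EXIF tags from one of the JPEGs.
    exif = Image.open("night_sky.jpg").getexif()       # placeholder path
    exif_ifd = exif.get_ifd(0x8769)                    # Exif sub-IFD holds WhiteBalance
    for tag_id, value in {**dict(exif), **dict(exif_ifd)}.items():
        name = ExifTags.TAGS.get(tag_id, hex(tag_id))
        if name in ("Make", "Model", "WhiteBalance", "LightSource"):
            print(f"{name}: {value}")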

xenoid
  • 7
    It only says "White balance: sunny". White balance sounds like a possible explanation, but it does not seem like the change was uniform across all the stars. – Tomáš Zato Aug 16 '20 at 18:33
3

ISO 1600 is the upper limit of the D50's sensitivity, so the picture is likely to be a bit noisy. The noise is not guaranteed to be uniform across the color channels, so it can manifest as color changes.
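
A minimal simulation of that effect, with made-up noise levels: adding independent noise to each channel of a dim, neutral patch gives it a different color cast in every frame.

    import numpy as np

    rng = np.random.default_rng(0)

    # A dim, neutral gray patch (equal R, G, B) and made-up per-channel noise levels.
    # Red and blue get slightly more noise than green, roughly reflecting the sparser
    # red/blue sampling of a Bayer sensor; the exact numbers are illustrative only.
    signal = np.full(3, 0.08)
    sigma = np.array([0.03, 0.02, 0.03])

    for frame in range(5):
        r, g, b = np.clip(signal + rng.normal(0.0, sigma), 0.0, 1.0)
        print(f"frame {frame}: R={r:.3f} G={g:.3f} B={b:.3f}")
    # Every "exposure" of the same gray patch comes back with a different color cast,
    # which is what per-channel noise does to faint stars at ISO 1600.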

IMil