
I've been reading a bit about Bayer CFAs and still don't quite understand how they reduce spatial resolution. I can see that there is an evident loss in color resolution due to the interpolation and reduced sampling, but I often read that sensors without a Bayer pattern have higher resolution and that CFAs slightly blur images.

To my knowledge, spatial resolution is determined by the sampling rate and the modulation of contrast. Seeing as the Bayer filter does not change the photosite count, I'm guessing it must somehow affect the contrast, which leads to the blurring that people often refer to?

In essence, the question could be summarized as: "Do monochromatic sensors without a CFA have better resolution than sensors with one?"

Thank you for your time

vannira

3 Answers


Yes, though not by as much as one might think. In the real world, resolution drops to about 1/√2 (roughly 70%) of what a monochrome sensor with no Color Filter Array (CFA) resolves, when measuring test shots of a black-and-white resolution chart using a Bayer-masked sensor combined with a top-notch demosaicing algorithm, under the same conditions and with the same high-performing lens.
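As a back-of-the-envelope illustration of that factor (the sensor size below is an assumed generic 24 MP example, not a measurement from any particular camera):

```python
import math

# Assumed numbers: a 24 MP sensor in a 3:2 aspect ratio has ~4000 rows of photosites.
rows = 4000
nyquist_lp_ph = rows / 2                     # mono sensor: Nyquist limit, line pairs/picture height
bayer_lp_ph = nyquist_lp_ph / math.sqrt(2)   # ~1/sqrt(2) retained after a good demosaic

print(f"Monochrome limit:       {nyquist_lp_ph:.0f} lp/ph")
print(f"Bayer + good demosaic:  {bayer_lp_ph:.0f} lp/ph (~{100 / math.sqrt(2):.0f}%)")
```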

But you also give up a LOT of flexibility in controlling the tonal values of differently colored objects.

When shooting with an unmasked monochrome sensor, the tonal relationships between objects of different colors are locked in as soon as the sensor is exposed. Any filtering to make similarly bright objects of one color a brighter shade of grey than equally bright objects of another color must be done using a physical filter at the time of exposure. The monochrome sensor only records a shade of grey. The raw files can no longer differentiate between objects that were different colors.

If we use a red filter when we shoot in monochrome without a Bayer mask, we can't go back based only on the information contained in the raw data and make it look like we used a green filter, or even an orange filter after the fact. With a digital raw file from a Bayer masked sensor, the possibilities of adjusting relative tonal values based on the colors of objects in the scene are near endless!
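To make the "synthetic filter" idea concrete, here is a minimal numpy sketch, assuming you already have a demosaiced RGB image as a float array. The channel weights are illustrative guesses, not calibrated filter responses:

```python
import numpy as np

def filtered_mono(rgb, weights):
    """Simulate a color filter in post: a weighted mix of the demosaiced
    R, G, B channels, normalized so the weights sum to 1."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return rgb @ w  # per-pixel dot product over the channel axis

# Stand-in data for illustration; in practice this comes from a demosaiced raw file.
rgb = np.random.rand(4, 6, 3)

green_filtered = filtered_mono(rgb, [0.10, 0.80, 0.10])  # "green filter" look
red_filtered = filtered_mono(rgb, [0.80, 0.15, 0.05])    # "red filter" look
```

With a physical filter on a monochrome sensor, those weights are baked in at exposure time; with a Bayer raw file, you can try any of them afterward.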

If your ultimate goal is to get the highest resolution possible of flat B&W test charts, then the monochromatic sensor is the clear winner.

If, on the other hand, your goal is to get the best possible results shooting in challenging lighting situations then the choice is not so clear cut.

Consider this shot, which is basically how the scene appeared to my eyes when I shot it on an outdoor festival stage at night.

[Image: the scene in color under red stage lighting. EOS 7D Mark II + EF 70-200mm f/2.8 L IS II, ISO 3200, f/2.8, 1/500 second.]

Converting it to monochrome without applying any synthetic color filters after the fact gives a result something like this:

[Image: straight monochrome conversion, no synthetic filter.]

Applying a green filter after the fact to increase contrast under the red light and reduce the diffusion caused by the on-stage "fog" illuminated by the red light gave me this:

[Image: monochrome conversion with a synthetic green filter applied.]

Putting a green filter in front of the lens and using a monochrome sensor would have given slightly more theoretical resolution, but the blurring from shooting handheld in low light, of a subject in motion, while standing on a temporary stage vibrating with music played at high volume, would probably have obliterated that tiny difference. Plus, if I still had that green filter on the lens a few seconds later when the color of the stage lights changed to predominantly blue - or green, or white, or orange - I'd have been scrambling to remove the filter and put on an orange, red, or blue one before shooting the next photo. By which time the light would have changed colors again...

For a more complete technical explanation of how Bayer CFAs affect resolution and quantum efficiency compared to monochrome without a CFA, please see this answer to How does shooting on dedicated monochrome digital cameras compare to shooting in monochrome mode on full-colour digital cameras?

In the end, it depends upon what tradeoffs are more important to you. How these considerations are weighted will vary based on what one is shooting and the pace at which it may be shot. A fast-paced environment with rapidly changing lighting conditions would weight things one way. A more methodical shooting situation in which a plethora of physical color filters are available and can be swapped out without losing the shot because the scene has changed in the interim would tend to be weighted the other way. My work lives more in the former situation, but yours might be in the latter.

Michael C
    That is a fantastic real world example. If the change in just the clarity of the beard isn't enough to convince you, I don't know what would. – Mark Ransom Mar 21 '24 at 12:13

Seeing as the Bayer filter does not change the photosite count, I'm guessing it must somehow affect the contrast, which leads to the blurring that people often refer to?

That's the gist of it. If the Bayer-filtered image is demosaiced to create a monochrome image, there is no significant loss in detail. But there can be a loss in contrast, because not all of the light is recorded; individual pixel luminance values have to be recreated/calculated, and there can be errors. If an individual pixel's luminance value is calculated incorrectly, it can match (blend with) surrounding pixels when it should not.
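As a toy illustration of that recreation step, here is a minimal bilinear demosaic reduced to a luminance estimate, assuming an RGGB layout. Real demosaicers are far more sophisticated; the errors this naive version makes at edges are exactly the kind of blending described above:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_mono(mosaic):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (2-D float array),
    reduced to a naive luminance estimate."""
    h, w = mosaic.shape
    # Per-channel sampling masks for the assumed RGGB layout.
    r = np.zeros((h, w)); r[0::2, 0::2] = 1.0
    b = np.zeros((h, w)); b[1::2, 1::2] = 1.0
    g = 1.0 - r - b
    # Bilinear interpolation kernels: R/B sit on a 2x2 grid, G on a quincunx.
    k_rb = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    k_g = np.array([[0.0, 0.25, 0.0], [0.25, 1.0, 0.25], [0.0, 0.25, 0.0]])
    # Interpolate each plane; dividing by the convolved mask normalizes the weights.
    planes = [
        convolve(mosaic * m, k) / np.maximum(convolve(m, k), 1e-9)
        for m, k in ((r, k_rb), (g, k_g), (b, k_rb))
    ]
    # Naive luminance: equal-weight average of the interpolated channels.
    return sum(planes) / 3.0
```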

Keep in mind that contrast is a large part of human perception of sharpness/resolution, so there may be a perceived loss of resolution where the resolution still exists. At higher luminance values the error has to be very large to be noticeable, and at low luminance values the errors read as noise.

This GIF shows images taken with a monochrome sensor and with a Bayer-filtered sensor (no anti-aliasing filter). The filtered image is shown both demosaiced and not. (I did not take/create this image, but I did align it somewhat.)

[Animated GIF: monochrome sensor vs. Bayer-filtered sensor, shown demosaiced and undemosaiced.]

When the Bayer-filtered image is demosaiced as a color image, it becomes more difficult to retain full resolution, because the individual photosites' colors, along with their luminance values, have to be calculated from surrounding information. This adds another level of potential blending and loss of resolution.

The particular method of demosaicing used can make a big difference here. This image shows pixel-level detail demosaiced with different methods; a sketch for trying this yourself follows the image. (This is also not my image.)

[Image: pixel-level crops of the same scene demosaiced with different algorithms.]
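If you want to compare methods on your own data, OpenCV ships several demosaicing variants that can be run on the same mosaic. A small sketch (the random array below is stand-in data, and the exact COLOR_Bayer* constant depends on your sensor's CFA layout):

```python
import cv2
import numpy as np

# Stand-in mosaic data; in practice this would come from a raw file.
mosaic = (np.random.rand(480, 640) * 255).astype(np.uint8)

bilinear = cv2.cvtColor(mosaic, cv2.COLOR_BayerBG2BGR)       # simple bilinear
vng = cv2.cvtColor(mosaic, cv2.COLOR_BayerBG2BGR_VNG)        # Variable Number of Gradients
edge_aware = cv2.cvtColor(mosaic, cv2.COLOR_BayerBG2BGR_EA)  # edge-aware
```

Zooming into fine detail in the three outputs shows the same kind of differences as the comparison image above.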


SOURCE OF GIF

SOURCE OF 2ND IMAGE

Steven Kersting
  • thank you for the clear and concise answer – vannira Mar 03 '24 at 20:00
  • "Surrounding colors" aren't all that have to be interpolated. The dyes in Bayer CFAs are not an exact match for the colors used by RGB. This is particularly the case with the "red" filter, which is typically most transmissive to 590-595nm (yellow-orange), rather than the 640nm "red" that is the target of our emissive RGB displays. All three color channels are interpolated when converting raw image data to RGB (or LAB, or CMYK, etc.) – Michael C Mar 10 '24 at 03:11
  • You can also add that resolution will definitely be affected for saturated colours. – Euri Pinhollow Mar 10 '24 at 05:59
  • @EuriPinhollow, the same is true for any form of clipping; but it has nothing to do with the CFA really... a saturated color area will also clip in a monochrome image. – Steven Kersting Mar 10 '24 at 13:55
  • @StevenKersting I'm talking about hue and chromaticity, not clipping. If one or two channels are underexposed resolution is affected because of noise. – Euri Pinhollow Mar 10 '24 at 14:38

Only about 33% of the brightness information at any given photosite is actually recorded. From a purely information-theory standpoint the data is missing, and it is even worse for parts of the scene that aren't green.

You can't just take the single-channel value of a pixel as its brightness, or you would create noise in the shape of the Bayer mask whenever the color saturation was high. So the brightness of a pixel is always a smoothed estimate from all of the nearby pixels, i.e., blurred. That reduces resolution because the result is literally unsharpened.
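A tiny numpy sketch of that failure mode, using an assumed RGGB layout and a synthetic, uniformly bright pure-red patch:

```python
import numpy as np

# A flat, fully saturated red patch seen through an assumed RGGB Bayer mask.
h, w = 4, 4
red_scene = np.zeros((h, w, 3))
red_scene[..., 0] = 1.0  # pure red, uniform brightness

mask = np.zeros((h, w, 3))
mask[0::2, 0::2, 0] = 1  # R sites
mask[0::2, 1::2, 1] = 1  # G sites
mask[1::2, 0::2, 1] = 1  # G sites
mask[1::2, 1::2, 2] = 1  # B sites

raw = (red_scene * mask).sum(axis=2)
# Reading raw values directly as "brightness" yields 1s at R sites and 0s
# everywhere else: a hard 2x2 pattern, not a flat patch. It must be smoothed.
print(raw)
```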

davolfman
  • 33% is a massive oversimplification that is outright wrong. There's plenty of overlap between what the "blue" (actually blue-violet), "green" (actually slightly lime-tinted green), and "red" (actually yellow-orange) filters allow to pass. It's the same with our retinal cones, which is how our brains synthesize color from various combinations of wavelengths of light, which have no implicit color at all (just as radio waves, UV light, X-rays, etc. have no implicit color - it's all electromagnetic radiation at various wavelengths). Most modern CFAs allow a little over 50% of all photons to pass. – Michael C Mar 10 '24 at 03:15
  • Then you've traded color info for brightness info I guess? – davolfman Mar 11 '24 at 16:57
  • In exactly the same way our retinas do. – Michael C Mar 14 '24 at 03:57