
Bayer and 3-chip cameras - resolution comparisons


This is based on a forum post at cinematography.com where the endless debate about the resolution of single-chip cameras was in progress.

This discussion is based on two competing technologies for creating colour images. Most people know that an additive colour image, as seen on a TV screen or computer monitor, comprises red, green and blue components. However, electronic image sensors can only see in black and white. Most professional video cameras solve this by using three imaging sensors, with the incoming light split three ways by a prism block of colour-differentiating dichroic mirrors. For HD work, each of these sensors has a resolution of 1,920 by 1,080 pixels, which means that each of the image's roughly two million pixels (1,920 × 1,080 = 2,073,600) has a red, a green and a blue component recorded from the scene in front of the camera.

However, cameras that use a single imaging sensor (such as digital stills cameras and many consumer camcorders) don't work that way. In these cameras, a mosaic filter mask is placed over the image sensor, so that adjacent pixels see different colours - in the usual pattern, two greens, one red and one blue in every two-by-two block. The technique was described by Dr Bryce E. Bayer at Kodak in the mid-1970s, and it's discussed in detail on Wikipedia.
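For the sake of concreteness, here's a minimal sketch of what the mask does, assuming the common RGGB layout (hypothetical Python, not any real camera's pipeline):

    import numpy as np

    def bayer_mosaic(rgb):
        """Reduce a full-colour image (H x W x 3) to a single-channel
        RGGB Bayer mosaic: each photosite keeps only the colour its
        filter passes. Sketch code, for illustration only."""
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
        return mosaic

Every photosite records exactly one colour; the other two must be invented later.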

The problem is that a camera with a single 1920 by 1080 sensor behind a Bayer filter mask is not sampling as much real information as a three-sensor camera with no filter - a third as much, in fact. This much is certain, but what difference does it actually make?
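The arithmetic behind that claim is worth doing once:

    pixels = 1920 * 1080                     # 2,073,600 photosites
    three_chip = pixels * 3                  # 6,220,800 samples: R, G and B at every pixel
    bayer = pixels                           # 2,073,600 samples: one colour per photosite
    print(bayer / three_chip)                # 0.333...: a third of the real data
    print(pixels // 2, pixels // 4, pixels // 4)  # greens, reds, blues in the mosaic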

Or, to address the much-argued issue, how many real, honest K of resolution does something like Red have? I think two and a half to maybe almost three, depending on your point of view. What they're doing is arguably a misuse of terminology. Traditionally, the "K" suffix indicated thousands of pixels of horizontal resolution in a film scan, which by design samples red, green and blue for every pixel. Using it for a Bayer-sensor device, which does not sample RGB at every pixel, is highly questionable - such devices are usually measured in megapixels, as DSLRs are. A high-end HD broadcast camera with a three-sensor array makes about two megapixels, but contains three times as much real data as a two-megapixel single-sensor DSLR still.
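If you want to put a number on it, a commonly quoted rule of thumb - not a measurement, and the exact factor is itself argued over - puts the real luminance resolution of a Bayer sensor at roughly 70 to 80 per cent of its photosite count in each dimension:

    photosites_wide = 4096        # a nominally "4K" Bayer sensor
    for factor in (0.7, 0.8):     # commonly quoted rule-of-thumb factors, not measurements
        print(f"{photosites_wide * factor:.0f}")  # ~2867 and ~3277

Which lands neatly in the "two and a half to almost three K" range for a nominally 4K sensor.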

The difficulty is that there are a large number of variables involved, some of which are considered proprietary, so more or less all you can do is shoot tests - and that's tricky in itself, because you're also characterising a lens. The two principal opportunities for creative lying in Bayer imaging are low-pass filtering and algorithms, the latter of which has become such a well-known weasel word that I nearly put it in quote marks.

The former refers to the need to low-pass filter - basically, blur - the image that lands on the sensor, so that none of its fine detail is smaller than the sensor can actually sample (anything finer than two pixel widths will, per Nyquist, be misrecorded). This is complicated on a Bayer sensor because the gaps between red samples and between blue samples are twice as big as the gaps between greens. The filter itself can be thought of, at the most basic level, as a weak diffuser sitting a tiny distance above the surface of the sensor, though in practice it's usually a sandwich of birefringent material that splits each point of light into slightly displaced copies; either way, the geometry controls the degree of low-pass filtering. Failure to do this correctly results in one of two things: either more resolution shortfall than is strictly necessary, or higher apparent resolution with a risk of aliasing, that crawling jaggedness in the image. Since the latter tends to look subjectively better in stills, and typically gives better numbers when aimed at a test chart, there are no prizes for guessing which option management likes best.
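A toy one-dimensional sketch of why the filtering matters - pure illustration, nothing to do with any real camera's optics. Detail finer than the sampling grid doesn't politely disappear; it folds back as a false, coarser pattern unless it's blurred away first:

    import numpy as np

    x = np.arange(1024)
    fine = np.sin(2 * np.pi * 0.35 * x)   # detail at 0.35 cycles/pixel

    # Keep every other pixel. The grid can now only represent up to
    # 0.25 cycles/original-pixel, so 0.35 folds back to a false 0.15.
    aliased = fine[::2]

    # Low-pass (blur) first, as the optical filter does, then sample:
    blurred = np.convolve(fine, np.ones(5) / 5, mode="same")
    filtered = blurred[::2]

    print(np.abs(aliased).max())    # ~0.95: near full-strength false detail
    print(np.abs(filtered).max())   # ~0.16: the lie is mostly suppressed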

The second issue is the way the data from the sensor is interpolated to form three complete colour channels, a process usually called demosaicing. There are a wide variety of techniques for doing this, from the extremely simple and artifact-riddled to the extremely advanced, although at the end of the day there's no way around the simple fact that it is interpolation and you are guessing. It can be very good interpolation, directed by cross-channel assumptions about the saturation of real-world images, but it is unavoidably made-up data. Beyond a certain point you can try so hard that you begin to introduce artifacts of your own, and although Red's recordings are so heavily compressed that this is difficult to evaluate, it seems likely that some cameras do suffer to a degree from overenthusiastic output processing and insufficient low-pass filtering, because an engineer was desperate to hit a resolution figure that management wanted to be able to advertise.
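For illustration, here's the crudest respectable approach - bilinear interpolation, where each missing value is simply the average of its nearest same-colour neighbours - applied to the RGGB mosaic from the earlier sketch. Real cameras use far cleverer, and usually proprietary, methods:

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """Naive bilinear demosaic of an RGGB mosaic (H x W float array).
        Sketch code: edges are approximate and artifacts are expected."""
        h, w = mosaic.shape
        r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
        r[0::2, 0::2] = mosaic[0::2, 0::2]
        g[0::2, 1::2] = mosaic[0::2, 1::2]
        g[1::2, 0::2] = mosaic[1::2, 0::2]
        b[1::2, 1::2] = mosaic[1::2, 1::2]
        # Kernel weights chosen so that, away from edges, each missing
        # value becomes the average of its nearest recorded neighbours
        # and each recorded value passes through unchanged.
        k_rb = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0
        k_g  = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]]) / 4.0
        return np.dstack([convolve(r, k_rb, mode="mirror"),
                          convolve(g, k_g,  mode="mirror"),
                          convolve(b, k_rb, mode="mirror")])

Fed the mosaic from the earlier sketch, this returns a full three-channel image - two thirds of which is, as noted above, made up.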

