
Exclusive Interview: The Secrets Behind RED Sensors and Resolution

Todd Blankenship

There is a lot more to digital camera resolution than pixel count. In this interview, RED’s Graeme Nattress clarifies the topic.

Cover image via Cinema5D.

A while back, I wrote and published an article for RocketStock arguing that your camera’s purported resolution may not be its true resolution when all is said and done. The original article, titled “Why Your 4k Camera Isn’t Really 4k,” spurred a lot of very interesting debate and conversation. When it comes to resolution, there are quite a few factors at play — one of them being the modern approach to digital camera sensors and the use of Bayer pattern sensors. For more info on what the Bayer pattern is, check out the original post, or one of the many other explanations available — like this one from RED. There are also other factors at work, like demosaicing and Optical Low-Pass Filters (OLPFs). All of these things create the differing looks and color science among camera manufacturers.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Sensor View
Image via RED.

The original article suggests that to figure out the true resolution of your camera, you multiply the advertised resolution by 0.7. This is because almost all modern digital cameras use a Bayer pattern color filter array, which is one-half green data, one-quarter blue, and one-quarter red. The rest of the image gets assembled using a demosaic algorithm, which fills in the gaps between the pixels and creates color and light where it might not have existed in the data before.
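
As a rough illustration, here is that rule of thumb as a minimal Python sketch (the 0.7 factor comes from the original article; the sensor widths below are just example values, not RED specifications):

```python
# The original article's rule of thumb: multiply the advertised
# (photosite) resolution by 0.7 to estimate effective resolution.
def effective_width(advertised_width: int, factor: float = 0.7) -> int:
    return round(advertised_width * factor)

for width in (4096, 5120, 8192):  # example 4K, 5K, and 8K sensor widths
    print(f"{width} -> ~{effective_width(width)}")
# 4096 -> ~2867
# 5120 -> ~3584
# 8192 -> ~5734
```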

However, after looking deeper into the subject (and considering some of the debate), I wasn’t satisfied with this overly simplified way of looking at it. What defines resolution? Is it subjective? Is resolution so high nowadays that the Bayer pattern doesn’t matter anymore? There is a lot to consider when looking at resolution. So, in order to gain some clarity from a true expert, I reached out to RED’s Graeme Nattress — a man who has been behind some of the most ground-breaking camera technology and color science in the history of the medium. Graeme was generous enough to answer my questions and bring a true expert’s perspective.

Let’s dive in.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Graeme Nattress

PremiumBeat: Can you tell us a bit about yourself and your area of expertise — as well as your role with RED and the work you’ve done with them? You’ve had your hands on RED color science since the beginning, correct?

Graeme Nattress: Since a very young age, I’ve been fascinated by television technology, and when something sparks my interest, I want to find out everything about how it works. I was lucky to get a Sinclair ZX81 when I was 10 years old, and that took me down the route of learning how to program. My educational background is in mathematics, so now I tie together all these aspects in my work with RED.

With RED, I developed the original REDCODE compression, and I look after the image processing throughout the entire system: demosaic, colorimetry, raw development, and the image processing pipeline.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Camera Sensor
Image via Shutterstock.

PB: What makes RED technology stand out within the industry?

GN: REDCODE RAW compression stands out as a piece of technology because of how it made 4K on-camera recording practical while maintaining image quality and RAW flexibility. When the RED ONE was released in 2007, recording 4K RAW data was prohibitively expensive and cumbersome, requiring a tethered recording device. REDCODE changed that by using innovative compression techniques to take the RAW data down to a manageable size while retaining image quality and the full flexibility of RAW: the ability to change ISO, color space, white balance, etc., without penalty. REDCODE has continued to improve along with the RED cameras, and it is now making 8K practical.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Monstro 8K
Image via Studio Daily.

With RED cameras, no aspect of technology stands alone. Each aspect is designed to work as part of an entire system encompassing not just the camera but the whole post-production and delivery chain. Design decisions made for a camera have influence throughout that extended chain.

PB: How does the Bayer sensor strategy apply to the color science in RED cameras? Is it utilized in all RED cameras (aside, presumably, from the monochrome versions)?

GN: All RED cameras use a Bayer pattern color filter array, other than the monochrome versions. Back before RED started, 3-chip cameras were considered “professional” and single-chip (i.e. color filter array sensor) cameras were “consumer.” The Bayer pattern is a very clever way to make a single sensor see color, but it has the disadvantage that each pixel sees only red, green, or blue, not all three together. At low resolutions, this disadvantage dominates what we see in the image, but at high resolutions the Bayer pattern “disadvantage” gets turned around.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Bayer Array
The Bayer array. Image via RED.

To produce a cinematic image, we need to use cinema lenses, which are designed for a larger imaging area than a 3-chip and prism system is easily capable of working with. If we keep the pixel size the same, this larger sensor leads automatically to a higher resolution, and at high resolutions, we can effectively deal with Bayer pattern limitations and still get a high-resolution image. This is why practically all modern professional camera systems use Bayer pattern sensors.

PB: Some would say that a Bayer sensor renders the true resolution of the camera at around 70% of the sensor’s resolution (or number of photosites). Is this an overly simplified way of looking at it?

GN: Yes, it’s more nuanced than that. First, let’s look at where that 70% figure comes from: in a Bayer pattern sensor, half the pixels are green, so if you assume they’re the only ones contributing to image resolution and that there’s no optical filtration ahead of the sensor, a monochrome sensor with the same number of pixels as the Bayer sensor has green pixels would have a horizontal pixel count of 1/√2, or ~71%, of the Bayer sensor’s. To put that into solid numbers: a 4K Bayer sensor has a resolution of 4096 × 2304 = 9,437,184 pixels. Half of those pixels are green (so 4,718,592 green pixels). A monochrome sensor of 4,718,592 total pixels would be 2896 × 1629 (4,717,584 total pixels due to rounding). The horizontal resolution ratio is therefore 2896/4096 = 0.707, our ~71% figure.
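
To make that arithmetic easy to verify, here is the same derivation as a short Python sketch (the only added assumption is the 16:9 aspect ratio implied by the 4096 × 2304 figures above):

```python
from math import sqrt

bayer_w, bayer_h = 4096, 2304          # 4K Bayer sensor, 16:9
green = (bayer_w * bayer_h) // 2       # half the photosites are green

# A monochrome sensor with that many pixels at the same aspect ratio.
mono_w = round(sqrt(green * 16 / 9))   # 2896
mono_h = round(mono_w * 9 / 16)        # 1629

print(mono_w * mono_h)                 # 4,717,584 (slightly off due to rounding)
print(mono_w / bayer_w)                # 0.707..., i.e. 1/sqrt(2), the ~71% figure
```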

In the explanation above, I made two key assumptions (that resolution comes only from the green pixels, and that there’s no optical filtering), and they’re critical for understanding not just that the 70% figure is on the low side, but also that we run into other whole-system problems (remembering that the camera is the start of a chain that continues through post-production and broadcast/delivery) if we wish to take it higher.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — RED Camera
Image via New York Film Academy.

PB: How much of a role does the demosaic algorithm play in a camera’s true resolution?

GN: A good demosaic algorithm doesn’t just look at the green pixels, and that’s the first reason why the ~70% figure is on the low side of what is achievable. Demosaic algorithms generally look at an area around the pixel being determined (e.g. figuring out the green value at a red or blue pixel location) and can accurately infer the missing value not just from the surrounding pixels of the same color, but also from surrounding pixels of the other colors.
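
To make the idea concrete, here is a deliberately naive demosaic sketch in Python (not RED’s algorithm; just the simplest possible bilinear reconstruction of the green channel on an assumed RGGB mosaic, using only same-color neighbors). A good algorithm improves on exactly this step by also consulting the red and blue neighbors:

```python
import numpy as np

def green_bilinear(mosaic: np.ndarray) -> np.ndarray:
    """Fill in green at the red/blue sites of an RGGB mosaic by
    averaging the four direct neighbors (which are all green sites)."""
    h, w = mosaic.shape
    green = mosaic.astype(float).copy()

    # In an RGGB pattern, green sites are where row + column is odd.
    ys, xs = np.mgrid[0:h, 0:w]
    is_green = (ys + xs) % 2 == 1

    # Average the up/down/left/right neighbors at every position.
    pad = np.pad(green, 1, mode="edge")
    neighbors = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                 pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0

    green[~is_green] = neighbors[~is_green]  # replace non-green sites
    return green
```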

PB: What is the role of the OLPF when it comes to resolution?

GN: For the purposes of this article, “OLPF” refers to the fixed filter on the sensor that renders natural detail, not the interchangeable spectral filter that RED commonly refers to as the “OLPF” (such as the low-light optimized or skin tone-highlight variants).


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — RED
Image via Film and Digital Times.

All sensors are sampled systems, so we have to obey sampling theory or suffer the consequences. Sampling theory tells us the resolution at which we need to sample a signal (in our case, an image) to avoid aliasing (unwanted artifacts where high-frequency signal content folds back into the sampled signal, causing distortion). In a camera, we use an Optical Low-Pass Filter (or OLPF) to remove high-frequency content from the image before it hits the sensor, stopping unwanted aliasing artifacts.
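
As a one-dimensional illustration of that folding (a minimal Python sketch with made-up frequencies, not camera code): a frequency above the Nyquist limit produces exactly the same samples as a lower one, so once sampled, the two are indistinguishable.

```python
import numpy as np

fs = 100.0                       # sampling rate; Nyquist limit is fs/2 = 50
t = np.arange(0, 1, 1 / fs)      # 100 sample instants

f_high = 70.0                    # above Nyquist: this will alias
f_fold = fs - f_high             # folds back to 30

high = np.sin(2 * np.pi * f_high * t)
fold = np.sin(2 * np.pi * f_fold * t)

# The samples of the 70-cycle signal match an inverted 30-cycle signal
# exactly, so nothing downstream of sampling can tell them apart.
print(np.allclose(high, -fold))  # True
```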

These OLPFs have the imaging benefit of helping us avoid the worst aliasing artifacts in our images, but they can only do so by lowering the ultimate resolution of the image. An OLPF strong enough to entirely eliminate aliasing would blur the image to the point of being unacceptable. Conversely, omitting the OLPF would inevitably lead to aliasing artifacts that ruin an image. (In the stills world, there are cameras that omit the OLPF, but aliasing is not as problematic in a still because the problems can often be fixed by hand in an image processing application, and for very high-resolution sensors, diffraction and lens MTF have a much larger effect on the detail passed on to the sensor.)

It’s also important to remember that aliasing artifacts are not just visually problematic: they also confuse broadcast compression encoders, forcing them to waste bit allocation on non-image detail and lowering the available bandwidth for the real image detail you want the viewer to appreciate. Aliasing also moves in the opposite direction to real motion, further confusing any compression encoder that works with motion analysis.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Camera on Set
Image via Cams.net.

The OLPF requirements for a Bayer sensor are slightly different from those of a monochrome sensor, but the OLPF is still necessary in both cases. With a Bayer sensor, we need to account for the red and blue channels being of a lower resolution and for the fact that we’re not expecting to extract 100% resolution. The OLPF also helps make the job of the demosaic algorithm easier. This means we shouldn’t be comparing the measured image resolution of a Bayer pattern sensor (either via a simple model at ~71% or a real-world measured figure of ~80%) to a monochrome sensor at 100%, because the necessary optical filtering in both cases sets a reasonable upper limit on resolution.

This leads us back to how the disadvantage of the Bayer pattern becomes advantageous at higher resolutions. At lower resolutions, we can’t sacrifice enough resolution to allow for both a good output image and the necessary optical low-pass filtering. As resolution increases and sensors get larger, we gain the cinematic image and are able to filter adequately to avoid the worst of aliasing while still keeping great image detail.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Sensor Vector
Image via Shutterstock.

When we look at the perception of resolution, we’re much less concerned with the finest detail a system can resolve. We can better characterize the resolution of an image through the Modulation Transfer Function (MTF) than through a single number that represents the limiting resolution and our ability to perceive the finest detail. If we instead look at the MTF, specifically the resolution at which image contrast has been reduced by 50% (MTF50), we find that number corresponds much more closely to our perception of image sharpness. The MTF50 figure can be improved by removing the optical low-pass filtering, but only at the expense of aliasing. This means that if you want a cinematic image, high MTF50, and low aliasing, the best solution is a high-resolution single-sensor camera using a Bayer pattern, and that’s why such a camera design has been embraced not just by RED, but by the entire industry.
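
As a sketch of how MTF50 is read off an MTF curve (the Gaussian curve below is purely illustrative, not a measurement of any real camera):

```python
import numpy as np

freqs = np.linspace(0.0, 0.5, 501)    # spatial frequency, cycles/pixel
mtf = np.exp(-(freqs / 0.35) ** 2)    # toy MTF model for illustration

# MTF50: the frequency at which contrast has dropped to 50%.
mtf50 = freqs[np.argmin(np.abs(mtf - 0.5))]
print(f"MTF50 ~ {mtf50:.3f} cycles/pixel")  # ~0.291 for this toy curve
```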

PB: Can you explain the difference between luma resolution and chroma resolution? Does one lend more sharpness or perceived image quality than the other?

GN: Efficient transmission of digital images often relies upon “chroma subsampling” where the color or “chroma” of an image is transmitted at a lower resolution than the black-and-white or “luma” component of an image. To make the luma and chroma components, a transform from RGB to Y’CbCr is used, where Y’ is the luma component and Cb and Cr are the chroma components.

The reduction of resolution in the chroma components is barely noticeable, if at all, because the human visual system is much more sensitive to luma differences than to color differences. The Bayer pattern similarly exploits this facet of the human visual system, placing much more emphasis on luma detail (via the green pixels, which make up half the total number in the array) than on the red and blue pixels (which each make up a quarter of the total).
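
As a concrete sketch of that luma/chroma split (assuming the BT.709 luma coefficients, since the interview doesn’t name a specific standard):

```python
# R'G'B' -> Y'CbCr using BT.709 luma coefficients (an illustrative choice).
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB  # 0.7152

def rgb_to_ycbcr(r: float, g: float, b: float):
    """Inputs in [0, 1]; returns (Y', Cb, Cr) with chroma centered on 0."""
    y = KR * r + KG * g + KB * b      # luma: kept at full resolution
    cb = (b - y) / (2 * (1 - KB))     # chroma: safe to subsample, e.g.
    cr = (r - y) / (2 * (1 - KR))     # half resolution each way for 4:2:0
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 0.5, 0.25))
```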


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Histogram
Image via Shutterstock.

Looking at things the other way, say we had a 3-chip camera, each chip with 2K resolution: that’s 3 × 2048 × 1152 = 7,077,888 total pixels. We’ll have a 2K image from that configuration, but it’ll come from a smaller sensor than we generally use for a cinematic image, due to the necessary prism, and we’ll have to optically filter to avoid the worst of aliasing, giving us a sub-2K measured resolution. We won’t have to filter as much as with a Bayer pattern sensor, but the filter still needs to be there. Now let’s take those pixels, remove the prism, and reconfigure them as a single Bayer pattern sensor: 3547 × 1995 (7,076,265 total pixels due to rounding). Even using the simple math of ~71% for the measured resolution of a Bayer sensor, that gives us a measured horizontal resolution of ~2500. So we can now see how a Bayer pattern exploits the human perceptual system to offer greater resolution than a 3-chip approach. We also get the simpler optical path of a single-sensor system and, if we keep the pixel size the same, a larger sensor. All of that leads to a more cinematic image.
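
Reproducing that comparison in Python (again assuming a 16:9 layout for the single Bayer sensor):

```python
from math import sqrt

total = 3 * 2048 * 1152                # 3-chip pixel budget: 7,077,888

# The same pixels reconfigured as one 16:9 Bayer sensor.
bayer_w = round(sqrt(total * 16 / 9))  # 3547
bayer_h = round(bayer_w * 9 / 16)      # 1995
print(bayer_w * bayer_h)               # 7,076,265 (rounding)

# Even at the conservative ~71% figure, the Bayer sensor measures ~2500
# horizontally, versus sub-2K for the optically filtered 3-chip system.
print(round(bayer_w / sqrt(2)))        # ~2508
```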

PB: The majority of the time, we’re oversampling our resolution for a lower display output. Is there a point of diminishing returns with resolution as it relates to final output size? How does oversampling affect the final output image (i.e. shooting 8K for a 2K theatrical projection)?

GN: Oversampling brings real benefits to the image. “Just” sampling can work very well (i.e. shooting 4K for 4K), but upsampling is something I see as problematic. By capturing at a very high resolution, you can afford to optically filter (to avoid the worst of any possible aliasing) knowing that the downsampled image (be it 4K or 2K) will still be full of detail and appear sharp. In the process of downsampling, noise is reduced in absolute level but also becomes more grain-like in texture, making any noise that is still visible in the image much less objectionable, or even aesthetically pleasing.
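
Here is a minimal sketch of the noise half of that argument, using pure synthetic noise and a 4:1 block average standing in for an 8K-to-2K downsample:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(2048, 2048))  # synthetic sensor noise

# Downsample 4:1 in each dimension by averaging 4x4 blocks.
n = 4
down = noise.reshape(2048 // n, n, 2048 // n, n).mean(axis=(1, 3))

# Averaging n*n independent samples cuts the noise level by a factor of n.
print(round(noise.std(), 3))  # ~1.0
print(round(down.std(), 3))   # ~0.25
```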


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — RED Lens
Image via Film Riot.

Putting this the other way around: if you want a sharp image (with real, sharp image detail rather than artificial electronic enhancement) and low or no aliasing, the best way to achieve that is to start with a much higher resolution than you need, apply proper optical low-pass filtering, and downsample.

PB: It seems to me that the issue of resolution is much more nuanced and complicated from camera to camera than a specific number printed on the outside of a box. How subjective is the issue of resolution? Do you think that there is a level of it that is an issue of personal taste and the image you’re trying to achieve?

GN: Some aspects of resolution are certainly objective. The problem is that simple objective numbers don’t immediately translate into knowledge of what the image will look like. To describe a sensor as 8K Bayer pattern is a very reasonable description of how many pixels that sensor has, and it certainly gives you an idea of the resolution of the system, but it doesn’t tell you what the measured resolution of an image will be, taking into account lens, optical low pass filtering, and demosaic. If you want to know that, you have to shoot the camera and measure images.

The major nuance of resolution is that, in a sampled system, it goes hand-in-hand with aliasing: the more resolution you try to squeeze out of a given system, the greater the propensity for aliasing. It’s tempting to shoot a scene with and without an OLPF, marvel at the extra detail the OLPF’s removal provides, and think that’s how a camera should be. The problem is that for high-quality motion imagery, we don’t know in advance what type of scene the camera will be pointed at, and it’s all too easy to find repetitive patterns (a brick wall, for instance) that alias badly. Once those aliases are embedded in the image, you’re pretty much stuck with them. There’s a responsibility that comes with designing a camera to make good decisions, and I think that designing high-resolution systems with appropriate optical low-pass filtering, to achieve high measured resolution with low aliasing, is the correct approach.


Exclusive Interview: The Secrets Behind RED Sensors and Resolution — Sensor Chart
Image via RED.

PB: Any upcoming projects you are able to tell us about, or anything else you’d like to share?

GN: Recent productions shot on RED include Guardians of the Galaxy 2, Stranger Things 2, Victoria and Abdul, Wonder, The Disaster Artist, Mindhunter, and The Punisher — among many others.


Looking for more interviews with filmmaking pros? Check out these articles.
