The Linearity of RAW

For better or worse, I have access to only one professional-grade DSLR body; namely, my wonderful Nikon D700. This limits my comments on the linearity of RAW captures to this particular camera. The reader could probably extrapolate reasonably easily to the performance of very similar bodies, such as the Nikon D3. Perhaps other Nikon cameras like the D4, D800, and D600 have similar characteristics; however, I would be hesitant to make any claims about Canons, and especially about medium format backs and bodies from Phase One, Hasselblad, Mamiya, and so on.

In the previous page on Zones, I showed the results of a set of captures of a very simple subject; viz., an outside wall of my home. I provided a set of graphs showing the relationship between Values in Photoshop and the camera’s exposure values in terms of Zone. Set up in this way, the data did not begin to reveal the full range of the D700. Here, I want to explore the D700’s exposure range in more detail.

So, I went back to my outside wall and repeated my measurements with my camera set for both 12-bit and 14-bit RAW, both with compression disabled and with lossless compression enabled. I used a Nikkor 50mm f/1.8 lens on my D700 for this set of measurements. I set my camera to manual mode with spot metering, and I put the metering point roughly centered in the image, in a region without any panel shadow. With the camera mounted on a tripod and the aperture fixed at f/5.6, I obtained my 0EV reference exposure by picking the shutter speed that best hit the 0 reference. I chose f/5.6 in order to have the best range for lower exposure values, since I had already discovered that the camera had a broader range below 0EV than above it. This gave me f/8, f/11, f/16, and f/22 to reduce exposure, and f/4, f/2.8, and f/2 to increase it. For additional EV steps, I used changes in shutter speed. On this particular day, the sky had broken cloud cover, so there was some fluctuation in ambient brightness. I found that 1/640s was the most stable shutter speed for the 0 reference, but 1/800s would also occasionally show up. In other words, I cannot claim that my 0 reference here is more accurate than around ⅓ stop or so, perhaps with a bias toward being a touch hot. All my exposures were captured at the base ISO of 200 to obtain the widest dynamic range.
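
To keep the EV bookkeeping straight across all of these aperture and shutter combinations, a small helper like the one below can be used. This is a minimal sketch in Python; the function name and the choice of f/5.6 at 1/640s as the built-in reference are my own conveniences, not anything taken from the camera or its software.

```python
import math

def ev_offset(f_number, shutter_s, ref_f=5.6, ref_shutter_s=1/640):
    """Exposure offset, in stops, of an aperture/shutter combination
    relative to a reference setting (f/5.6 at 1/640s by default).

    Positive values mean more light reaches the sensor than at the
    reference; negative values mean less.
    """
    # The photographic exposure value is EV = log2(N^2 / t); a higher EV
    # setting admits less light, so the offset is the reference EV minus
    # the EV of the setting under test.
    ev_setting = math.log2(f_number ** 2 / shutter_s)
    ev_reference = math.log2(ref_f ** 2 / ref_shutter_s)
    return ev_reference - ev_setting

# Nominal f-numbers are rounded, so the steps land near, but not exactly
# on, whole stops:
print(round(ev_offset(8, 1/640), 2))     # about -1.03 (one stop down)
print(round(ev_offset(4, 1/640), 2))     # about +0.97 (one stop up)
print(round(ev_offset(22, 1/8000), 2))   # about -7.59 (near the -7 2/3 floor)
```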

I inspected the NEFs that I obtained using the RawDigger application. I set up RawDigger to provide an average reading for a selection at my metering point. You can see the average values per RGBG channel within the selection in the center part of the panel. For example, the red channel average in the selection is 930.3 with a standard deviation of 88.4. There are two green channels because the RAW data keeps distinct values for the two green pixels in each four-pixel cell of the Bayer filter. For the results shown here, I simply averaged all four channel readings for each exposure. This is not quite the same as achieving a properly white-balanced gray, but it is one method of obtaining a gray scale. Another reasonable approach would have been simply to use the average of the two green channels. Using the average of all channels yields a 0EV reference value of 8.5%, while using only the green channels yields a reference value of 10.5% instead. The difference is about 0.3 stop.
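
As a minimal sketch of that bookkeeping (the helper names are mine, and the only concrete numbers used are the 8.5% and 10.5% reference levels quoted above):

```python
import math

def gray_estimates(r, g1, b, g2):
    """Collapse the four Bayer channel averages (R, G1, B, G2) into the
    two simple gray estimates discussed above: the mean of all four
    channels, and the mean of the two green channels only."""
    return (r + g1 + b + g2) / 4.0, (g1 + g2) / 2.0

def stops_between(a, b):
    """Separation of two readings, expressed in photographic stops."""
    return math.log2(b / a)

# The two 0EV reference levels quoted above, as fractions of full scale:
print(round(stops_between(0.085, 0.105), 2))   # 0.3 stop between 8.5% and 10.5%
```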

Here is an example of a 0EV NEF from within RawDigger:

RawDigger screen shot

You can see from the panel at the upper left that the exposure setting for this NEF was 1/640s, f/5.6, ISO 200. As mentioned, I dropped exposure using aperture settings down to f/22 first, and then I continued using shutter settings to 1/8000s. This gave me readings down to -7⅔ EV. I then proceeded back up in EV, again by aperture value to f/2, and then by shutter speed to +5EV.

I repeated this experiment for all settings of Active D-Lighting, compression, and RAW bit depth (12 or 14). I could not find any appreciable difference in the resulting curve, provided the RAW values were normalized to the appropriate maximum (2^12-1 = 4095 or 2^14-1 = 16,383). You can see the results here, in which both the 12-bit and the 14-bit data are given. If anything, the 12-bit data seem to show less of a “toe” at the lowest EV settings than the 14-bit data.
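
Putting the two bit depths on a common footing is simply a division by the corresponding full-scale value; here is a minimal sketch (treating the red-channel average quoted earlier as coming from a 14-bit NEF is my assumption, for illustration only):

```python
def normalize_raw(mean_value, bits):
    """Express a RAW channel average as a fraction of full scale for the
    given bit depth: 2^12 - 1 = 4095 for 12-bit NEFs, or 2^14 - 1 = 16383
    for 14-bit NEFs."""
    full_scale = (1 << bits) - 1
    return mean_value / full_scale

# The red-channel average quoted earlier, assuming a 14-bit file:
print(round(normalize_raw(930.3, 14), 4))   # about 0.0568, i.e. ~5.7% of full scale
```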

Linearity of RAW exposure

A linear fit to this data, performed on base-10 log values of the RAW averages, gives a slope of 1.037 with a correlation coefficient of 0.999, provided the top and bottom two data points are dropped. The graph shows very linear performance over a 12-stop range. There is one anomaly, however: the value obtained at 0EV is about 8.5%, instead of the Holy 18.4%. I can say that on other occasions of performing this test, I have found the 0EV average at 7%. I put this variability down to fluctuations in the ambient light, which of course comes from indirect sunlight under partial cloud cover; I cannot control for variations in the light at each EV step. The differences between 7% or 8.5% and the Holy 18.4% are 1.39 or 1.11 stops, respectively.
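
For anyone who wants to reproduce the fit, it amounts to a straight-line regression on the logged RAW averages. Here is a minimal sketch with numpy; the data series below is an idealized, perfectly linear placeholder for the measured values, and expressing the slope per stop (by dividing out log10 of 2) is my assumption about how a slope near 1 should be read.

```python
import numpy as np

# Placeholder series: EV offsets and normalized RAW averages for an
# idealized, perfectly linear sensor pinned to 8.5% at 0EV.  The real
# measured averages would be substituted here.
ev = np.arange(-6.0, 4.0)            # -6 EV ... +3 EV in whole stops
raw_mean = 0.085 * 2.0 ** ev

log_raw = np.log10(raw_mean)

# Straight-line fit of log10(RAW average) against EV.  Dividing the slope
# by log10(2) expresses it per stop, so an ideal sensor gives exactly 1.0.
slope, intercept = np.polyfit(ev, log_raw, 1)
r = np.corrcoef(ev, log_raw)[0, 1]
print(round(float(slope) / float(np.log10(2)), 3))   # 1.0 for the ideal series
print(round(float(r), 4))                            # 1.0 (perfect correlation)
```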

This curve can also be shown in a more traditional way in terms of density, using the formula Density = -log10(Yn/Ymax), where Yn is the average RAW reading and Ymax is the full-scale (saturation) value. (You can jump ahead to the page on Density if you are unfamiliar with this concept.)
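
In code, the density conversion is a one-liner; here is a minimal sketch using the two reference levels discussed in the text:

```python
import math

def density(fraction_of_full_scale):
    """Photographic density of a normalized reading: D = -log10(Yn/Ymax).

    A reading at full scale has density 0, and each factor-of-10 drop in
    the reading adds 1.0 to the density.
    """
    return -math.log10(fraction_of_full_scale)

print(round(density(0.085), 2))    # ~1.07, the 0EV level found here
print(round(density(0.184), 3))    # 0.735, the "expected" 18.4% middle gray
```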

RAW values expressed as Density

This graph is almost the opposite of a film negative H-D curve in the sense that density decreases with increasing exposure; that is, the camera provides a “positive” representation of light rather than a “negative” one. The other fundamental difference is the extended linear range shown here. Finally, we have the odd calibration factor that gives a density of about 1.07 instead of the expected 0.735, corresponding to about 8.5% instead of 18.4% for the middle gray exposure reference.

Presumably, this reduced exposure reference was engineered into the camera to provide greater overhead against blown highlights. Elsewhere in my blog, I’ve provided references suggesting that most DSLR manufacturers build in a ⅓ to ½ stop overhead; here, however, we’re seeing more than a full stop of overhead.

One observation about this NEF data, expressed in terms of density, is that the maximum densities are about one density unit (a factor of 10) higher than what can be achieved by ink jet printers on typical high-quality papers.

Next, I brought each of these NEFs into Photoshop CC as 16-bit images without any adjustments in Adobe Camera Raw. I applied a 200-pixel Gaussian blur and then took a 101×101-pixel average reading near the metering point in the image. I read off both the 16-bit L* and 16-bit K values, and then normalized the readings to percentages. Here is a graph of the results:
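
The blur-and-average measurement itself is easy to approximate outside Photoshop. Here is a rough sketch with numpy and scipy; treating Photoshop’s 200-pixel blur radius directly as a Gaussian sigma is only an approximation, the sample coordinates are made up, and the image is presumed to have already been loaded as a single-channel 16-bit array.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def patch_average(channel, center_row, center_col, half=50, blur_sigma=200):
    """Blur a single-channel image and average a 101x101 patch around a
    chosen point, mimicking the blur-then-sample measurement above."""
    blurred = gaussian_filter(channel.astype(np.float64), sigma=blur_sigma)
    patch = blurred[center_row - half:center_row + half + 1,
                    center_col - half:center_col + half + 1]
    return patch.mean()

# Hypothetical usage on a 16-bit grayscale array `img`:
# reading = patch_average(img, 2128, 3192)    # metering point (made-up coordinates)
# percent = 100.0 * reading / 65535.0         # normalize the 16-bit reading to a percentage
```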

L* and K versus EV

Clearly, the transformation into the usual working format in an image editor has applied a nonlinear compression function to the RAW data. The L* and K curves within Photoshop are obviously not identical, but they are close, with the K values giving a reading closer to the intended 50% middle gray. The L* values appear to be almost exactly ½ stop hot on this curve. These curves look much more like the H-D curves of the film days, with a significantly reduced linear range compared to the original image data. However, from other results that I’ve shown in this section on Zones, it is apparent that the information in both the shadows and the highlights can be recovered using applications like Lightroom, Adobe Camera Raw, Capture One, Capture NX2, Aperture, DxO Optics, and other RAW processors.
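
The shape of that compression can be illustrated with the standard CIE L* transfer function; a minimal sketch, under the assumption that the editor’s L* readout follows the CIE formula applied to linear relative luminance:

```python
def cie_lstar(y):
    """CIE 1976 L* as a function of linear relative luminance y in [0, 1]."""
    eps = 216.0 / 24389.0      # ~0.008856, the linear/cube-root crossover
    kappa = 24389.0 / 27.0     # ~903.3, the slope of the linear segment
    if y > eps:
        return 116.0 * y ** (1.0 / 3.0) - 16.0
    return kappa * y

# A linear 18.4% gray maps to roughly L* = 50, while the ~8.5% level
# measured at 0EV lands nearer L* = 35; a strongly nonlinear compression
# of the linear RAW scale.
print(round(cie_lstar(0.184), 1))   # ~50.0
print(round(cie_lstar(0.085), 1))   # ~35.0
```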
