# Zones, tones, and tone curves

In the previous page, I ran through the concept of exposure Zones. I used the LAB color space in Photoshop, and specifically the L-channel of that color space, to analyze the Values corresponding to each exposure Zone. The reason for choosing the L-channel of LAB is that it has been carefully designed to echo the manner in which human vision works: its midpoint (L=50) corresponds to the luminance of an 18% diffuse neutral surface. [BTW, if you are new to LAB, I strongly recommend two volumes by Dan Margulis: *Photoshop LAB Color* and *Modern Photoshop Color Workflow*.]

However much I might like LAB though, I understand that the world revolves on the axis of RGB color, and not LAB. There are many problems with RGB color spaces, especially in the context of B&W work; but these are almost universally ignored. It therefore behooves me to tie LAB luminance values (L-Values), which make sense, to RGB values, which don’t. The first thing to observe is that there is no single mapping between something as simple as an L-Value and a neutral RGB-Value. The conversion depends upon the tone curve of the RGB color space that you are considering. All of the RGB color spaces in common use have been designed around the performance of computer monitors. This means that they are, essentially, part of certain standards for equipment manufacture. Unlike LAB, they have nothing to do with how human visual perception works, and hence, nothing to do with how a client will view a print, or even, really, how they will view an image rendered on a screen.

In a broad discussion of color spaces, the topic of color accuracy is a driving force. We talk about the color temperature of the illuminant and the values of the primaries and so forth. Here, I am going to ignore color completely and focus instead on the tone curve. You may think of the tone curve as the “backbone” of a color space. In the LAB color space, the tone curve is the relationship between L-value and Y/Yn, as I have set forth earlier in this exposition on the Zone System. This is because in LAB, the L-channel carries information about the luminance while the a and b channels carry information about color. In every RGB color space, luminance and color are tied together in the combination of the three channels. We can avoid that problem, somewhat, by considering a neutral RGB value in which all three channels have the same value. To keep things simple, that’s just what I propose to do for now.

In a DSLR sensor, the value of $Y/Y_n$ is basically the ratio of any actual pixel value to its maximum. For example, if I am shooting 14-bit RAW files, the maximum value is $2^{14}-1 = 16,383$. If some pixel had a value of 2,949, then $Y/Y_n = 2,949/16,383 = 0.18$, or so. If I convert a RAW file into the LAB color space, I expect this pixel to get the L-Value given by the mathematical equation relating L to $Y/Y_n$; namely, L = 50. [That’s not quite true for Canon and Nikon DSLRs because of the 1/2- or 1/3-stop offsets that they build in, but forget that for now.]
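As a sanity check, here is a short Python sketch of that arithmetic, using the standard CIE L* formula (the pixel value 2,949 is just the example above):

```python
# Check: a 14-bit pixel value of 2,949 is ~18% of full scale,
# and 18% relative luminance maps to an L-Value of about 50.

def lstar(y):
    """CIE 1976 lightness L* from relative luminance y = Y/Yn."""
    if y > (6 / 29) ** 3:             # ~0.008856, the cube-root/linear split
        return 116 * y ** (1 / 3) - 16
    return 903.3 * y                  # linear toe for very dark values

raw_max = 2 ** 14 - 1                 # 16,383 for a 14-bit RAW file
pixel = 2949
y = pixel / raw_max                   # ~0.18, middle gray
print(round(y, 3), round(lstar(y), 1))  # L* comes out just a shade under 50
```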

A DSLR can be thought of as a digital image input device in the context of a broad computer workflow model that goes from image capture, through editing & post processing, to image output. Another digital input device is a scanner. Digital output devices would include monitors and printers.

An output device like a monitor has a gamma correction applied to it; that gamma curve is its tone curve. The idea is fairly straightforward. Since it is known that human vision has a response curve like that of the L-channel, it is appropriate to encode the data sent to a monitor so that a 1-bit change in the data corresponds to some uniform visual change. For example, if we change the data for a set of RGB pixels from 128 to 129, from 10 to 11, or from 245 to 246 (all out of a maximum of 255), we would perceive the change in tone as the same in each instance. There is some sense in this. It is just what the L-channel tone curve does. But rather than encode the L-channel curve, it is more common to use a gamma curve to approximate the L-channel equation.
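To see why such an encoding helps, here is a small Python sketch (my own illustration, not taken from any particular standard) comparing those one-bit code steps under no tone curve at all and under a gamma 2.2 tone curve, with the resulting tonal change measured in L* units:

```python
# Compare the L* change caused by a one-bit step in an 8-bit file,
# with and without a gamma tone curve.

def lstar(y):
    """CIE 1976 lightness L* from relative luminance y = Y/Yn."""
    return 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else 903.3 * y

def linear(c):
    return c / 255                    # no tone curve at all

def gamma22(c):
    return (c / 255) ** 2.2           # gamma 2.2 decoding, AdobeRGB-style

def delta_L(code, decode):
    """L* change produced by a one-bit step at 8-bit value `code`."""
    return lstar(decode(code + 1)) - lstar(decode(code))

for code in (10, 128, 245):
    # The linear steps vary by nearly an order of magnitude across the
    # tonal range; the gamma-encoded steps are far more uniform.
    print(code, round(delta_L(code, linear), 2), round(delta_L(code, gamma22), 2))
```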

A true gamma curve is of the form

$y = A x^\gamma$

where $\gamma$ (Greek letter gamma) is the exponent of a “power-law” and $A$ is a constant, typically 1. The inverse expression is equally important; that is,

$x = (y/A)^{1/\gamma}$

For the AdobeRGB color space, the value of $\gamma$ is 2.2. For the ProPhotoRGB color space, $\gamma$ is 1.8. These gamma curves define a relationship between a $Y/Y_n$ intensity and a value sent to a screen pixel. Suppose, for example, that I want to get a middle gray value of $Y/Y_n = 0.18$ sent to a monitor that is rendering an AdobeRGB image. The corresponding input value, for this output, is $0.18^{1/2.2} = 0.459$. That is, $0.18 = 0.459^{2.2}$. Notice that the correct middle gray value to the screen is not 0.5 in AdobeRGB. Likewise, in ProPhotoRGB, the input value to yield 0.18 is 0.386.

If we look at this the other way around, a middle gray value of 128 (half of full scale) in an 8-bit AdobeRGB file will yield an output to the screen of $0.5^{2.2} = 0.218$. A ProPhotoRGB value of 128 will yield an output to the screen of $0.5^{1.8} = 0.287$.
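The arithmetic in the last two paragraphs is easy to sketch in Python, using the gamma values given above (128 is treated as half of full scale, as in the text):

```python
def encode(y, gamma):
    """Relative luminance -> gamma-encoded file value (both in [0, 1])."""
    return y ** (1 / gamma)

def decode(v, gamma):
    """Gamma-encoded file value -> relative luminance."""
    return v ** gamma

# 18% middle gray, encoded for the two color spaces discussed above
print(round(encode(0.18, 2.2), 3))   # 0.459 for AdobeRGB
print(round(encode(0.18, 1.8), 3))   # 0.386 for ProPhotoRGB

# ...and the other way around: file value 128, taken as half of full scale
print(round(decode(0.5, 2.2), 3))    # 0.218 to the screen for AdobeRGB
print(round(decode(0.5, 1.8), 3))    # 0.287 to the screen for ProPhotoRGB
```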

What are we saying here? An RGB computer monitor has some maximum values of red, green, and blue. Ignore for the moment issues of calibration. Let’s just say that if we send R = 255, G = 255, and B = 255 to the monitor, we shall get some form of white from the screen. To be a little more specific, we don’t really send numbers to the screen, we send some electrical signal that we can measure in Volts. So, these numbers for the channels will be translated into some voltage. Without too much concern, let’s say that the maximum value is 1 Volt. I can get 1 Volt to the red channel of a pixel, 1 Volt to the green channel, and 1 Volt to the blue channel. That’s what I’ll get if the RGB values are 255, 255, 255. If I have RGB values of 128, 128, 128, I’ll send some other voltage. That is, roughly, what this gamma curve is telling us. If we have a neutral RGB value of 128, and the file is an AdobeRGB file, we’ll send the monitor a voltage of 0.218 Volts for each channel of that pixel. For a ProPhotoRGB file, we’ll send 0.287 Volts. [This isn’t quite true. The monitor will also have some look-up table (LUT) that corrects for its own peculiarities. I’m trying to simplify concepts here for those of you who know more of the practicalities.]

Here are the tone curves for LAB, AdobeRGB, sRGB, and ProPhotoRGB, on one graph (for your viewing pleasure).

[Figure: tone curves for LAB, AdobeRGB, sRGB, and ProPhotoRGB]

You can see that the LAB curve is the “highest”, the AdobeRGB and sRGB curves are nearly on top of one another, and the ProPhotoRGB curve is the “lowest”. Unfortunately, none of the commonly used RGB color spaces has a tone curve that quite equals that of LAB. It is, however, quite possible to design your own color space or gray gamma space to create one; and I have shown how to do this quite simply elsewhere on this site using Photoshop. The fundamental issue that I have with these color spaces is that, after you have gone to some lengths to get middle gray right in your camera, they go ahead and mess it up. I have recommended elsewhere here using a color space with ProPhotoRGB primaries and a gamma 2.47 tone curve. The next figure shows the tone curve for such a color space superimposed on that of LAB.

[Figure: LAB and gamma 2.47 tone curves]

The match is very tight. I will frequently work in such a color space and, just before printing, convert into AdobeRGB or GrayGamma 2.2, which are compatible with the Epson printer that I use. More on printing later, though. It is a large topic.
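For the curious, a few lines of Python will spot-check how closely a pure gamma 2.47 curve tracks L*/100 (this is my own check of the claim, not a formal fit):

```python
def lstar(y):
    """CIE 1976 lightness L* from relative luminance y = Y/Yn."""
    return 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else 903.3 * y

# Compare L*/100 against a pure gamma 2.47 curve at a few sample points.
# Agreement is within a few thousandths through the midtones and highlights;
# the curves diverge a little more in the deep shadows.
for y in (0.05, 0.18, 0.5, 0.9):
    print(y, round(lstar(y) / 100, 3), round(y ** (1 / 2.47), 3))
```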

Back to monitors as output devices… Monitor calibration is extremely important for an accurate photographic workflow. Your monitor is your window into the data within your image file. Again ignoring issues of color accuracy, for B&W work, it is critically important to get middle gray right on your monitor. What is the point of having metered for Value V in the field, set your desired exposure, and committed to getting the correct print density for that Value, if, when you check the file on your monitor, Value V in your digital image file does not map to Value V on your monitor? Only calibration can set a known tone curve for your monitor. And in my humble opinion, the only tone curve worth calibrating a monitor to is the L* curve. It has this wonderful property of getting 18% gray right.

For the record, I use DataColor’s Spyder3Elite to calibrate my monitor to an L* curve. Their latest version is the Spyder4Elite. Both support an L* workflow, which I prefer. (I get no consideration for this plug. I like the product. I assume that there are other L* calibration tools available. This is just the one that I’m using.)

In Ansel Adams’ day, you could calibrate your workflow by metering off an 18% gray card, exposing a negative for that Value on Zone V, developing it, and measuring the density. In a digital dark room, you can do the same thing, but now you check your digital file by “developing” it in some color space and rendering it on a screen. After all I’ve said about 18% gray so far, does it not make sense to develop into LAB or an RGB color space with a gamma 2.47 tone curve or GrayGamma 2.47, and then to view the image on a monitor calibrated to an L* tone curve? Our digital dark room simulates the physics and the chemistry of the wet dark room; why not get the physics right while avoiding the chemicals?

If you want, you can find an L*-RGB ICC working color space here. There is an interesting thread in which this and similar color spaces are discussed.

At the end of the day, a modern, digital Zone System satisfies a few simple criteria:

• Middle gray is the reference point and it is set at 18% of maximum throughout the workflow.
• The unit of measure is the stop.
• Equal changes in file Value correspond to equal changes in output light Value on either monitor or print.
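As a worked example of those criteria, here is a sketch (my own, assuming one stop per Zone with Zone V pinned at 18% of maximum) of the relative luminance and L-Value for each Zone:

```python
def lstar(y):
    """CIE 1976 lightness L* from relative luminance y = Y/Yn."""
    return 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else 903.3 * y

# One stop per Zone; Zone V pinned at Y/Yn = 0.18 (middle gray).
# Zones above VII would exceed 1.0 at this anchor, so clip at white.
for zone in range(1, 8):
    y = min(0.18 * 2 ** (zone - 5), 1.0)
    print(zone, round(y, 4), round(lstar(y), 1))  # Zone V lands near L = 50
```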

Now, if only someone would put L* into a printer.