# Printing

I have already begun the discussion of printing in some of the previous pages in this sequence on the Zone System, but now it is time to dive in with both feet. In some ways, it might have been better if the originators of the “Zone System” had called it the “Value System” instead. The notion of a Zone is all about exposure, and, as I’ve mentioned repeatedly in this sequence, the intellectual process at the time of capture is, first, about choosing a scene Value to map to an exposure Zone, and second, planning how the results of this mapping will be handled in overall processing on the way to making a print.

In this way, one can consider the Values present in the original scene. These are the luminances reflected or emitted by the various elements of the scene, and together they span a certain range of light. Once we capture an exposure, the RAW image file consists of pixels with certain digital Values. These digital Values are an array of binary numbers; they may be 12-bit or 14-bit numbers (or something else) depending on the DSLR and its settings. In the days of film, instead of abstract file Values, the photographer would have a negative that could be developed with one form of processing or another (N+2, … N, … N−2, etc.) to control global aspects of the image, such as contrast and tonal compression or expansion. Now, digital photographers choose a color space to work in, and when a RAW file is imported into an editing program like Photoshop, through a RAW converter like Adobe Camera Raw, each pixel’s RAW Values are mapped onto new Values that depend upon the color space’s tone curve and color primaries. Like the original RAW file’s pixel Values, these pixel Values in the Photoshop file (or Lightroom or Aperture or Capture One or…) are abstract, internal numbers. One can look at them with “Info” panels, color pickers, or other tools within the program, but when we look at the image on our screen in a program like Photoshop, we are not seeing those Photoshop file Values directly, even though it might look that way.

Something else is going on. Let’s say we have a neutral Value in a Photoshop file of 128, 128, 128 for some pixel or set of pixels. Perhaps we made this set of pixels ourselves by filling a square with that Value of gray. But when we see this on our screen, we don’t see a square with little “128”s floating there; we see gray pixels. Our screen is emitting some amount of light relative to its maximum within the boundaries of this square. Some part of the computer’s electronics is putting some voltage to the red, green, and blue pixels within that boundary. Hence, some functions in the computer have been called upon to choose that set of voltages for red, green, and blue based upon the Photoshop file’s “128” Value. But we have already learned that what this 128 Value means is different for files with different color spaces. I invite you to check this out for yourself. Make a new Photoshop file in sRGB and make a square filled with 128, 128, 128. Now convert that file to AdobeRGB and then to ProPhotoRGB and see what Photoshop tells you the Value is. For the record, the AdobeRGB result is 127, and the ProPhotoRGB result is 109. However, the little gray square looks the same. The voltages being applied to the screen pixels haven’t changed (or at least they shouldn’t). You can confirm this yourself with utilities like Apple’s DigitalColor Meter app.
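You don’t have to take Photoshop’s word for those numbers. Treating AdobeRGB’s tone curve as a simple gamma of 2.2 and ProPhotoRGB’s as gamma 1.8 (idealizations, but close enough here; only the neutral axis matters for a gray, so the primaries drop out), the conversion can be sketched like this — the function names are mine, not Photoshop’s:

```python
def srgb_to_linear(v):
    """Decode an 8-bit sRGB value to linear light (0..1), using sRGB's
    piecewise transfer function."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def encode_gamma(linear, gamma):
    """Re-encode linear light with a simple power-law tone curve."""
    return round(255.0 * linear ** (1.0 / gamma))

linear = srgb_to_linear(128)          # ~0.216 of maximum luminance
print(encode_gamma(linear, 2.2))      # AdobeRGB: 127
print(encode_gamma(linear, 1.8))      # ProPhotoRGB: 109
```

Same light, three different numbers — which is the whole point: a file Value only means something relative to its tone curve.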

In other words, what you are seeing on your screen is yet another interpretation of the Values in the Photoshop file. Sometimes “128” means one thing; another time, it means something else. To see this for yourself, make two files, one in sRGB and another in ProPhotoRGB. Fill a square in each with a gray with Values of 128. Set the two up side by side on your screen. Here’s what this looks like on my screen:

Neutral gray 128

The native RGB Values for my monitor are “135” for the sRGB gray and “154” for the ProPhotoRGB square. These correspond to L* values of about 53 and 60 respectively, and my monitor is calibrated for an L* curve. The screen shot that I took, and that is included above, was saved as a JPG file; hence, it is tagged with an sRGB tone curve. If your screen is uncalibrated, or calibrated for sRGB, then the Value being presented to the pixels on your monitor will be closer to the original file’s “128”. Most uncalibrated screens will be close to an sRGB curve, as set by the manufacturer. So you see, what is sent to a screen depends on a combination of the pixel Value in the file, the file’s tone curve, and the monitor’s tone curve.
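The arithmetic behind those L* figures is worth spelling out. On a monitor calibrated to an L* curve, the 8-bit value sent to the screen is linear in L*, so L* is just the value rescaled to 0–100. A minimal sketch (the function name is mine, and real calibration also involves white point adjustments, which this ignores):

```python
def lstar_of_screen_value(v):
    """On a monitor calibrated to an L* tone curve, an 8-bit screen
    value v maps straight to L* = 100 * v / 255."""
    return 100.0 * v / 255.0

print(round(lstar_of_screen_value(135)))  # ~53, the sRGB square
print(round(lstar_of_screen_value(154)))  # ~60, the ProPhotoRGB square
```

This is why an L*-calibrated screen is so convenient for print work: the numbers you read off the monitor are L* values directly.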

If you were doing a lot of image work for Internet presentation, you might want to calibrate your monitor for the “average” Internet monitor; namely, a white point of 6500K and an sRGB color space. That way, you would see your images the same way that most viewers would see them. If your working space were also sRGB, then you would have the happy correspondence that the RGB Values in your Photoshop files should equal the voltage values going to your monitor when your computer renders images.

If, instead, you are doing more print work, you might want to calibrate your screen for a 5000K white point and an L* tone curve. You would then be closer to the viewing conditions of a print. Screen rendering becomes, to a greater extent, a simulation of the print process. If your image file’s working space also has an L* curve (for example, LAB or LStar-RGB), then you have the happy correspondence that the RGB Values in your Photoshop files should equal the voltage values going to your monitor when your computer renders images, just as in the previous case.

On the other hand, if you calibrate your screen for sRGB (or don’t calibrate it), and you work in AdobeRGB, and you view your prints under ordinary incandescent lamps, you are going to have a serious mismatch between Values at every step of your workflow. Your whites will not agree, your tone curves will not agree, your colors will not agree. Your prints cannot possibly look like your screen images.

As I have been emphasizing, luminance Values in scenes are mapped by the camera to Values in RAW files; these are mapped in RAW conversion to Values in working files with working color spaces; these are mapped to ink Values by the printer; and those are finally mapped back to luminance Values when viewing the print, by the combination of lighting, paper, and ink density. This should be called a Value System and not a Zone System, as I say. Value-to-Zone mapping happens at just one stage of the entire chain.

We might say that Value-to-Voltage mapping happens as a kind of sideline event during this sequence. That’s when we visually render the image file on a monitor during post-processing. Assuming that a print is the final outcome of the process, we want this Value-to-Voltage mapping to simulate viewing the final print as closely as possible. The first lesson in printing is not about printing; it is about monitor calibration.

There are many systems out there that can calibrate your monitor. It is often written that using any one of them is better than not using anything at all. I am not of this opinion, if you are interested in printing yourself. If your primary interest is delivering content to the Internet, then yes, get any one and use it. Calibrate your monitor for the default standard (sRGB, D65) and go to town. If on the other hand, you want to print, I’d say that you should work predominantly in either LAB or LStar-RGB (or make your own RGB color space or gray gamma space as I describe elsewhere), calibrate your monitor to an L* profile, and calibrate your printer.

What will all of this do for you? Value V in a working file will be Value V on your monitor, and this will be Value V in your JPGs as well. Take a look at this screen shot:

2 level screen shot, calibrated

It may be a little confusing at first blush. It is a screen shot of a screen shot. The “inner” screen shot is of a gray square in Photoshop with a working color space of LStar-RGB. The square was filled with 128, 128, 128. As the Info panel shows, its LAB L* value is 50. The DigitalColor Meter app is showing the corresponding screen value in LAB mode, and because I’ve calibrated my screen to an L* profile, the native screen value is very nearly equal to the Photoshop file value. There is some adjustment to get my white point right. Next, I took a screen shot of this situation and opened it in the Preview application. I then sampled the value of this JPG file, again using the DigitalColor Meter app. Once again, I took a screen shot of this. That is what is displayed above. It shows that not only was the value I saw on my screen while using Photoshop consistent with Value V, but the screen shot that I made also kept a consistent Value, apart from white point adjustments.

Now, for grins, I turned off my screen calibration and just used the built-in iMac screen profile. I then set up a ProPhotoRGB file (because I love love love big color gamuts) and mistakenly decided that a neutral 128, 128, 128 gray was middle gray. (Actually, Photoshop would make precisely this mistake if you asked it to fill a surface with 50% gray. That is, Photoshop always uses 128, 128, 128 for 50% gray, no matter what the color space is.) I displayed the results and took a screen shot. Then, I opened the screen shot and checked the results once more. Here they are:

2 level screen shot, uncalibrated

First, you’ll notice that there’s no attempt to adjust neutral values for any white point. Also, there is no obvious relationship between the L* value of 61, the Photoshop value of 128, and the screen value of 139 (which corresponds to an L-value of 54.5, by the way). However, with my screen calibration turned back on, here’s what I see in Photoshop running that same ProPhotoRGB file:

ProPhotoRGB capture

I trust that you appreciate this result. Now, what is displayed on my screen is the correct L* value: $155/255 \times 100 \approx 61$. In other words, what I am seeing on my screen is showing me what is true about ProPhotoRGB; namely, that 128 on its tone curve is not at Value V! It is higher than L* = 50 by about 1/2 stop or so.
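We can check that figure of 61 independently. Assuming ProPhotoRGB’s tone curve is a simple gamma of 1.8 (ignoring its small linear toe near black), decoding 128 to relative luminance and converting to CIE L* gives the same answer; the function names here are mine:

```python
def prophoto_to_linear(v):
    """Decode an 8-bit neutral ProPhotoRGB value to relative
    luminance Y (0..1), assuming a pure gamma-1.8 curve."""
    return (v / 255.0) ** 1.8

def linear_to_lstar(y):
    """Relative luminance Y (0..1) to CIE L*; valid for Y above ~0.009."""
    return 116.0 * y ** (1.0 / 3.0) - 16.0

print(round(linear_to_lstar(prophoto_to_linear(128))))  # ~61, not 50
```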

Suppose I was working in ProPhotoRGB on my images in Lightroom, because that’s all that Lightroom lets me do. Suppose I do not calibrate my monitor. But my eyes know what Value V is. My eyes are calibrated. So, I edit my files and I get my mid-tones smack where I want them by editing my images beautifully in Lightroom. Then I print them on my uncalibrated printer. Value V in ProPhotoRGB should really be about 100 (out of 255). Do you see an opportunity here for getting print values that show up darker than screen values?
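Running the same assumed gamma-1.8 math in the other direction shows where middle gray really lives in ProPhotoRGB; again, this is a sketch with my own function names, not Photoshop’s exact implementation:

```python
def lstar_to_linear(lstar):
    """CIE L* to relative luminance Y (0..1); valid for L* above ~8."""
    return ((lstar + 16.0) / 116.0) ** 3

def prophoto_encode(y):
    """Encode relative luminance with ProPhotoRGB's assumed
    gamma-1.8 tone curve (ignoring its small linear toe)."""
    return round(255.0 * y ** (1.0 / 1.8))

print(prophoto_encode(lstar_to_linear(50.0)))  # ~100: Value V in ProPhotoRGB
```

So if your eyes pushed the mid-tones to 128 on an uncalibrated screen, the file is carrying tones roughly half a stop lighter than you think, and the print will come back darker than the screen.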

I hope that you do. Lesson one in printing is calibrate your screen. Better yet, calibrate it for your eyes.