Most of what you read about the Zone System for digital is wrong

T. S. Eliot said,
“We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.”

I have been looking over, yet again, material on the tubes of the Internet about the Zone System in the context of digital processing; and I have come to the conclusion that almost everything out there is not just wrong, but dead wrong. The essence of the problem is confusing the 8-bit binary range used in programs like Photoshop after RAW conversion into some color space with the dynamic range of the original RAW file. Authors on this topic seem to ignore the fact that the Zone System dealt with empirical measurements of how exposure impacted the negative and the print. That impact depended on the type of film, the processing, the range of light in the scene, among other factors. Most “digital Zone systems” are expressed in terms of a span of RGB values in, say, Photoshop; and are completely divorced from camera, sensor, ISO, RAW converter, and color space/tone curve. I first published the following blog post in December 2011; and I suppose most of what follows is about right. However, I’ve also gone ahead and published a set of pages here on the Zone System; and if this topic interests you, I heartily encourage you to read through that material.

While DSLRs have opened up a whole new range of photographic opportunities, there is much that has been lost from the days of the negative, the dark room and the print. Every piece of traditional film was a kind of HDR engine, with accelerated tone compression characteristics at the toe and shoulder of the density-exposure curve. Likewise, by pegging middle gray at an 18% reflectance point, the Zone system employed a feature of human vision as its reference value.

It seems instead that the digital art has been designed around the convenience or capabilities of manufacturers of monitors, sensors, and printers. The sRGB tone curve with its gamma of 2.2 has nothing to do with human vision and everything to do with manufacturing tolerances for not rejecting large percentages of computer screens. Its adoption in JPG has everything to do with a raft of uncalibrated monitors connected to the Internet for viewing untagged images. The common 12-13% gray reference point in most DSLR exposure systems has to do with the impact of overexposure in sensors.

Luckily, camera manufacturers have given us RAW files, which are reasonably accurate linear representations of the range of light in the scene from which they were captured. Properly treated, we can get 12 stops or more of dynamic range out of these either with pro-grade cameras or HDR methods or both.

If one of us moderns could take a D3x and stand beside Ansel Adams and take an image at more or less the same place and time that he did, we could walk away with a permanent digital record of the light that had exposed his film. Then, we could hop back in our time machine and come back to today. The question is, now that we had that digital record, what would we do with it in our digital dark rooms? Throw away 50% of the data in going to print? We do that all the time now and wonder why our B&W prints or digital images look gray and muddy.

Perhaps in several years the camera makers will create sensors that have the non-linear density-exposure curves of film. Perhaps they’ll give us a menu item to select the film emulation curve that we wish. Even a sensor with the Bayer filter thrown away and designed for B&W would be a step forward for some of us. For now, we have the choice of taking the linear RAW data or a JPG or TIFF with gamma 2.2. Naturally, I vote for RAW since it is a more or less accurate representation of the original light from the scene on a linear scale. I also vote for the more bits the better; 14-bit RAW over 12-bit RAW.

Now, there is a big debate about color spaces and their “volume”. Some say ProPhotoRGB is better than AdobeRGB or sRGB because ProPhotoRGB is bigger. In the case of B&W, this discussion is largely irrelevant because we’ll throw away the color data in our end-game. Of course, retaining color data for the application of filters (Wratten 8 yellow or 25 red or whatever), or for customizing a color response curve, is important in our middle-game. Having said that, the volume of the color space is much less important than the backbone of the color space; namely, the tone curve that the space collapses onto as color is desaturated to 0. Being a bit anal in this regard (Microsoft recognized this matter years ago and early versions of Word would spell-correct my surname, “Angus”, by removing the “g”), I have made my own RGB color spaces using a gamma of 2.47, as well as a gray gamma 2.47 space. I have written at length about gamma 2.47 on this blog already. No point beating that horse any more.

For a similar reason, I prefer to work on Macs since their internal ColorSync intermediate representation space, a form of CIELAB, uses a gamma of 2.47 for mid-tones. This gamma value is derived from scientific studies of human vision and is the only one for which the classic 18% reflectance card yields a true 50% middle gray value. Likewise, I dislike Lightroom and ACR, as they force the use of gamma values derived for the convenience of the computer h/w and s/w industry; viz., 1.8 for ProPhotoRGB and 2.2 for sRGB and AdobeRGB.

I have read a lot of crap about the source of the relationship between 18% reflectance and middle gray in the Zone System. The truth is that it comes from the mapping in the CIEXYZ color space (and hence LAB as a derivative), a model of human vision based on scientific data available by the 1920s and confirmed repeatedly since then. Adams, Kodak, and others adopted this reference on good grounds; and it remains a reasonable one today.

While the data to support my observation can be found through many sources independently, anyone with the ability to solve the equation can work out what x has to be for 0.5^x = 0.18. The answer is

 x = log(0.18)/log(0.5) = 2.47393118833

It might be thought that this applies only to a single reference point at middle gray, and that we could pick any other value that happened to be convenient for other reasons; but the same relationship governs the visual spacing of values away from middle gray as well. In the case of the Zone System, it defines not just what reflectance corresponds to middle gray (Zone V), but also what reflectance corresponds to every other Zone.
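
Just to make that concrete, here is a small Python sketch (my own illustration, assuming a pure power-law encode rather than any particular color space) that solves for that gamma and shows where each Zone's reflectance would land under it:

    import math

    # Gamma for which an 18% reflectance encodes to exactly 50%:
    gamma = math.log(0.18) / math.log(0.5)
    print(gamma)                                   # ~2.4739

    # Zones are one stop apart in reflectance; here is where each lands once encoded:
    for n in range(-4, 4):                         # Zone I through Zone VIII, relative to Zone V
        reflectance = 0.18 * 2 ** n
        print(f"Zone V{n:+d}: reflectance {reflectance:.3f}, "
              f"encoded {reflectance ** (1 / gamma):.3f}")
    # Note that Zone VIII (144%) encodes above 1.0 -- the problem taken up near the end of this post.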

An excellent portrayal of how the Zone System applied to real film and paper is given on p.40 of Film and Digital Techniques for Zone System Photography by Glenn Rand, available on Google books here.

Rand shows how the actual scene exposure maps first to the non-linear curve of the film, and then to the non-linear curve of the paper. Rand shows also that while Zone V is a common reference point for exposure, it is not the only useful one. But my main point is that the overall combination of negative and paper provide extra compression of the scene’s tonal range outside of Zones II – VII, as Rand notes.

This effect is not present in the typical digital workflow. We take a NEF file into Photoshop or Lightroom through ACR, mapping it in the process to sRGB or AdobeRGB with a gamma of 2.2 or to ProPhotoRGB with a gamma of 1.8. This would naturally put middle gray at about 45.9% or 38.6%, respectively. This is either almost at or in Zone IV. So, we’re already off to a bad start.
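
As a quick check (my own sketch, using pure power-law curves rather than the exact sRGB or ProPhoto formulas), here is where linear 18% lands as a fraction of full scale under each gamma:

    # Fraction of full scale at which linear 18% sits after a pure gamma encode:
    for g in (2.2, 1.8, 2.47):
        print(g, round(0.18 ** (1 / g), 3))    # ~0.459, ~0.386, ~0.500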

But the main thing is that our working space is purely linear (in a log-log sense) and that it will be inverted exactly in going to print. There are a few notes that are worth making about the page or two that I referenced in Rand, and that are also apparent from Adams’ The Negative and The Print. First, the curves are given in terms of relative density, D, for which the equation is D = log10(1/Reflectance), for prints, or D = log10(1/Transmittance), for negatives. So, for 18% reflectance from a print, D = 0.744. This pegs the reference density for middle gray in the final print because that’s what the viewer’s eye will see.

Another thing worth noting is that Rand sets a threshold for Dmin for prints at 0.10. This is the boundary between Zones IX and X at the bright end of the scale. A little arithmetic converts this into 80% reflectance. For a modern print paper, reflectance certainly can be as high as 92 to 98%. In fact, papers with OBAs can have reflectances above 100% (according to ISO standards) like Epson Exhibition Fiber for which the ISO brightness is 111%. (These papers take incident UV light and reradiate it as visible light. So, there’s more visible light out than there was in.) At the other end, Dmax is established at the shoulder of the print curve as the boundary between Zones I and 0; and for typical photographic print papers this might wind up at 1.8 to 2.0. Arithmetic gives us reflectance values of about 1%. Modern quality ink jet papers and inks will deliver a full Dmax of around 2.4 or so. Kodak Azo paper had a Dmin of 0.1 and a Dmax of around 1.9, flattening out completely at 2.0.

So, there are 9 zones (9 stops of light) mapped from D=0.1 to 1.9, a span of 1.8. To convert this density range to stops, we go from a log10 scale to a log2 scale by dividing by log10(2) ≈ 0.3; and we get about 6 stops. Fascinating tonal compression, eh? An original 9 stops in the range of light in the scene are mapped onto 6 stops of reflectance in the final print.
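
That arithmetic is easy to restate in code (my own helper functions, just repeating the formulas above):

    import math

    def reflectance(density):
        """Reflectance (or transmittance) from density: R = 10**(-D)."""
        return 10 ** (-density)

    def stops(density_span):
        """Convert a span of log10 density into stops (powers of 2)."""
        return density_span / math.log10(2)

    print(reflectance(0.10))     # ~0.79, the ~80% reflectance at Dmin quoted above
    print(reflectance(1.9))      # ~0.013, about 1% at Dmax
    print(stops(1.9 - 0.1))      # ~6 stops between Dmin = 0.1 and Dmax = 1.9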

Adams’ The Negative includes an Appendix 2 containing extensive film test data for various films with various processing methods; normal, N+x, N-x, two solution, selenium intensification, etc. Generally, the range of densities achieved in normal processing is very close to this same span of Dmin=0.1 and Dmax about 2, still corresponding to about 6 stops of dynamic range in the negative. A certain degree of care had to be taken to get the range of light in the negative to map properly to the range of light achievable in the print. Likewise, in scenes with a more limited range of light, N+ processing could be used to expand the range of densities in the negative; and so on. But let’s just consider the base case for now.

Think about it: Ansel Adams’ prints have a dynamic range of about 6 stops. Yet he described his Zone System with a range from Zone 0 to Zone X (at least), a span of 11 stops. I’m exaggerating a bit, because there are still Zones 0 and X that I’ve neglected; but these are even more compressed than the middle 9. Nothing in the range of light in a scene would compress the span of intensity of light in these “zones”. That is a purely intellectual artifice.

Let’s go back to why I was saying that most of what you read about digital processing for the Zone System is wrong. The usual modern explication of Adams’ Zone System for application in Photoshop breaks the range of 8-bit values from 0 to 255 into a set of ranges. See, for example, Koren on his approach. Koren’s table for sRGB is as follows (as near as I can measure it on my screen):
Zone 1 – 0
Zone 2 – 28
Zone 3 – 50
Zone 4 – 80
Zone 5 – 118
Zone 6 – 162
Zone 7 – 207
Zone 8 – 243
Zone 9 – 255

In his books/DVDs, Welcome to Oz and From Oz to Kansas, Vincent Versace divides up the 8-bit range in Photoshop slightly differently. He starts Zone II at 7 and ends Zone IX at 247. This gives him some more “wiggle room” for Zones 0, I, and X.

But all of this misses an incredibly fundamental point. If you’ve just imported a 14-bit NEF into a 16-bit Photoshop working space, you don’t have 8 or 9 zones (powers of 2); you have 16. Koren, Versace, and, it seems, everyone else are completely off-base about what the binary number system is. Just because the default histogram, curves, and info palettes in PS display 8-bit binary numbers by truncating the full 16-bit internal representation doesn’t mean that the full span isn’t really there. They’re just ignoring it.

More to the point, these authors are confusing the fact that the binary number system is organized around powers of 2 with the gamma of the color space or gray space that an image is represented in. Put another way, I could represent the RGB values of an image in a fixed-point binary number representation, a fixed-point decimal number representation, or a floating point decimal representation. Changing the number system would not change the image. So, if the image is in an AdobeRGB color space or a gray gamma 2.47 space, the tone curve is not altered just because we write the numbers down in some different number system.
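
A tiny illustration of the point (my own, assuming a pure gamma 2.2 encode): the same middle-gray pixel written three different ways decodes back to the same reflectance.

    # One gamma-2.2-encoded middle-gray pixel, in three number representations:
    v_float = 0.18 ** (1 / 2.2)            # ~0.4586 on a 0..1 scale
    v_8bit  = round(v_float * 255)         # 117
    v_16bit = round(v_float * 65535)       # 30058
    for value, full_scale in ((v_float, 1.0), (v_8bit, 255), (v_16bit, 65535)):
        print(round((value / full_scale) ** 2.2, 3))    # ~0.18 every time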

The same would be true if we were to digitize one of Ansel Adams’ negatives. If we digitize it into a fixed-point binary number system, we have not changed the characteristic curve of the original negative. Hence, just because our number system works in terms of powers of 2, Adams’ zones for the negative do not necessarily fall into powers of 2. In fact, this is extremely unlikely.

Just to be very clear about this, here is a graph of the density of Kodak Tri-X Professional film that I’ve taken from Ansel Adams’ The Negative, Appendix 2, p. 247.

Density of Kodak Tri-X Professional film, from Adams' The Negative

As stated above, density, D, is given by D = log10(1/Transmittance). If I translate this into a curve of negative transmittance on an 8-bit binary scale versus Zone number, this is the result.

Transmittance for Tri-X Pro on a binary scale versus Zone

Now, it becomes more apparent that the scale in use is highly compressed. We see that Adams envisaged fully 12 zones compressed across the given binary scale, not 8. Of course, there is considerable compression in the highlights, which is to say, the region of almost zero transmittance (since we are dealing with a negative). The fundamental take-away is that a Zone of light is not (I repeat, not) a power of 2 on this curve. Again, that is to confuse a property of the number system with the set of numbers.

Let’s consider now the same kind of curve for a typical paper. I’ve taken this one from Adams’ The Print, Chapter 6, Figure 6-9.

Characteristic curve for typical photographic paper

Note that the x-axis is given in terms of “relative log exposure”, not Zone explicitly. This is because there is considerable leeway for the print maker to choose which Zone to make his reference point and how to map the negative to the print. The exposure axis is relative in the sense that the printer is free to choose a reference exposure time for, say, black. If the choice of 1 second is made, then “0” corresponds to that reference. Each increase of 0.3 corresponds to a doubling of exposure intensity. Since the axis has 10 units of 0.3, this corresponds to ten “Zones”. However, it should be apparent that the linear part of the paper curve really only corresponds to a much narrower range; viz., from about 0.1 to 1.9, or a range of 1.8. In terms of stops, this is 1.8/0.3 = 6.

One choice for the printer would be to choose an exposure time and light intensity such that the Zone V region of the negative will print out at 18% reflectance on the print; that is, at a density value on this curve of D = log(1/0.18) = 0.744. This would be a very natural choice. For this paper, the relative log exposure for Zone V would then be about 1.13, which is a relative exposure of 13.5. If we had a light source of 75 units and we were using a Tri-X Pro negative, our Zone V exposure through an 18% transmittance would turn out to be 13.5, as necessary. If I take the transmittance values for the Tri-X negative and map them through to this paper, I get the following curve:
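
Spelling out that little bit of arithmetic (my numbers, following the text above):

    import math

    print(math.log10(1 / 0.18))    # ~0.745, the target print density for middle gray
    print(10 ** 1.13)              # ~13.5, the relative exposure that reaches it on this paper
    print(75 * 0.18)               # 13.5, a 75-unit source through an 18% transmittance negative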

Print from Tri-X negative with Zone V reference

Now, I can translate this curve into an 8-bit binary scale for reflectance values for the print.

Print brightness from Tri-X negative by Zone

Let’s imagine that we wanted to achieve a similar effect by printing from an AdobeRGB or sRGB or gray gamma 2.2 file. We’d need to associate an internal 8-bit value from, say, Photoshop, with each of these print values. By employing a gamma of 2.2, we get the following mapping:

Photoshop values versus Zone

This graph seems just a little odd, doesn’t it? The blackest blacks are saturating at around a value of just under 40 or so. Even if we consider the best possible case; viz., a density of 2.1 for this paper and a gamma of 1.8, the saturation level for blacks comes out to be around 17. Pushing the Dmax value to 2.4, typical of modern inkjet printers and paper, the saturation value with a gamma of 1.8 is still only around 12. With a gamma of 2.2, the Dmax value inside Photoshop is around 21.
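
Those black-point numbers come straight out of the density-to-value arithmetic. A minimal sketch, assuming a pure power-law tone curve:

    def paper_black_8bit(dmax, gamma):
        """8-bit value at which a paper's deepest black saturates, for a pure gamma encode."""
        reflectance = 10 ** (-dmax)
        return round(255 * reflectance ** (1 / gamma))

    print(paper_black_8bit(1.8, 2.2))    # ~39, the "just under 40" above (Dmax ~1.8)
    print(paper_black_8bit(2.1, 1.8))    # ~17
    print(paper_black_8bit(2.4, 1.8))    # ~12
    print(paper_black_8bit(2.4, 2.2))    # ~21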

At the other end of the scale, we’re seeing saturation in the highlights at a brightness of about 80%, or 229 on an 8-bit binary scale within Photoshop with gamma at 2.2. While this may have been a reasonable estimate for the shoulder on traditional photographic papers, modern inkjet printers can do much better. As we’ve seen for papers using OBAs, even 111% is possible. For a baryta paper, like say, Ilford’s Gold Fibre Silk, the ISO brightness is rated at 96%, or around 245 on an 8-bit binary scale.

Of course, what this curve purports to be is just plain bogus. No one prints a photographic image in this way, or at least, no one who doesn’t have a digitized version of a film negative. I’m not saying that that doesn’t happen, but even then the approach doesn’t pass through a stage like this. However, isn’t it instructive to look at what one would have done in Photoshop in order to achieve a print of the sort that came out of a traditional dark room?

Let’s imagine then that we have a modern inkjet paper with OBAs and a brightness of 111%, like Epson Exhibition Fiber, and its Dmax is 2.4. Or, let’s say we have a baryta paper like Ilford Gold Fibre Silk with a brightness of 96% or Dmin of 0.018 and a Dmax of 2.3 instead. Assume that the eye works the same for both, that is, true middle gray is an 18% reflectance. What should we do for Zones?

The current thinking is to “linearize”. That is, we map the full span in Photoshop (0 to 255 in 8-bits or 0 to 65,535 in 16-bits) to the range of the paper and ink from Dmax to Dmin in as linear a fashion as possible. This range of Dmin to Dmax for the Ilford paper is about 2.3 or a span of 2.3/0.3 = 7.7 stops. Now, understanding Epson Exhibition Fiber paper in the same way is a little more difficult because its Dmin depends on the nature of the lighting. It’s one thing in bright sunlight with a strong UV component and another thing in incandescent light with little UV. If we go with the brightness of 111% then this corresponds to a negative Dmin of -0.05. This gives us a Dmax to Dmin span of 2.45 or 2.45/0.3 = 8.2.
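
In code, the same bookkeeping (my own sketch, using the Dmin/Dmax figures assumed above and log10(2) ≈ 0.301 rather than the rounded 0.3, so the results come out a touch lower):

    import math

    def paper_stops(dmin, dmax):
        """Dynamic range of a paper, in stops, from its density span."""
        return (dmax - dmin) / math.log10(2)

    print(paper_stops(0.018, 2.3))    # ~7.6 stops for the baryta paper
    print(paper_stops(-0.05, 2.4))    # ~8.1 stops for the OBA paper (negative Dmin per the 111% brightness)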

Linearizing is what we do when we calibrate a printer. It is at the heart of Quad Tone RIP and Epson’s Advanced Black and White. Unfortunately, as Norman Koren says, it can create some sharp edges where we run out of linear range.

OK. So a modern inkjet paper in a professional quality printer with pigment inks will do two more stops than a traditional photo paper; eight stops instead of six. But what have we learned? Ansel Adams was printing negatives that had captured 12 stops of light on papers with 6 stops of range. Our pro-grade DSLRs can do 12 stops of light too, and our printing processes can do 8 stops. What is being missed?

I assert that we are mistaking the properties of the fixed-point binary number system used in programs like Photoshop for the specific properties of any given image capture. Suppose I have a 14-bit RAW file from a pro-grade camera. Let’s say this camera’s dynamic range is just over 12 stops, meaning that there is some random noise with a standard deviation of just over 1 bit in the data set. The maximum binary value is 2^{14} - 1 = 16383 and the minimum value is just 0. Let’s say that I (or the camera’s exposure metering system) do a very good job, and I capture an image that spans a range from 1 to 16,382. I’ve snagged 12 Zones of light in a 14-bit file. True middle gray (18% reflectance) is at 2949 on this scale of 0 to 16,383 in this hypothetical perfect exposure.

The simplest possible approach to mapping this 14-bit dataset into a 16-bit Photoshop file is just to multiply each pixel value by 4. The effect is precisely the same as dropping each 14-bit pixel value into the upper 14 bits of the available 16-bit record for the pixel in the Photoshop file. Now, I am completely ignoring algorithmic steps for demosaicing, noise reduction, sharpening, et cetera. For my purposes here, they are all a “don’t care.”

One thing that is important is the application of the tone curve for the choice of the color space. For a space using a gamma of 2.2, this would map our original 18% reflectance value of 2949 in a linear 14-bit scale to the value of 0.18^{1/2.2} * (2^{16} − 1) = 30058. If this value doesn’t seem familiar, it’s because we haven’t ignored its lower 8 bits. There are a couple of ways to do this, but a simple approach is to divide by 2^8 = 256, which yields 117.
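
Here is that chain of arithmetic in one place (a sketch of my own; as noted above, real RAW converters do a great deal more):

    RAW_MAX = 2 ** 14 - 1      # 16383, full scale of the 14-bit RAW file
    PS_MAX  = 2 ** 16 - 1      # 65535, full scale of Photoshop's 16-bit space
    GAMMA   = 2.2

    middle_gray_raw = round(0.18 * RAW_MAX)                                   # 2949, linear
    encoded_16 = round((middle_gray_raw / RAW_MAX) ** (1 / GAMMA) * PS_MAX)   # ~30058
    displayed_8 = encoded_16 // 256                                           # 117, the 8-bit readout
    print(middle_gray_raw, encoded_16, displayed_8)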

Notice what we had to do in order to adopt the usual view in Photoshop using 8-bit values. We threw away 8 bits of the data.

In fact, in a 16-bit file, there are 8 stops before you get to the value “1” as displayed in 8-bit form. So, what has PS done when you import your 14-bit NEF from your D3 into a 16-bit file? It applies a scale factor of 4, and then it applies the tone curve for the color space you’ve chosen. In this way, saturated pixels go to the maximum value and black pixels go to 0. To consider this range of binary values only in terms of the most significant 8 bits is to do damage to its actual span.

Let’s take a look at the distribution of Zones. Consider first the following graph showing our 14-bit RAW file in which we associate 12 Zones with the most significant 12 bits of the file. The y-axis is showing the 14-bit value, in decimal notation, of the uppermost point of a Zone.

Twelve Zones of dynamic range in a 14-bit RAW file

If I translate these Zone values into 16-bit numbers in a gamma 2.2 color space, I get the following graph of values versus Zone.

16-bit Values in a gamma 2.2 space versus Zone

Of course, this curve is not particularly useful in this format; although it does show the distribution of “zonal” or “tonal” values for our pro-grade 12.3-stop dynamic range DSLR once we get a RAW file into Photoshop in something like AdobeRGB. A somewhat more natural way to look at these 12 Zones is in the usual 8-bit scale. Let’s try that.

Zones versus 8-bit values in a gamma 2.2 space
Zone Number 8-bit center value
12 255
11 217
10 159
9 116
8 85
7 62
6 45
5 33
4 24
3 18
2 13
1 9
0 7

Note that this table has little if anything to do with powers of 2 in Photoshop’s 8-bit scale; rather, it is derived from powers of 2 in the original RAW file. In showing this table, I am not (I repeat, not) trying to say that this set of numbers is a generally correct set of values for Zones. It is, rather, a correct set of values for a properly exposed 14-bit RAW file from a camera with a better-than-12-stop dynamic range (e.g., a Nikon D700 or D3), once that file has been converted into a color space using gamma 2.2. Convert the image into, say, ProPhotoRGB with gamma 1.8, and the numbers change. In short, this is not about Photoshop; it is about a kind of image from a kind of camera, just like the original Zone System was about a kind of image on a kind of film.
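
For what it's worth, here is one reconstruction (mine, and only one of several schemes that would produce these numbers) of how such a table can be generated: pin Zone 12 at full scale, put the center of Zone 11 half a stop below that, drop a full stop per Zone from there, and encode with gamma 2.2. It reproduces the table above to within a count of rounding (Zone 11 comes out 218 rather than 217):

    GAMMA = 2.2
    for zone in range(12, -1, -1):
        # Relative linear level: full scale for Zone 12, then 0.5, 1.5, 2.5, ... stops down.
        rel_linear = 1.0 if zone == 12 else 2.0 ** (zone - 11.5)
        print(zone, round(255 * rel_linear ** (1 / GAMMA)))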

Another thing to consider is that the classic Zone V center-value of 50% gray falls in Zone 9 on this scale. I have purposely used an arbitrary numeric range so that my numbers are not confused with the Roman numerals used in the “real” Zone System. Here, in the assumed gamma 2.2 space, the center value is nominally 116. This is an issue for any model of the Zone System. If middle gray is an 18% reflectance, then one stop up (Zone VI) is 36%, two stops up (Zone VII) is 72%, and three stops up (Zone VIII) is 144%, for which we need a direct source of light or a specular reflection.

This is not as difficult as one might imagine. I live in Colorado and yesterday we had snow. Today is beautifully sunny. If I meter off an 18% gray card in direct sun, I get an exposure of 1/125s at f/13 and ISO 100. Off the side of our white house or the nearby snow, I get 1/1000s, 3 stops up. If I meter off direct specular reflections off some melted water, I get 1/2000s, 4 stops up from middle gray, or Zone IX. This is exactly consistent with Adams’ descriptions of his Zone System in The Negative, Table 2.

If I wanted to set the exposure on my camera to capture this entire range, I’d have to choose an exposure such that I would not blow out the highlights. If f/13 and 1/2000s puts these at middle gray, then I should go up by about 3.3 stops to 1/200s at f/13. At this exposure, the middle gray reference will be darker by 2/3 of a stop. This illustrates a point often made: DSLRs are less forgiving in the highlights than traditional film. By exposing for the highlights with a DSLR, middle gray will almost always come out at a lower exposure value than in the case of film. Conversely, exposing for middle gray will often yield blown highlights with a DSLR.
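
The exposure bookkeeping can be checked with a couple of lines (my own helper; a positive result means more light reaches the sensor):

    import math

    def stops_between(t1, f1, t2, f2):
        """Exposure difference, in stops, going from (shutter t1, f/f1) to (shutter t2, f/f2)."""
        return math.log2(t2 / t1) + 2 * math.log2(f1 / f2)

    print(stops_between(1/2000, 13, 1/200, 13))   # ~+3.3 stops, from the specular metering to the chosen exposure
    print(stops_between(1/125, 13, 1/200, 13))    # ~-0.7, middle gray lands about 2/3 stop darker than metered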

Those experienced in handling a DSLR will turn on the “blinkies” in their display and also check the histogram. If the metered exposure yields those pesky blinkies, then we dial in some exposure compensation or switch to manual in order to just knock them out of the picture. In doing this, we push middle gray down by the equivalent amount. Say, for the sake of argument, I captured this image that way. My Zone 12 has now been mapped to Zone IX of Adams’ schema. Four stops down, my Zone 8 now maps to Adams’ Zone V. Likewise, my Zone 4 maps to the classic Zone I. This particular model has pushed middle gray down by one stop.

Is there anything that we can do to remap this exposure so that middle gray turns out at the right place? In other words, can we translate this file structure to something closer to a traditional photographic print? The answer is ‘yes’ and the tool is a curves layer in Photoshop. The mapping can be understood from a couple of key points on the curve. First, we want to shift Zone IV up to Zone V. This is easiest to do in the LAB color space in Photoshop. Without going through all of the math, we are now using a gamma of 2.47; and the shift we want maps 37% luminosity to 50%. Next, we apply another curve with what’s called a “sigmoidal” shape. This adds mid-tone contrast and flattens out the range of highlights and shadows, as in traditional film.
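
The two numbers in that first curve point fall straight out of the gamma 2.47 encoding (a quick check, not a derivation):

    G = 2.4739
    print(round(0.09 ** (1 / G), 3))    # ~0.378: one stop below middle gray, the ~37% input point
    print(round(0.18 ** (1 / G), 3))    # ~0.500: middle gray itself, the 50% output point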

There is a kind of game that can be played in the digital dark room that is similar in principle to the compression and expansion of tonal range in the traditional one.

Start with an image I did a couple of years ago, a 3-shot HDR using my old D80 in the Colorado River canyon near Glenwood Springs:

Initial HDR image in ProPhotoRGB

Next, I apply a 1-stop upward curve to get the mid-tones in the right zone.

After shifting mid-tones up by one zone

Now, convert to B&W using the high contrast red filter preset in Photoshop.

After B and W conversion in Photoshop with red filter

Typical wishy-washy sort of image. At this point, it’s in a gray gamma 1.8 space.

Now here’s the trick: we assign it to a gray gamma 2.47 space. I have shown elsewhere in this blog how to create such a space.

After assigning to a 2.47 gamma space and conversion to sRGB

Now, adding a curves layer with a slight S-curve and pulling down the black level, we get an image with some serious punch, and yet there isn’t a blown highlight or a blocked shadow to be found.

After applying sigmoidal "S" curve layer

Now, let’s apply some Nik Tonal Contrast.

With Nik Tonal Contrast

Now, let me say that the small images here don’t really do justice to the original files. But you get the idea. The major step above was a trick that mimics the push processing of the traditional dark room. It can go the other way too, by converting to a higher gamma space and then assigning to a lower gamma space. One can also create a duplicate file, assign it to the alternative space, and then use Photoshop’s Apply Image function to achieve various blending effects.

I didn’t invent this trick. I adopted it from Dan Margulis in his videos on Kelby Training and elsewhere. I just had an epiphany one day as to what the trick might mean. In effect, it is equivalent to increasing contrast (assigning from lower to higher gamma space) or pulling (assigning from higher to lower gamma space).
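
Here is a sketch of what the assign (as opposed to convert) step does to the implied linear values; this is my own model of the arithmetic, ignoring everything Photoshop does beyond the pure gamma curves:

    # Assigning a new gray profile leaves the stored value v alone but reinterprets it:
    # the implied linear value goes from L = v**g_old to L' = v**g_new = L**(g_new/g_old).
    g_old, g_new = 1.8, 2.47
    for L in (0.01, 0.09, 0.18, 0.50, 0.90):
        v = L ** (1 / g_old)                # value as stored in the gray gamma 1.8 file
        print(L, round(v ** g_new, 3))      # how that same value reads after assigning gamma 2.47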

I’d also like to go back and mention Norman Koren’s work again, especially in case he ever reads this. I think Mr. Koren understands what I’ve posted here, as evidenced by this post of his. That article displays a deep understanding of the dynamic range of a camera in general, and of a specific digital image file. Yet Koren continues to associate the Zone System with the 8-bit numeric span in Photoshop, as here. While I suppose it could be argued that beginners require a simplified introduction to the notion of the Zone System, it should be understood that Zones are about exposure in the image and density in the print, and not about binary numbers in Photoshop. This would be very apparent if, for example, Photoshop used floating-point decimal numbers to represent pixels instead. If Photoshop had used decimal numbers to begin with, I think it would have been far less likely for folks to struggle like this.

I have also seen other people wrapped around the axle with the powers of 2 starting with middle gray (Zone V) at 18%. Of course, the sequence goes 18%, 36%, 72% and then the fatal 144% for Zone VIII. What do we make of this? Of course, for the physics of light in any given scene, this is not a problem. It is quite likely that any sunlit scene can and will have intense sources of light that exceed that coming off of a “normal” reflective surface. Shoot straight at the sun through some tree branches or capture a scene with snow and running water reflecting direct rays of the sun, and you’ll have such a scene. Or, in the studio, accidentally capture one of your strobes in the image, and you’ll also have this situation.

Of course, the thing about the fatal Zone VIII is that it is, well, a zone, a range. If the center of Zone VII is 72% and that of Zone VIII is 144%, then the harmonic mean for their boundary is at 96%. I offer the harmonic mean as the correct method of calculation since we are dealing with ratios of light. Of course, we can just barely represent Zone VIII on a modern inkjet paper if that paper has a brightness of 96% or better. That would be a pretty borderline issue.

But notice this: if you go back to the curves for photographic films and papers I’ve shown above, you’ll see that in the linear parts of the curves the distance between Zones is typically around 0.2 instead of 0.3. Now, 0.3 is the logarithm in base 10 of the number 2. If a Zone were really being represented as a power of 2, we should see 0.3 as the average step between Zones; instead we have this compressed value of 0.2. And the inverse logarithm of 0.2 is 1.58, not 2.

I offer that this is incredibly important. That’s how Adams and the rest of the film gang managed to get 10 or more Zones onto a negative and then a print; namely, they were compressing the tonal range of the original scene complete with its relative values in excess of 100% with respect to middle gray. Here’s another piece of relatively simple mathematics. If we take middle gray at 18% and go up by three logarithmic steps of 0.3, we get log(0.18) + 0.9 = 0.155. The inverse log of this is just 144%, as expected. However, if we go up by three logarithmic steps of 0.2 instead we get 72%.
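
Put in code (my own restatement of the arithmetic just above):

    # Walking up from 18% middle gray: a true one-stop (x2) step per Zone,
    # versus the ~x1.58 step implied by the ~0.2 density spacing on the film and paper curves:
    middle_gray = 0.18
    compressed_step = 10 ** 0.2       # ~1.585
    for n in range(4):                # Zone V up through Zone VIII
        print(n, round(middle_gray * 2 ** n, 3), round(middle_gray * compressed_step ** n, 3))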

Dig it. Adams’ Zones were not going up by powers of 2 in the negative or in the print. They were going up by powers of about 1.58. Well, that’s with normal processing of the negative. As shown in Appendix 2 of The Negative, with push processing (N+x) or pull processing (N-x), different slopes of the characteristic curve could be achieved in order to expand or contract that tonal range.

It is the achievement of a similar result with digital processing that we must strive for.
