Ink density, reflectance, and L*

In my last several posts, I’ve been using some mathematical relationships between ink density, reflectance, and L* that I should make explicit. I’ve included references along the way, but it is worth gathering these relationships together in one place.

The notion of density goes back a long way in photography and has been used to refer both to the transmittance of negatives and to the reflectance of prints. In those cases, certainly for B&W photography, the transmittance of a negative and the reflectance of a print had to do with the amount of developed silver remaining in the emulsion and its absorption of light. In the case of a negative, more silver meant less light transmitted through to the photographic paper in the creation of a print. In the case of the print, more silver meant less light being reflected back from the surface of the paper. For prints, we are considering only diffuse values, as opposed to any specular highlights from a gloss or luster surface.

In the modern world, density refers to the same notion of reflectance of light from a print, but the reduced reflectance is now assumed to be due to the presence of ink (either dye or pigment), usually laid down by an inkjet printer. Hence the term “ink density”. In fact, the physics of the situation is completely unchanged. It doesn’t matter whether the cause of reduced reflectance is ink, silver, or a coffee stain; something on the paper has reduced its reflectance.

For each kind of paper, the reflectance will be some maximum value before ink or coffee is applied. There are different techniques and standards for how this is measured, but a typical approach is to illuminate the paper with some combination of red, green, and blue light and to measure how much of each is returned from the surface. The response is weighted in a way representative of human vision, and a net result is reported. One could imagine other ways of proceeding, but each would run into the question of what counts as “white” incident light. By referring the whole problem to a standard model of human vision, the notion of what “white” is can be dodged.

The standard model of human vision is incorporated in references like CIEXYZ, CIERGB, and CIELAB, and is based on a tristimulus view of color perception. That is, human color perception is assumed to depend on a weighted mix of three spectral filters dominated by wavelengths centered on what are generally called “red”, “green”, and “blue” light. Of course, there’s nothing about any specific wavelength of light that makes it “red” or “green” or “blue”. In real human vision, any given photon might excite a cell in the retina associated with any of these three colors. It comes down to statistics: a photon at the long-wavelength end of the visible range is more likely to excite a nerve impulse that we associate with “red” than with “blue”, but that is a likelihood, not a certainty.

I use a Spyder3Print SR device for the measurements that I report here. I have checked that the equations I’m reporting are consistent with the density and L* values it reports. Since reflectance is an intermediate parameter between these two, I assume that I’ve got this right. The measurements turn out to be as consistent for inkjet prints I’ve made myself as for my X-Rite ColorChecker Passport or for books of B&W photographs. There are a variety of devices that can be used to measure density, L* values, and the like; the topic is called “colorimetry”, and the Spyder3Print SR is a spectrocolorimeter, meaning that it measures the spectrum and calculates tristimulus values from it. What the Spyder3Print SR reports are LAB values and density values.

By definition, these are diffuse values, and they can sometimes yield anomalous results. For example, the L* value for a glass cup may be around 1-2, which reads as essentially black. Of course, we don’t see glass cups as black, because they transmit light from behind them and have specular highlights; but as far as their diffuse reflectance is concerned, it’s almost nil. Similarly, a metallic surface like a tin can or a knife blade may have an L* value from 30 to 40 along with a color component that we never actually perceive, because what we see from a metallic surface is dominated by specular reflection rather than by its diffuse LAB values. My pocket knife’s blade comes out a pretty shade of fuchsia, really pretty in fact. I never see that; specularity wins over diffuse values in almost every case with metal surfaces. However, if I completely darken the room, get the blade in total shadow, and hold the Spyder3Print device just off the surface, I can just “see” the colors from the surface blending in that odd fuchsia way. Who knows? Maybe I’m confusing myself with an expectation based on the measurement.

Anyway, density as reported corresponds to a notion of reflectance of light that has been corrected for color; and in this way, it is consistent with L* in the CIELAB model. Formally, density is just a log10 representation of reflectance, so
 D = -\log_{10}(R)
where D is density and R is reflectance. By itself, “reflectance” doesn’t mean much until we consider what is being reflected. In the context of photographic prints, reflectance is referred to the same battery of concepts of human vision as occur in colorimetry or sensitometry. That is, the light being reflected is referred to a tristimulus model of color as defined in the relevant CIE standards. For conversion between D and L*, the intermediate color space is CIEXYZ.
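
As a minimal sketch of that relationship (Python here; the function names are my own, not from any particular library), density and reflectance convert back and forth with a single base-10 log:

import math

def density_from_reflectance(r):
    """Density as the negative base-10 log of reflectance (0 < r <= 1)."""
    return -math.log10(r)

def reflectance_from_density(d):
    """Reflectance recovered from a density value."""
    return 10.0 ** (-d)

print(density_from_reflectance(0.90))   # ~0.046, a typical paper-white Dmin
print(reflectance_from_density(2.4))    # ~0.00398, a deep Dmax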

The equations for conversion from CIEXYZ to CIELAB and back are as follows:
 L^*=116f\left(\frac{Y}{Y_n}\right)-16
 a^*=500\left\{f\left(\frac{X}{X_n}\right)-f\left(\frac{Y}{Y_n}\right)\right\}
 b^*=200\left\{f\left(\frac{Y}{Y_n}\right)-f\left(\frac{Z}{Z_n}\right)\right\}
where
 f(t) = \begin{cases} t^{\frac{1}{3}}, & \text{if } t > \left(\frac{6}{29}\right)^3 \\ \frac{1}{3}\left(\frac{29}{6}\right)^2 t + \frac{4}{29}, & \text{otherwise} \end{cases}
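
As a sketch, the forward conversion transcribes directly into code; the white point is passed in explicitly, and the names below are my own rather than from any standard library:

def f(t):
    """Piecewise function used in the CIEXYZ -> CIELAB conversion."""
    delta = 6.0 / 29.0
    if t > delta ** 3:
        return t ** (1.0 / 3.0)
    return t / (3.0 * delta ** 2) + 4.0 / 29.0   # equals (1/3)(29/6)^2 * t + 4/29

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """CIEXYZ to CIELAB, given the white point (Xn, Yn, Zn)."""
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)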

Going the other way, we have
 Y = Y_nf^{-1}\left(\frac{1}{116}\left(L^*+16\right)\right)
 X = X_nf^{-1}\left(\frac{1}{116}\left(L^*+16\right)+\frac{a^*}{500}\right)
 Z = Z_nf^{-1}\left(\frac{1}{116}\left(L^*+16\right)-\frac{b^*}{200}\right)

where
 f^{-1}(t) = \begin{cases} t^3, & \text{if } t > \frac{6}{29} \\ 3\left(\frac{6}{29}\right)^2\left(t - \frac{4}{29}\right), & \text{otherwise} \end{cases}
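
And the reverse direction, again as a sketch with my own naming:

def f_inv(t):
    """Inverse of the piecewise function f above."""
    delta = 6.0 / 29.0
    if t > delta:
        return t ** 3
    return 3.0 * delta ** 2 * (t - 4.0 / 29.0)

def lab_to_xyz(L, a, b, Xn, Yn, Zn):
    """CIELAB back to CIEXYZ, given the white point (Xn, Yn, Zn)."""
    fy = (L + 16.0) / 116.0
    return Xn * f_inv(fy + a / 500.0), Yn * f_inv(fy), Zn * f_inv(fy - b / 200.0)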

In this model, the values for X_n, Y_n, and Z_n are the CIEXYZ values of the white point. The clue for how to get density into this set of equations comes from the relationship between L^* and Y; that is, L^* depends only on the ratio of Y to Y_n. Even though the Y value is often associated with the “green” part of the illumination, the spectral weighting that underlies the Y value covers the entire visible spectrum. Hence, we simply associate “reflectance” with this ratio:
 R = \frac{Y}{Y_n}

With that substitution, it is easy to compute an L* value for any given D value. By using the ratio given above, we have automatically compensated for the white point of the illumination as well as its intensity. For example, say we hear that someone has quoted a Dmax of 2.4 for some combination of inks and paper. This works out to an R value of 10^{-2.4} = 0.00398. This is less than \left(\frac{6}{29}\right)^3, so we use the second branch of f above and get about 3.6 for L*. On the other hand, if Dmax were 2.0, L* works out to be around 9. And so on…
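
Here is a small self-contained sketch that reproduces those numbers; the function name is mine, and all it does is treat 10^(-D) as the Y/Yn ratio:

def lstar_from_density(d):
    """L* from density, using R = 10**(-d) as the Y/Yn ratio."""
    r = 10.0 ** (-d)
    delta = 6.0 / 29.0
    if r > delta ** 3:
        fr = r ** (1.0 / 3.0)
    else:
        fr = r / (3.0 * delta ** 2) + 4.0 / 29.0
    return 116.0 * fr - 16.0

print(round(lstar_from_density(2.4), 1))   # ~3.6
print(round(lstar_from_density(2.0), 1))   # ~9.0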

For photography’s standard middle gray, a diffuse reflectance of 18%, we get a D of about 0.74 and an L* of 50.

At the other end of the scale, let’s say someone has quoted a paper with a reflectance of 90%. We expect an L* of 96 and a Dmin of 0.046. Take the reflectance up to 96% and L* becomes 98. And so on…
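
The same sort of check can be run starting from reflectance instead of density (again a sketch with my own naming):

import math

def lstar_from_reflectance(r):
    """L* from the reflectance ratio r = Y / Yn."""
    delta = 6.0 / 29.0
    fr = r ** (1.0 / 3.0) if r > delta ** 3 else r / (3.0 * delta ** 2) + 4.0 / 29.0
    return 116.0 * fr - 16.0

for r in (0.90, 0.96):
    print(round(lstar_from_reflectance(r)), round(-math.log10(r), 3))
# prints roughly: 96 0.046, then 98 0.018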

Finally, here is a shot of a Lastolite Ezybalance 18% gray target properly exposed at middle gray. Just for kicks.

Lastolite Ezybalance 18% gray target properly exposed

Since this shot has been tagged in an sRGB color space, the target should show up at around RGB values of 120 or so, consistent with sRGB’s gamma of around 2.2.
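
As a quick check of that claim (a sketch using the piecewise sRGB transfer curve, which is close to but not exactly a pure 2.2 gamma), an 18% linear reflectance encodes to roughly 118 on a 0-255 scale:

def srgb_encode(linear):
    """sRGB transfer function (piecewise; roughly gamma 2.2 overall)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

print(round(255 * srgb_encode(0.18)))   # ~118, i.e. "around 120" as quoted above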

6 Comments

  1. Peter Miles

    Thanks Allen
    I found this a very useful post for what I am wanting to do.

    However, after attempting to implement it in an Excel spreadsheet, the equation you provide above did not give the answers for the example densities in your text.
    After further checking, I found that one of the constants in the equation you provide has the denominator and numerator transposed.

    in the equations for conversion from CIEXYZ to CIELAB.
    you have..
    f(t)={1/3 x (6/29)^2 x t + (4/29)}
    it should be
    f(t)={1/3 x (29/6)^2 x t + (4/29)} (wikipedia)

    When I transpose them, my Excel spreadsheet gives the same results for L* as the specific examples you give in your text.

    Regards
    Peter

  2. Peter,

    You were absolutely correct, as should have been obvious to me when f(t) and its inverse had the same fraction. I’ve got that corrected now. My bad.

  3. Dear Master

    I only have a CGATS file that includes CMYK, Lab and spectra values.

    Q1)Can I use these values to calculate Status-A, E, M, T, type1, type2 and Visual values?

    Q2)Can I use these values to calculate dot gain value?

    Q3)where can I find these formulas for Status-A, E, M, T, type1, type2 and Visual?

    Thanks for helping

  4. Bruce Lindbloom has made available downloadable spreadsheets that perform these calculations. You can find them here. You will note that there are many dependencies in the calculations, including the nature of the illuminant, whether you’re dealing with an emissive (screen) or reflective (print) surface, your working color space, and so on. Lindbloom’s web site also includes the background math for these values. As to calculating dot gain, that tends to be an attribute of a given printer, ink, and paper combination; 20% is a broadly used rule of thumb. For any given printer, ink, and paper combination, a way through the prediction of density is simply to calibrate; that is, to measure the actual results and compensate for them.

    Back to your CGATS file: there are different possible formats, ASCII keyword-value pairs or XML, for example. Either version is readable by humans as well as computers. To employ Lindbloom’s spreadsheets, though, you will have to retrieve the data from your CGATS file, either manually or automagically, and paste it into the appropriate spreadsheet locations. Another, obviously more complex, approach would be to write an application that reads your data and applies the calculations as embodied in Lindbloom’s work.

  5. Fiachra

    Hi Alan, just found your site. I wouldn’t normally leave a negative comment, but in this case I feel it’s important. The paragraph “Now we’re getting somewhere” under “How light falls off” is flawed. The reason you gave for not having to modify camera settings when moving further from a metered subject is incorrect. The subject being photographed becomes the light source and as such will obey the inverse square law (approximately, as it’s not exactly a point light source). It’s true that the further the camera gets from the subject, the less of its light will hit the sensor. What’s also true is that the subject’s relative size also changes with distance, and does so at the same rate as the light fall-off. It’s for this reason that the camera settings remain unchanged. It’s only the metered subject that’s guaranteed to be properly exposed in the resulting photo, irrespective of the camera-to-subject distance. Any additional objects entering the frame as you move back have not been specifically metered for, and as such there’s no guarantee that these will be correctly exposed.

  6. I agree with your comment, by and large. As you say, any subject of finite size will eventually approximate a point source as you move sufficiently far away. For the sake of discussion, and without losing any generality (I think), imagine that the subject has been metered as a 50% gray. Here is the basic question: if we pull the camera back, do we or do we not have to readjust exposure? More specifically, do we have to alter camera exposure to account for the inverse square law of light? The short answer is that in almost all practical cases, no. Obviously, as soon as I say that, one could introduce counter-examples; but I believe that those counter-examples prove the point.

    For example, in stepping back from the subject, one could introduce a mirror that was directly reflecting sunlight into the camera lens. Or perhaps one could introduce a light bulb or other direct source of illumination. It would be natural to expect that, if an image was captured under the original exposure settings then those specular or direct lighting sources would be overexposed in the resulting image. Still, the subject would be properly exposed. If we did alter the exposure settings of the camera in order to account for some anomaly (like a naked light bulb that came into view) we would then be underexposing the original subject.

    Why does this work this way? What I claimed is true under the assumption that all of the objects that come into view are scattering surfaces with a common source of illumination. This assumption is true under a wide variety of photographic situations; but certainly not all of them. If the assumption is true, then the situation is much like the case of an infinite plane scattering surface illuminated from an infinitely distant point source of light. As the camera is drawn away from such a plane surface, the reduction in light on the camera sensor from the originally visible part of the plane is exactly compensated for by the newly visible parts of the plane that have come into the field of view.

    That would be exactly true if the scattering surface of the plane were completely uniform; but even if there are variations in the amount of light reflected by the infinite plane, the same result holds (a small numeric sketch of this cancellation appears at the end of this reply). Assume that the surface is covered with a number of squares of 0%, 50%, and 100% reflectivity. Set the camera exposure for one of the 50% squares and then pull back from the surface. The exposure settings for all three kinds of squares will remain correct no matter the distance from the surface. Again, there are boundary limits to what I’m saying. If you meter the 100% reflective squares as a 50% gray, then the other two sets of squares will be underexposed. Also, if you pull back far enough that the camera’s sensor can no longer distinctly resolve the individual squares, the whole surface will look like a 50% gray, assuming that the relative numbers of each kind of square are the same.

    Frankly, the physics of the situation would hold even if we were talking about a finite plane surface out in deep space. Until the camera is withdrawn far enough away that the image of the plane focused on the sensor falls on only one pixel, the inverse square law does not come into play. (I admit I’m playing a little fast and loose here with what is basically the 2-dimensional impulse response of the entire imaging system, including lenses and anti-aliasing filters, but work with me for simplicity’s sake.)

    Even the finite plane surface can be visualized as a 2-dimensional array of point sources by Huygens’ Principle. If you look at the second image on that Wikipedia page, you’ll perhaps be able to see what I’m talking about here. The setup is a little different from what I’ve described, in the sense that it shows the transmission of plane waves of light through an aperture while I’m talking about the reflection of plane waves of light from a plane; but the two cases are exact physical duals of one another. Imagine a light detector that is very small with respect to the size of the aperture, placed right in the middle of it. Depending on the intensity of the incident plane waves, it will provide some reading, say E0. That intensity reading will remain the same as the light detector is pulled back away from the aperture, until the point that its own aperture size becomes large relative to the edge diffraction effects of the aperture. After that, the light intensity reading will fall off according to the inverse square law.

    Assume for a moment that the subject is nothing more unusual than someone’s face. Naturally, that is not a plane surface; there are 3-dimensional features and the reflectance is not uniform. Even acknowledging those details, if the camera is placed well enough back from the subject, the distance variations of, say the nose versus the eyes, become second order effects. Imagine the subject’s face is, instead, replaced by a back-lit transparency of the subject’s face. We now have something that is almost exactly a transmissive aperture but with the qualification that each point on the transparency has a transmittance that is inverse to the density of the coating of the transparency. If we get the intensity of the back-light correct, the light coming through this artificial face would be indistinguishable from the light reflected from the real face; and we could easily trick the camera in this way.

    If you think back to that image in the Wikipedia article that I referenced, all that the film does at the aperture is to modify how much light is being transmitted through each point of the aperture. It does not alter the basic Huygens’ model of wave propagation. If you imagine a camera’s objective lens fairly close to the aperture, and the image of the transparency focused on the camera’s sensor, you can perhaps visualize how the intensity of light falling on any given pixel on the sensor that corresponds to any given feature of the transparency (or the real face) does not change with the distance between the lens and the aperture until the camera has been removed so far from the aperture that only a single pixel is illuminated by it.

    That limit is roughly the situation with the light captured from distant stars. Even though they may be objects larger than our sun, they are so far away that they are effectively point sources. The only reason that the light of a star would ever spread across more than one pixel in a camera sensor is because of limitations in the optics of the telescope.

    I know that I am prattling on about this topic, but I think that it is very important to grasp how a finite array of point sources in a plane is a reasonable approximation to most photographic situations and, hence, why the concept of the inverse square law of light from a point source in a vacuum rarely applies to exposure settings in most real world photography.

    Back to what you mentioned about how new objects coming into the field of view of the camera may not be properly exposed. Well, let’s think about that a little. Imagine that I set my exposure against a 50% gray card that was placed at or near my original subject, and I’ve entered those settings into my camera. Let’s agree that if I capture an image at some reasonable distance from my subject, and if all of the other elements in the camera’s field of view are simple diffuse reflectors, and if there is essentially one source of illumination (or at least a co-ordinated set of lighting elements that have some intentional balance), then all would be well with this original exposure. Now I start backing up and other elements come into the camera’s field of view. Well, what happens next depends a great deal on the situation. If it’s an outdoor scene with natural sunlight, and with no specular highlights, then I have a great deal of leeway to move backwards while never touching the exposure of my camera. OTOH, if changing my position starts to bring in banks of cloud that are reflecting the sun’s rays, well, there’s a good chance that those new highlights will be blown out. That doesn’t mean that the original subject won’t be properly exposed in an image captured with those clouds, but the clouds could be blown out. So I think it depends on what we mean by whether or not the new elements in the camera’s field of view are “correctly exposed”. I could say that the limited dynamic range of the camera is responsible for the clouds being blown out when I properly expose for the original subject, but I’ll admit that the captured image would be less than ideal.

    But then I’d argue that we’ve stopped talking about proper exposure of the original subject, as such, and have started to discuss how the range of light in any given scene can exceed the dynamic range of a camera’s sensor; and naturally, what we can do in order to capture beautiful images in spite of that limitation. Without being exhaustive, we could switch to an HDR method, we could use flash to illuminate the subject, we could expose for the highlights and recover the subject with a curves adjustment in post processing, and so on. Still, none of those corrective approaches to balancing out specular highlights or direct illumination is about an inverse square law of light fall-off from our subject. It’s just that we can find situations where the range of light in a scene is far greater or far lower than what our cameras (and ultimately our prints) can handle. That doesn’t change how we get exposure right for our intended subject within the scene.

    I think that you and I are not too far off in our appreciation for what can happen in practical photographic situations. However, I will continue to claim that the Huygens’ principle, as applied to any practical subject with finite dimensions relative to our camera’s objective lens, implies that it is effectively a plane aperture (as shown in that Wikipedia article); and therefore, there is no inverse square law fall off between the subject and the camera’s sensor, until the camera is so far away that the focused subject subtends only one pixel on the sensor (or to be more accurate, is effectively a 2-D impulse to the camera’s optics).

    I hope that I am making myself clear here. If not, I can try to diagram the situation and demonstrate the physics.
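
    To make the cancellation argument above concrete, here is a toy numeric sketch in Python (my own construction, using small-angle, on-axis geometry and made-up numbers): the patch of a uniform diffuse plane imaged onto one pixel grows as the square of the distance, the light collected from any given patch falls off as the inverse square of the distance, and the product stays constant.

    radiance = 100.0        # arbitrary units; uniform Lambertian plane
    focal_length = 0.05     # 50 mm lens, in meters
    aperture_area = 3.1e-4  # roughly a 20 mm diameter aperture, in m^2
    pixel_area = 2.5e-11    # roughly a 5 micron pixel, in m^2

    for d in (1.0, 2.0, 5.0, 10.0):
        scene_patch = pixel_area * (d / focal_length) ** 2  # plane area imaged onto one pixel
        solid_angle = aperture_area / d ** 2                # lens solid angle seen from that patch
        flux_per_pixel = radiance * scene_patch * solid_angle
        print(d, flux_per_pixel)                            # the same value at every distance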
