How light falls off

In everything I’ve read about photography, one is instructed in how light falls off. This is important to know, of course. Unfortunately, I also have degrees in physics and electrical engineering, and I already know about light. What photographers are usually taught is insufficient at best, and dead wrong at worst.

The inverse square law

The truth that photographers are taught is the inverse square law. I am not trying to say that this is incorrect; it’s just not the whole truth. My guess is that any photographer who ever thought about how they meter scenes would have begun to wonder about its validity as well.

The inverse square law applies to a very specific model. That model involves a point source of light in empty space. To every photographer reading this, I ask you, how many times have you shot this subject?

The essence of the idea comes down to how area works. This will be important in what follows, so it makes some sense to think about this clearly. So, I have a point source of light, and it’s floating in empty space. Say that I am a distance D away from it. I guess I’m floating in empty space too; neat gig for photography, yes? I set up my camera and meter the light from the point. I get something. Just to be specific, say I get 1/1000s f/8 at ISO 100. Now I float back to twice as far away: 2D. What do I meter now? The answer is, just like in the photography manuals, one quarter the light, or 2 stops less; to get the same exposure, I’d now need 1/1000s f/8 at ISO 400.

Here is why. The light from the point source is going out in all directions uniformly. When I am placed at a distance D away from the source, only a small fraction of the total light reaches the area of my lens and falls on my camera’s sensor. Say I want to work out that fraction. I could work out the area of my camera lens. I’d measure it with a ruler, get the radius r, and then figure out \pi r^2. The question is, what do I take this area to be a fraction of? The answer is, a sphere of radius D.

The speed of light

Why is that? Well, the answer is thanks to Albert Einstein, if you must ask. He showed us all how the speed of light is a constant, c. Whatever light can reach my camera lens was actually sent from that source some time ago: D/c. The other light from the source that was sent out at the same time as what I’m capturing with my camera must, by all the laws of physics, be the same distance away from the source as where I am; that is, D. If I go ahead and take a cord of length D, hook one end to the light source, and go floating around in my space suit, I’m going to be on some imaginary sphere out in space. Why? Because that’s how you make a sphere: you pick a point and get all the other points a fixed distance away from it in three dimensions.

A point source of light

If you know the formula for the surface area of a sphere (it’s 4 \pi D^2), that’s cool. If you don’t, all you really have to remember is that the area of this sphere will depend on the square of its radius. So, the ratio of the light I can get into my lens to the total light from the point is (\pi r^2)/(4 \pi D^2) = r^2/(4 D^2), and the light captured works out to \text{(Intensity)} r^2/(4 D^2), for what that’s worth. If you don’t like maths, don’t worry. Just notice that the fraction depends on the square of the distance that I’m from the point. The \text{Intensity} is just some number expressing how intense the light is. It doesn’t matter what this number is; it can be more or less and what I’m working out here is just the same.
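
If you want to see how small this fraction is, here is a quick sketch in Python. The lens radius is an arbitrary number of my own choosing; only the scaling matters:

    import math

    def captured_fraction(lens_radius_m, distance_m):
        """Fraction of a point source's light that lands on a lens:
        lens area divided by the surface area of a sphere of radius D."""
        lens_area = math.pi * lens_radius_m**2
        sphere_area = 4 * math.pi * distance_m**2
        return lens_area / sphere_area

    r = 0.036  # an illustrative lens radius in meters
    for D in (1.0, 2.0, 4.0):
        print(D, captured_fraction(r, D))
    # Each doubling of D cuts the fraction by a factor of 4, i.e. 2 stops.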

If my job as a photographer was capturing images of points of light floating in space, this would be a useful formula to know. You betcha! I make fun, but there are some astronomers who work out this kind of thing; for most of us, though, this is a rare case.

Anyway, back to the theme… points of light. Now I float back to twice as far, remember? Now the light that hits my lens was actually sent out a time 2D/c before I hit the shutter button. All of the brother & sister photons that get caught by my camera were sent out at the same time; and they are now on a sphere of radius 2D. The size of my lens hasn’t changed; I didn’t swap it out. What I got before was

 \text{Light captured at D} = \text{(Intensity)} r^2/(4 D^2)

And now I have

 \text{Light captured at 2D} = \text{(Intensity)} r^2/(4 (2D)^2)

so the ratio is

 \frac{\text{Light captured at D}}{\text{Light captured at 2D}} = \frac{(2D)^2}{D^2} = 4

Lots of mental anguish for one little number, right? Everything except the square of the ratio of the distances cancels out and, in this case, we get 4. If I’d put some other number in instead of 2, say, x, the answer would have been x^2.
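
In photographic terms, moving x times farther from a point source costs 2 \log_2(x) stops. A one-function sketch of that conversion:

    import math

    def stops_lost(x):
        """Stops of light lost moving x times farther from a point source."""
        return math.log2(x**2)  # equivalently 2 * math.log2(x)

    for x in (2, 3, 10):
        print(x, stops_lost(x))
    # x = 2 gives 2.0 stops, the factor-of-4 result above.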

So this is the inverse square law. If we have a case where the source of light can be modeled as a point, this model makes sense. To be honest, it does make some sense in the context of how light falls off from a source like an unmodified light bulb or flash bulb. So long as we are far enough away from the light source that its physical size is much smaller than the distance, then this formula works.

A cylinder in space

Now, we move on to another hypothetical situation. Forgive me if this seems artificial. I am heading toward a more practical and universal case, but these stops along the way are important. We have just considered a point source in empty space. Now we consider a long line source. Again, I’m a photographer floating in space some distance D away from a line of light. I’m going to use the same idea as I did with the point source. My camera lens has a certain area. The light from the line source takes a certain time to reach my lens, D/c. All the brothers and sisters of the photons that my camera captures left the line at the same time, D/c before. Now, these photons are all on a cylinder of radius D. The area of the cylinder is 2 \pi D L, where L is the length of the line.

Line of light

We can work out the ratio of the area of my camera’s lens to the area of the cylinder; but the problem is really a lot simpler. All we really need to think about is how the area of the cylinder changes when I move farther away or closer. That area depends not on the square of the distance, as in the case of the point source, but just on the distance itself. If I move twice as far away, to 2D, the area of the cylinder doubles. This is because the area of a cylinder depends directly on the distance, D. So the light falls off by one stop each time I double my distance.

If you can visualize this, think of the line as being composed of many adjacent points. Each point is sending out its sphere of light, just like in the previous case. Imagine when all of those spheres get to a radius D. What shape do they all make when taken together? The answer is a cylinder.
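
As a numerical check on the one-stop-per-doubling claim, here is a sketch using the cylinder area; the lens area and tube length are illustrative values I made up:

    import math

    def line_fraction(lens_area_m2, distance_m, line_length_m):
        """Fraction of a line source's light landing on a lens:
        lens area over the area of a cylinder of radius D and length L."""
        cylinder_area = 2 * math.pi * distance_m * line_length_m
        return lens_area_m2 / cylinder_area

    A, L = 0.004, 1.2  # illustrative lens area (m^2) and tube length (m)
    f1, f2 = line_fraction(A, 1.0, L), line_fraction(A, 2.0, L)
    print(math.log2(f1 / f2))  # 1.0: one stop lost per doubling of distance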

If you want a practical example, consider the light from a fluorescent tube or one of those linear soft boxes used in studio settings. Photographers sometimes talk about the “quality of light” from different sources. In objective terms, they are talking about how light falls off from the light source, how it shapes itself to the subject. We will see how this works more deeply in the next case we consider.

A plane of light

Imagine now a plane of light. It stretches out indefinitely in two directions, up/down & left/right, if you will. I am now a photographer floating in space a distance D away from this plane. My lens has its same old area. I am again capturing light from the plane in some ratio that depends on the area occupied by the light that left the plane a time D/c before I hit the shutter button. What is that area? In this case, it is just the area of the plane. What happens if I float back to twice as far, 2D? Nothing. The area occupied by the light from the plane is just the same. The amount of light captured by my camera doesn’t change at all. Why is this? Because the area of the plane of light at my camera is the same everywhere. It doesn’t drop off at all as I move away from the surface of light.

Think about the plane being composed of many, many points of light. Each one sends out its sphere of light. Imagine this plane coated with these little bubble-like hemispheres of expanding light. Let the spheres overlap each other as densely as the points of the plane. What kind of surface do you get? A plane. Does its area expand as you move away? No. Well, maybe it does at the edges; but in this model, we are floating near the middle of the plane out in space and we can’t even see the edges of it. As far as we are concerned, floating there in space, there is no fall-off of light at all. If we get the right exposure for the plane, we can float way back or right up; we still have the right exposure.

So while it’s true that the light from any point on the plane falls off as described by the inverse square law, the light from all of the points taken together is a constant. Imagine this light falling on your retina or your camera’s sensor. As you move away, any given patch of light becomes smaller, but light from other patches fills in to compensate exactly. The balance is precise. The intensity of light on your retina or your camera’s sensor remains constant.
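
Putting the three cases side by side: the wavefront area grows like D^2 for a point source, like D for a line, and not at all for a plane, so each doubling of distance costs 2, 1, and 0 stops respectively. A toy sketch with those exponents hard-coded from the geometry above:

    import math

    # Exponent n in "wavefront area grows like D**n" for each geometry.
    geometries = {"point": 2, "line": 1, "plane": 0}

    for name, n in geometries.items():
        # Light reaching a fixed lens scales like 1 / D**n, so each
        # doubling of D costs log2(2**n) = n stops.
        print(name, math.log2(2**n), "stops per doubling")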

Now we’re getting somewhere

Perhaps you’ve done this yourself. Perhaps you’ve seen it done. A photographer or lighting technician stands in front of a subject and takes a reading with an incident light meter. Perhaps he or she holds the meter right up to the subject’s face. Perhaps he or she stands a bit in front of the subject. The meter measures the total incident light, multiplies by 18%, works out an exposure given some constraints like ISO and aperture, and that’s what the photographer goes with.

That’s what the photographer goes with even if he or she is standing 10 feet or 20 feet back from the subject. Why does this work? If light fell off like the inverse square law, the photographer would have to be adjusting exposures higher by two stops every time the camera’s distance doubled from where the light meter had been placed. But no one has to do that.

It’s the same idea if one puts an actual 18% gray card in the scene and meters off that. The camera can be moved all over relative to where the card had been placed without any adjustments for proper exposure.

Why is this? Because the scene is an assortment of surfaces. Sure, they have different colors and reflectances and sizes and shapes. However, they behave like that imaginary plane in space. Now, it’s just that the plane has some detail on it. So long as the average is right, the exposure is right.

Of course, there are limits to this idea. If I have some studio lights set up for an outdoor photo shoot at night and I get so far back that my subjects look like a campfire in the distance when I sight through my lens, I have exceeded the limits of this model of a plane of light. But that’s so obvious it’s hardly worth mentioning.

On the other hand, if I’m doing landscape photography out in the open where the sun is the source of light, I can move around a scene a lot and not have to readjust exposure. That is, so long as my movements aren’t bringing in some new element with very different lighting characteristics, I can move around a lot. If the average remains right, the exposure remains right.

An intellectual game

So, suppose I’m out shooting landscapes. I have set an 18% gray card by some beautiful flower. On the other side, I have set a mirror. The sun is behind my back. I have set the mirror up so that it is reflecting the sun directly at my camera. I set my exposure based on the light I’m getting from the gray card. Based on all that I’ve said so far, I should get a beautiful, if surreal, shot, right?

If you agreed with the last statement, we need to do some more work. Let’s think very clearly about this. The sun is a distant source, which means that we have to move quite a distance before its position in the sky changes. That means moving an appreciable part of a time zone, going due east or west. A photographic image that could capture so large a scene would have to be taken from some very high altitude, and that certainly does not apply to my little imaginary scene. So we can assume that the sun is a distant source and all of its light is coming toward the scene at a fixed angle.

Sunlight

A little research about sunlight tells us that there are roughly 445 Watts per square meter of visible light falling on the surface of the Earth at sea level when the sun is directly overhead. In my intellectual game, I want the sun behind me in the mid-afternoon, call it 3pm. For the sake of argument, imagine that I’ve got my gray card set up so that it is facing the sun at just the right angle. The mirror will be set up at a slightly different angle so that I’m getting the disk of the sun reflected right at me.

Of course, the intensity of the light from the reflection of the sun in the mirror is going to be much, much brighter than that from the gray card. Whatever that 18% of maximum from the gray card means, we see right away that the direct light of the sun is more than the 100% diffuse reference; that is, it’s much, much more than 100/18 ≈ 5.56 times as much. That factor of 5.56 is the magic 2.47 stops that take us from Value V to the edge of Value VIII.
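
That stop count is just a base-2 logarithm, if you want to check it:

    import math
    print(math.log2(100 / 18))  # about 2.47 stops from Value V up to 100% diffuse white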

Diffuse surfaces scatter

Say that my gray card is big; it’s 1 square meter. I have this big gray card to make the arithmetic easy. So it’s going to scatter back 18% of 445 Watts of visible light. That’s 80 Watts. But what about the mirror? Obviously, it depends on the angle that you look at it. At most viewing angles, you’ll see the sky or the ground or whatever. Only at a very specific angle will you see the sun. This is different from the gray card. You can see its uniform gray surface from any viewing angle at all. That means that as far as the surface of the gray card is concerned, it’s scattering its 80 Watts of light energy into the entire hemisphere that is in front of it.

Also, if I make the gray card bigger or smaller, the amount of light it’s scattering will change in proportion. If it’s 1 square meter, I’ll get 80 Watts into all angles. If it’s 2 square meters, I’ll get 160 Watts, and so on. What about the mirror? I won’t get any more or less sunlight back in that direct ray if I make the mirror larger or smaller, so long as it remains large enough for me to see the full disk of the sun reflected from where I stand.

Mirrored surfaces reflect

These two surfaces are behaving in very different ways. Let’s try to equalize them. Instead of having a perfect mirror, let’s make an old dingy mirror. In fact, it’s so old and dingy that it reflects only 18% of the light back. Even in this case, the light from the dingy 18% mirror will outshine the 18% gray card. Why?

Let’s destroy a camera with the sun!

Let’s try an imaginary game that we would never do in practice. We’ll think about the image of the disk of the sun as it is focused on our camera’s sensor. It will have a certain area on the surface of the sensor. In fact, we could cut out a small circle of some given size, paste it onto the sensor, and block out the sun completely. Indeed, there will be some focal length on our lens so that the disk of the sun would just about fill the area of the sensor with its corners dark.

To work this out, we need a mathematical formula or two. First, we want to know what angle the disk of the sun subtends at the surface of the Earth. Assuming that it’s 94 million miles away and 900,000 miles in diameter, that comes out to be around 0.55 degrees. Next, we need a formula for the viewing angle of a lens. We can get that from Wikipedia. The formula is

 f = \frac{d}{2 \tan(\alpha/2)}

where f is the focal length, d is the dimension of the sensor, and \alpha is the viewing angle we want. To just fill the vertical dimension of a 35mm camera, we want d = 24 mm. We just got the angle \alpha as 0.55 degrees. The answer is that the focal length necessary to just fill the vertical dimension of our 35mm camera is 2508 mm. That’s one whopping lens!
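
Here is a sketch of both little calculations, using the article’s round figures for the sun’s distance and diameter:

    import math

    # Sun's angular diameter from the round numbers above.
    distance_miles = 94e6
    diameter_miles = 9e5
    alpha = 2 * math.atan(diameter_miles / (2 * distance_miles))
    print(math.degrees(alpha))  # about 0.55 degrees

    # Focal length to just fill d = 24 mm with that angle.
    d_mm = 24
    f_mm = d_mm / (2 * math.tan(alpha / 2))
    print(f_mm)  # about 2500 mm: the whopping lens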

Kaboom!

Now, we can work out some basic dimensions for the aperture of this lens. At f/1, its diameter is just the focal length, 2508 mm. That means we can work out the area of this whopping lens: it’s going to be \pi (2.508/2)^2 in square meters. Ladies and gentlemen of the jury, this means that the area of our fill-the-sensor-with-the-sun lens is almost 5 square meters. With this insane setup, we would be focusing 5 \times 445 = 2225 Watts of visible light on our poor, and now molten, DSLR sensor!
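
The aperture arithmetic, sketched out with the same numbers:

    import math

    f_mm = 2508                  # the sensor-filling lens
    aperture_m = f_mm / 1000     # at f/1 the aperture diameter equals f, in meters
    area_m2 = math.pi * (aperture_m / 2)**2
    print(area_m2)               # about 4.9 square meters
    print(area_m2 * 445)         # about 2200 Watts of visible light collected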

This isn’t saying anything much that we don’t already know: don’t look right at the sun! It will burn your retina. Likewise, don’t point your camera at the sun: it will burn your sensor. We just quantified the problem.

Back to sanity

Of course, no one would be insane enough to try this in the real world; and I strongly recommend that you don’t either. Instant destruction of a valuable camera will certainly ensue. But we have a reference point. Suppose we go to a 50mm lens at f/8. Its aperture will be just a circle with a diameter of 6.25 mm. Its area relative to our sensor-killer lens will be (6.25/2508)^2 which works out to be about 6.2 parts per million. The total power from the sun goes way down to about 0.014 Watts. Yeah.
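
Scaling the collected power down by that area ratio:

    ratio = (6.25 / 2508)**2    # aperture area ratio, 50mm at f/8 vs. the monster lens
    print(ratio)                # about 6.2e-06
    print(2225 * ratio)         # about 0.014 Watts now reaching the sensor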

This power is focused onto a much smaller part of the sensor as well, because our 50mm lens has a much smaller magnification factor. It doesn’t take too much arithmetic to figure out that the diameter of the disk of the sun on the sensor is now reduced from 24mm to (50/2508) \times 24 = 0.48 mm. The power density is now about 0.08 Watts per square millimeter. If we knew the number of pixels on the sensor, we could work out the power per pixel. Take my D700 for example. It has 12 million pixels. The area of the sensor is 36 \times 24 = 864 square millimeters. A pixel has an area of 0.000072 square millimeters. Very small. We get about 5.6 microWatts per pixel. Not so much, right?
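
And the per-pixel bookkeeping for the sun’s image, as a sketch:

    import math

    sun_disk_mm = (50 / 2508) * 24          # sun's image diameter with the 50mm lens
    disk_area_mm2 = math.pi * (sun_disk_mm / 2)**2
    density = 0.014 / disk_area_mm2         # Watts per square millimeter
    pixel_area_mm2 = (36 * 24) / 12e6       # D700: 864 mm^2 over 12 million pixels
    print(density * pixel_area_mm2)         # about 5.6e-06 W: 5.6 microWatts per pixel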

What about what we’re getting from our gray card? Recall that it was kicking off 80 Watts into all angles from its surface. Say that we are 3 meters (10 feet) away from it. We have that 50mm lens at f/8. We have a 1 meter object at 3 meters away with a 50mm lens. The real image on our sensor will be about 17 millimeters on a side. (I worked this out from the basic lens formula.) Of the 80 Watts bouncing off this gray card, how much can our lens capture? At 3 meters distance, that 80 Watts is distributed over a hemisphere with an area of 2 \pi 3^2 = 56.5 square meters. The aperture of my lens at f/8 gives me an area of \pi (6.25/2)^2 = 30.7 square millimeters. So, I get a total of 80 \times 30.7 / (56,500,000) = 43.5 microWatts. This is on an area of 17^2 = 289 square millimeters. That comes out to cover about 4,000,000 pixels on a D700. On a per pixel basis, we get just 43.5/4,000,000 ≈ 0.000011 microWatts, or about 11 picoWatts per pixel.
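
The same bookkeeping for the gray card:

    import math

    watts = 0.18 * 445                          # the 1 m^2 gray card scatters about 80 W
    hemisphere_mm2 = 2 * math.pi * 3**2 * 1e6   # radius 3 m, in square millimeters
    aperture_mm2 = math.pi * (6.25 / 2)**2      # 50mm lens at f/8
    captured = watts * aperture_mm2 / hemisphere_mm2
    pixels = 17**2 / 864 * 12e6                 # about 4 million pixels under the card's image
    print(captured / pixels)                    # about 1.1e-11 W: 11 picoWatts per pixel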

A big big number

Now for the big number! The ratio between the pixels illuminated by the direct reflection of the sun and those illuminated by the gray card. Ready? It’s a factor of 5.6/0.000011 ≈ 500,000! In good old photographic stops, this is about 19 stops! Again, we’ve just quantified the difference between looking directly at the sun versus looking at scattered light from objects here on Earth, even with a normal lens.
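
Converting that ratio to stops, for the record:

    import math
    print(math.log2(5.6e-6 / 1.1e-11))   # about 19 stops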

How did this happen? We started out with a perfectly simple setup. We illuminated a gray card in a landscape with sunlight. The sunlight should be just the 100% value relative to the 18% from the gray card, and lo and behold, it is actually about 19 stops hotter. Not 2.5 stops; 19 stops! When we thought about using a dingy mirror that reduced the light by a factor of 18% (about 2.5 stops), it did almost no good. We’d need a neutral density filter of well over 12 stops to get even close, and then the rest of our scene would vanish into blackness.

What the heck just happened? The reason is extremely simple. If we look straight at the sun, we are capturing all of its light. If we look instead at some diffuse element of a scene (like a gray card), we are getting just a tiny fraction of the fraction of the sunlight it has reflected. Why? Because we’re standing back away from it. This time, the inverse square law is working, even though we are still getting the same average amount of light from the entire surface of all of the elements in the scene.

This is really the same case as when we were floating in space some distance away from that plane of light. As we back away from the plane, we get less light from the part we could see before (with the fixed viewing angle of our lens), but as that light drops off, we get more light coming in from parts of the plane we couldn’t fit into the lens previously.

Imagine for a moment that the plane is now a checkerboard pattern of white and black squares. The average is some gray. If I’m far enough away from the plane that my camera captures many squares, I’ll meter the average of the white and black values. If I move away, I get less light from any given white square; obviously, it’s a smaller element on my sensor; but I make up for this by getting the same total area of white on my sensor at any position. Exactly half the sensor is white squares no matter how I move. So, my metering value remains the same no matter where I am. (In physics, this is called a symmetry, for whatever that’s worth to you.)
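
If you like, you can convince yourself of the checkerboard symmetry numerically. This toy sketch meters an n-by-n crop of an ideal checkerboard (white = 1, black = 0); the average stays at one half no matter how many squares fit in the frame:

    # Toy checkerboard meter: the average brightness of an n x n crop,
    # mimicking backing away from the plane with a fixed viewing angle.
    def metered_average(n_squares):
        values = [(i + j) % 2 for i in range(n_squares) for j in range(n_squares)]
        return sum(values) / len(values)

    for n in (2, 10, 100):            # even n keeps whole checker pairs in frame
        print(n, metered_average(n))  # 0.5 every time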

So long as our scene consists only of diffuse surfaces, the maximum intensity of light will be that of a 100% diffuse surface, which is about 2.5 stops above Value V. It is when we have either specular surfaces or direct sources of light that the fun begins. But this already provides us with a useful fact: if we find a surface that spot meters at more than 2.5 stops above Value V, it is a specular surface or a direct light source. It must be.

Back to Zones

In practical photography, it’s rare to try to capture the direct image of the sun; not impossible, just rare. We can go with a graduated neutral density filter and some HDR techniques and shoot a sunrise, for example. What is much more typical in landscape photography is that a specular surface that isn’t quite a perfect mirror will reflect some significant part of the sun’s direct light back into our camera. In studio photography, we get a similar effect when we catch glare off someone’s eyeglasses or sheen from some metallic object. In event photography, we may capture direct light from stadium lights, street lights, car headlights, whatever.

In every case, the analysis will work out to be very similar to what I just did for a true mirror and the sun relative to a diffuse reference value like an 18% gray card. The ratio may not be quite the 19 stops we calculated because man-made lighting doesn’t have the intensity of direct sunlight, real specular surfaces are rarely perfect mirrors, and practical specular surfaces are often not large enough to reflect the entire disk of the sun; but the impact on exposure Values is the same in every case.

That is, it is quite possible to find scenes in which direct light or specular surfaces are much more than 3 stops above the Value metered from an 18% diffuse gray card. Heck, it’s not just possible; it’s almost guaranteed.

In landscape work, the classic specular surface is water. Water droplets in clouds, frozen water as snow, the surfaces of streams or lakes, running water on streets, and so on. But there are also metallic surfaces on cars, glossy paint, glass at the right angle, the list goes on. In studio work, similar instances apply.

It becomes essential, then, to accommodate elements in scenes that can significantly exceed Value V; that is, the exposure Value of a properly illuminated 18% diffuse gray card. Unless we’re daft enough to shoot right at the sun, we may not find elements that are 19 stops hotter; but finding elements in scenes that are 5 or 6 stops hotter than middle gray is extremely easy: include a cloud in your shot on a bright day.

We need a strategy for handling this kind of situation, and we still need to get 18% neutral gray right in the final print.

Stay tuned and we’ll see how to manage this.
