The "Why's" and "Why Not's" of HDR
(High Dynamic Range)
by Sarah Fox

We're snowed in here in rural Virginia, and cabin fever brings on many deep, semi-demented musings. This morning, while lying in bed, I was thinking about light, color, evolution, HDR, and the physiology of our visual systems. If this smacks to you of something a sensory-neurobiologist-turned-photographer would think about, then give yourself a little gold star. My thoughts are somewhat long and wandering. If you don't like long and wandering discussions, read no further. Otherwise please bear with me...

When I awoke, I thought about testing some of the things I had put off testing. One of these was to test certain methods of white balance with regard to clipping. This led me to think about daylight WB and how the daylight spectrum is relatively/somewhat flat. Wow, what a coincidence that the spectrum happens to be flat-ish in the color range that we see! That means the sun is at just the right temperature for us. Isn't nature amazing?! I'm being facetious, of course. What really happened is that our visual system evolved to take advantage of the spectrum of light available to us. (You creationists can say our eyes were designed to match the spectrum. Same difference. I'm not picking an evolution/creation argument here.)

"But why don't we see in UV?" There's useful information there. That's plain to any bee or butterfly. There's also useful infrared information. Just ask any snake! Also, although we are sensitive to the blue end of the spectrum, why do we have so few blue cones that we don't see much detail in blue. (Ever try to read deep blue text on a black background? We don't have any trouble with red or with green, our favorite middle-of-the-spectrum color.) Why is it that our visual systems are so heavily invested in teasing out details in the red/yellow/green part of the spectrum? Scientists are largely in agreement that our particular spectral sensitivities are a good match for our evolutionary history. Early humans and humanoids ate a variety of foods, but fruits and vegetables were right up there on the list. (We modern-day humans could learn something from our ancestors.) Of course we all know that fruits and vegetables are color coded for ripeness and freshness. For instance, a green banana isn't ready, a yellow banana is just right, and a black banana is expired. Green and fuzzy is beyond expired. We're very nicely evolved/designed to be able to look up into a tree (or in the deep recesses of a refrigerator) and instantly spot all the food that is ready to eat.

So why is it important to see blue? Here's where creationists will have to forgive me: Our extended family is made up largely of tree dwellers. When jumping from branch to branch, it's really helpful to be able to distinguish hard, woody branches (brown) from soft, newly formed branches (green), from dead branches (brown without bark), from leaves (green), from sky (blue). Every brown bit of tree stands out easily against the blue sky, and most of what's safe to grab is brown.

So that is what we NEED to see. Why don't we see more than we do? We don't see in IR because we don't need to hunt little animals in the dark. We don't see in UV, because we don't need visual aids that would allow us to make a beautiful landing on the airstrip of an orchid. We don't see in four or more colors like many/most birds, because we don't need that much color information. (We're not doing the really hard things like trying to distinguish one brown seed from another brown seed -- which might be two entirely different colors to a bird.)

We also don't see in the dark very well. It's dangerous when it's dark, and we daytime creatures are safest in our beds at night, while the night predators are on the prowl. They need good night vision; we don't. They don't need color vision to distinguish between a ripe fruit and an over-ripe one, because they don't eat fruits. Therefore they can pack their retinas with smaller, more sensitive photoreceptors called rods, instead of our space-hogging, color-sensitive cones. (That's an oversimplification, by the way.) Just like us, they see what they need to see, and they're not invested in hardware that would let them see things that are interesting, but useless.

Those readers who haven't fallen asleep or moved on might see a theme here: We see what we need to see. We don't see what is useless to us. Why is that? It's that our visual systems are HUGE and complicated, requiring an enormous investment in neural machinery. In fact we burn up a whole lot of calories just running our visual systems. The whole back end of the human brain is used for processing nothing but visual information! So if we concerned ourselves with IR and UV light, or with night vision, or with color vision based on four or more photopigments, we'd either have to have even larger brains or figure out what other visual capabilities to cut back on. (Thinking like a rat, maybe eating a rotten banana isn't so bad after all!)

What does ANY of this have to do with HDR? I'm getting to that. Be patient...

There is a lot of information before our eyes in almost any environment, and we have to sift through it for the information that is useful and meaningful to us. Our visual systems are not like cameras, but rather like computers that use light information to decipher the world before us. Much of what our visual system does is to sift relevant information from irrelevant information, because we have a hard enough time just keeping up with the relevant information! (This would be another very long discussion by itself, but it would honestly make the average person's brain melt without the benefit of a one semester course.) Just like we don't want to be confused by seeing colors we don't need, we don't want to be confused with extraneous visual information of any sort.

Where does HDR come in? Well, our eyes auto-expose to some extent. Most people attribute this to variable aperture (the iris constricting or dilating), but it's really more a matter of photopigment bleaching. Avoiding the particulars of that, it suffices to say that light hitting the retina in one spot will make that patch of the retina less sensitive. Moreover, there's a process called lateral inhibition, whereby global brightness becomes less important than local contrast. Again, this would be at least a semester-long discussion. It suffices to say that our visual machinery culls out just what it needs to decipher the environment and doesn't really "want" any other information. Furthermore (and here's where HDR almost comes in), these processes are much like the "local contrast enhancement" that is done in the tone mapping process of HDR. So there is nothing inherently "alien" about the tone mapping process.
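For readers who like to tinker, here is a toy sketch of that analogy (my own illustration, not anything from the neuroscience literature, and the filename is a placeholder): local contrast enhancement judges each pixel against a blurred average of its neighborhood, much as lateral inhibition weighs a photoreceptor's signal against its neighbors', so local contrast wins out over global brightness.

```python
import cv2
import numpy as np

# Toy local contrast enhancement: compare each pixel to a large-radius
# blurred average of its surroundings (a stand-in for lateral inhibition).
img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

neighborhood = cv2.GaussianBlur(img, (0, 0), sigmaX=30)  # local average
amount = 0.6  # strength of the "inhibition"

# Pixels brighter than their surroundings get pushed up; darker ones, down.
enhanced = img + amount * (img - neighborhood)
cv2.imwrite("local_contrast.jpg", enhanced.clip(0, 255).astype(np.uint8))
```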

Of course those who know HDR know that tone mapping is simply a second stage process used for representing an HDR image in a lower dynamic range medium. So what of HDR itself? If there is an obvious use to HDR techniques, it is to create an image that represents all of the dynamic range we see before us. That's a tall order. As I look out over the snowy lawns and rooftops to see deeply shaded cedar trees, I realize there is no way I can capture both the highlight and shadow detail in a single frame on my camera -- not even close. Of course there are ways to do this with multiple digital frames at different exposures, and there are also ways to do this with a very low contrast film negative. However, HDR, by itself, is a useful way to combine information from multiple digital exposures into a single image file for whatever use it may serve.
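For the digitally inclined, here is a minimal sketch of that multi-frame approach using OpenCV's Debevec merge, which combines a bracketed sequence into a single floating-point radiance file. The filenames and exposure times below are placeholders, not a prescription.

```python
import cv2
import numpy as np

# Hypothetical bracketed sequence and its shutter speeds (in seconds).
filenames = ["under.jpg", "normal.jpg", "over.jpg"]
exposure_times = np.array([1/200.0, 1/50.0, 1/12.5], dtype=np.float32)

images = [cv2.imread(f) for f in filenames]

# Merge the bracketed frames into one floating-point radiance map.
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times=exposure_times)

# Save as a Radiance .hdr file, which preserves the full dynamic range.
cv2.imwrite("merged.hdr", hdr)
```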

Unfortunately the sad fact is that there's only so much dynamic range we can put on paper. A monitor can potentially display even more, but the technology still has a long way to go before it can accurately represent what I'm viewing out my window. If we can eventually create monitors up to the task (maybe LED monitors of the future?), then HDR (without tone mapping) will be a godsend and will give us the ability to recreate a scene much like what we originally viewed with our eyes. Until then, we're stuck with lower dynamic range media.

Where HDR gets into trouble is in the tone mapping process, which seeks to pack the expansive dynamic range of the HDR image into the limited dynamic range of, say, a paper print. Tone mapping, a.k.a. local contrast enhancement, is a process not unlike local photopigment bleaching and lateral inhibition, so it's not entirely "unnatural." What is unnatural is that it's done twice. The image is first digested and preprocessed via tone mapping, and then the digestion products are passed to the visual system for further processing. This often results in what I would call an overload of detail. Remember, our computational resources are limited in our already bloated brains, and we really don't want extraneous information. When we are bludgeoned with detail from every little stick or pebble in a scene, it's easy for one to get totally overwhelmed, so that one doesn't really "see" (or at least focus on) the more important aspects of the photograph -- well, at least speaking for myself.
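Continuing the sketch from above (again, just one possible workflow, not a recommendation of the look it produces), the tone mapping stage might be done in OpenCV with, say, the Drago operator:

```python
import cv2

# Load the radiance map produced earlier and compress it for an 8-bit medium.
hdr = cv2.imread("merged.hdr", cv2.IMREAD_ANYDEPTH)

# Drago's operator is one of several tone mappers bundled with OpenCV.
tonemap = cv2.createTonemapDrago(gamma=2.2)
ldr = tonemap.process(hdr)  # float image roughly in the [0, 1] range

cv2.imwrite("tonemapped.jpg", (ldr * 255).clip(0, 255).astype("uint8"))
```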

This is really a marvelous age, in which we can start to ask ourselves questions about whether new technologies like HDR and tone mapping SHOULD be used, and if so, to what degree. However, the question of dynamic range and its limitations is hardly new. In fact, besides being a splendid nature photographer, Ansel Adams was one of the pioneers in this area. Through pull processing of his film, Adams found he could create a low contrast printed image that actually would represent an expanded dynamic range. However, it might also result in a rather dull, unappealing photograph. Rather than try to pack his images with some representation of all the dynamic range he could find, Adams' approach was to identify all the information that he really wanted in the image and to capture only that information in the negative. This deliberate process was very much driven by the concept that unimportant information eats limited resources, to the detriment of important information. (Remember, that's the underlying design philosophy of the visual system.)

Drawing all this into a big pile, I would say there is probably some benefit to expansion of dynamic range in the final print. I say this only for one reason: As I look out my window, I realize there is no way I can show what I'm seeing in a conventional print, from the subtle contours of the snowy rooftops to the textures of the dark, shaded cedar foliage. We often look at our final images and conclude that they have somehow come up short, often in the form of a blown out sky. I know it's usually an exercise in frustration to photograph my black cat in any environment that isn't equally black. So sometimes, not always, I'm in want of greater dynamic range. HDR and tone mapping are one approach to solving this problem. Yet there are others.

One approach is to reduce the overall dynamic range in the frame. This is often done in landscape photography with graduated neutral density filters, which gradually darken the frame above the horizon. This approach often yields very good results. Likewise, a polarization filter or a yellow filter (in B&W photography) can be used to darken the sky. Another useful approach is to lighten the subject with respect to a bright background using electronic flashes. The Strobist blog is a must-read for this approach. These approaches don't always produce "natural" looking results, but they can produce pleasing results nonetheless.

Another approach is to judiciously blend two or more differing exposures of the same scene, so as to bring out detail in the shadow and/or highlight areas of the image as needed. Because modern digital cameras capture more dynamic range than can be represented in print, one can blend two differently contrasted renderings of the same frame, provided the exposure is spot-on. Better still, two different frames can be captured at different exposures and blended in postprocessing. They can either be combined through HDR or not. (It doesn't matter.)

The advantage of any of these alternative approaches over HDR is that they do not produce the same image flaws that HDR and tone mapping do, namely an overall "gray" appearance, with bright halos around dark objects. When blending different exposures, shadow and highlight detail can be revealed with tightly constructed masks positioned right up against major high contrast borders, such as a horizon. These approaches require considerable skill, but they pay off well for those with the patience to use them.
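As a rough illustration of the blending idea (a simple luminosity mask, not the tightly hand-built masks described above, and with placeholder filenames and blur radius):

```python
import cv2
import numpy as np

# One frame exposed for the highlights, one exposed for the shadows.
dark = cv2.imread("exposed_for_sky.jpg").astype(np.float32)
bright = cv2.imread("exposed_for_shadows.jpg").astype(np.float32)

# Build a mask from the darker frame's luminance: near 1 in its shadows,
# near 0 in its highlights. A soft blur keeps the transition close to the
# high-contrast border without creating an obvious halo.
lum = cv2.cvtColor(dark, cv2.COLOR_BGR2GRAY) / 255.0
mask = cv2.GaussianBlur(1.0 - lum, (0, 0), sigmaX=25)[..., None]

# Take shadow detail from the brighter frame, highlight detail from the darker.
blend = mask * bright + (1.0 - mask) * dark
cv2.imwrite("blended.jpg", blend.clip(0, 255).astype(np.uint8))
```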

In the end, with all the advancements that digital photography and postprocessing have laid at our feet, we photographers must make some important decisions: What do we include in a photo, and what do we not include? Here I will state my belief that we should not pack everything into a photo that we can, even if we are technically able. Why? Well, this all harkens back to that long discussion at the beginning of this article. We as biological creatures are limited as to what we can process, even with our expansive visual systems. The design and implementation of our visual systems emphasizes what is necessary and important and diminishes what is not. It is judicious in this selection process because it has to be. Even Ansel Adams, who burnt the midnight oil refining how he captured dynamic range, decided that there were some elements that should be included in the dynamic range, and some elements that should not.

The concept that some information belongs and some doesn't runs throughout photography. Judicious photographers will often take great pains to remove distracting elements from an image, whether through arrangement of a scene, lighting, or postprocessing. The reason is that they want to direct focus to the elements of the image that they feel are important.

This brings us back to HDR and tone mapping. The global purpose behind this approach to photography is to pack more information into an image than would otherwise be possible. But does that give us too much information? I suppose there might be scenes in which EVERY element is so important that it should feature equally prominently -- every pebble, every stick, every hair, and every wisp of a cloud. However, I don't recall ever seeing such a thing. If I did, I think it might give me a panic attack. To me, some elements are more important, and some are not. To me a photograph should be somewhat like a poem. It should dwell on the important elements and not address the extraneous elements. Shakespeare wrote, "But, soft! What light through yonder window breaks? It is the east, and Juliet is the sun." He didn't write, "The time is 6:32 AM. Juliet Capulet, a Caucasian female, age 13, has just peered out of her bedroom through an arched stone window of dimensions 2.4 ft by 4.6 ft. She is unaccompanied. She is wearing a woolen, cream colored dressing gown that is tied around her neck, apparently with a common bow knot. The direction of her gaze is approximately 0 degrees in elevation and 265 degrees in azimuth, apparently towards a large oak tree at a distance of 126 ft -- to the trunk." Those details would have been irrelevant and indeed would have lost the point of one of the most beautiful scenes of his play.

In my estimation, an image with heavy HDR and tone mapping is much like the investigative report of Romeo Montague, P.I. It might be very useful for photographing a crime scene, but it's not really something I want to look at. Rather than have the photographer lay out each and every detail of a scene to me, I would rather know what that photographer is trying to say -- what that photographer feels is important about the scene. Sometimes a photographer will lead me visually through a short series of little details to get to the main point. I usually smile when that happens, because I know I've been skillfully taken by the hand to go and see something. There is much to be said for simplicity. That is the nature of my limited visual processing capabilities. If you throw enough needless details at me, I get overwhelmed and lose the point.

HDR and tone mapping may well have their place in photography at this stage in our technology, but I would suggest their use should be limited -- VERY limited -- if used at all. They might be useful for clarifying technical images (e.g. photographs of machinery) or for generating real estate photos (in which most details are indeed important). They might also be an easily accessible method for novices that yields generally acceptable, albeit suboptimal, results. In fact I can see a day when tone mapping is used in-camera for compression of wider dynamic ranges into JPEG format, making exposures considerably more forgiving for casual shooters. However, for those with more advanced skills, I feel the alternatives to HDR achieve much better results, albeit with much greater effort. A good friend pointed out to me, in defense of HDR, that not all HDR work is garish and overdetailed; some of it is done very subtly and tastefully. I do agree with his point. However, I still believe anything that can be accomplished with subtly applied HDR can be achieved much better with alternative methods, particularly with layering and blending of differently exposed or contrasted layers.

I realize there are strong proponents of HDR/tone mapping approaches who might be reading this. I apologize if what I say offends them. Perhaps I should simply say HDR is not my cup of tea. I feel it is a fad that will soon find company with other digital manipulations like brush stroke conversions. I think it is interesting to observe the daily pictures on PhotoNet, a popular website for photographers. The daily photographs are culled from people's portfolios on the basis of their popularity to the PhotoNet community. For a short while, every second or third image was an HDR. I now see the HDR images declining in presence in that daily lot, indicating that their popularity is waning somewhat amongst photographers. I can only imagine that it's because these images leave us feeling a bit visually overwhelmed with unimportant details. I can certainly understand the initial appeal of HDR imaging. There's a certain "wow" factor when an otherwise dull image is cranked through the HDR process, coming out the other end with "punch." I can understand what it's like to gaze in awe at all the hidden details the process can pull out of a picture. But in the end, I have to ask myself whether I really want all that information there.

It has taken me some time to figure out why I don't like HDR (there, I said it), beyond the obvious artifacts, that is. I feel I have figured something out about photography in this process, or at least I have refined my philosophical approach to photography. As such, the HDR phenomenon has been very useful to me. I realize my diffuse ramblings may have seemed excessive, but I think they paint the context behind the things I have finally come to realize, namely that there is such a thing as too much information. I hope others have benefitted from these points of view that I have shared.
