Photo of the Week #10: UCLA in High Dynamic Range


I was at UCLA a few weeks ago for a plasma physics winter school, a week-long workshop for grad students and post-docs. In the evenings we had homework sessions on the roof of the physics building, and one evening I took several shots of a building across the street. The photo above is a “tonemapped” high dynamic range (HDR) image compiled from a stack of three bracketed photos taken at different exposures.

The light was amazing: the sun was setting to my left as I was taking the photo. However, I knew that no single exposure could capture both the detail in the clouds and the detail in the shadows on the right side of the picture; the scene had too large a dynamic range. So I took three pictures using the auto exposure bracketing feature of my camera. These pictures (seen below) were all taken with the aperture set at f/8, but with shutter speeds of 1/6, 1/10, and 1/15 of a second.

What is Dynamic Range?

Static dynamic range refers to the difference between the brightest and darkest things you can see at the same time, without moving your eye around. The static dynamic range of the human eye is generally around 100:1. So the dimmest thing you can really see when looking at, say, a campfire is about 100 times dimmer than the fire itself. In photography, differences in brightness are typically discussed in terms of “stops.” A stop is a factor-of-two difference in brightness, so a ratio of 100:1 corresponds to between 6 and 7 stops (2^6 = 64; 2^7 = 128).
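That arithmetic is easy to check in a couple of lines of Python (using the 100:1 ratio and the shutter speeds listed above):

```python
import math

def ratio_to_stops(ratio):
    """One stop is a factor of two, so stops = log2(brightness ratio)."""
    return math.log2(ratio)

print(ratio_to_stops(100))             # ~6.64 stops for a 100:1 ratio
print(ratio_to_stops(2))               # exactly 1 stop
# At a fixed aperture, the same rule applies to shutter times:
print(ratio_to_stops((1/6) / (1/15)))  # ~1.32 stops across the whole bracket
```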

Of course the total dynamic range of your eye is MUCH bigger than that. In total, your eye can resolve an impressive 20 stops, or about a 1,000,000:1 ratio of luminosities (brightnesses). That means if you move your eyes around, they can adapt to see a much wider range of luminosities (just not all at the same time).

My camera, however, can only resolve a modest 5 stops in a single exposure (stored as an 8-bit-per-color-channel JPEG file; this post really deserves its geek tag, doesn’t it?). Most cameras have a similar limitation. Consequently, when you look at a photo taken with basically any camera (digital or film) and displayed on a typical monitor or on photo paper, the luminosity information has likely been heavily truncated. This is why skies often look white in photos even though they looked blue in person.
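To illustrate the truncation (with made-up luminosity numbers, not data from these photos), here's a minimal sketch of a single 8-bit exposure: one exposure setting keeps the shadows but clips the sky to pure white, while another keeps the sky but crushes the shadows to black.

```python
# Five hypothetical scene luminosities, deep shadow to bright sky (linear units).
scene = [0.5, 2.0, 40.0, 500.0, 3000.0]

def expose(lum, gain):
    """Map linear light to an 8-bit pixel, clipping at black (0) and white (255)."""
    return max(0, min(255, round(lum * gain)))

print([expose(l, gain=2.0) for l in scene])   # → [1, 4, 80, 255, 255]: sky clips
print([expose(l, gain=0.05) for l in scene])  # → [0, 0, 2, 25, 150]: shadows crush
```

No single gain setting keeps both ends of the scene inside the 0–255 range; that's the dynamic-range problem in miniature.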

What is Tonemapping?

One way to convey more of the luminosity information from the original scene is to combine multiple exposures into an image that contains a wide dynamic range. That luminosity data can then be compressed down to a range that a monitor or print can actually display; the compression process is called tonemapping. In the example above, I took 3 photos, each spanning 5 stops and separated by about 2/3 of a stop, and combined them to yield a single photo that retains local contrast information in both the highlights and the shadows. Here are the original images:

While the local contrast information has been better retained everywhere in the tonemapped image, the total dynamic range has not been increased and is still limited by the maximum dynamic ranges of the file format and the display device. I used software called Photomatix Pro to do the tonemapping. The free trial version can make images as large as the one above, or larger ones that have a watermark on them.
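Photomatix's actual algorithms aren't public, but the basic idea can be sketched in a few lines. This is a toy illustration with made-up pixel values, not the method used for the photo above: merge bracketed 8-bit exposures into relative luminances (trusting mid-tone values, distrusting clipped ones), then compress the result back into displayable range with a Reinhard-style global operator.

```python
def merge_exposures(exposures, shutter_times):
    """Estimate relative scene luminance per pixel from bracketed 8-bit values."""
    merged = []
    for samples in zip(*exposures):
        num = den = 0.0
        for value, t in zip(samples, shutter_times):
            # Trust mid-tones; give zero weight to clipped values (0 or 255).
            w = 1.0 - abs(value - 127.5) / 127.5
            num += w * (value / t)   # pixel value / exposure time ~ luminance
            den += w
        merged.append(num / den if den else samples[0] / shutter_times[0])
    return merged

def tonemap(luminances, key=4.0):
    """Reinhard-style global operator L/(1+L), rescaled back to 8 bits."""
    peak = max(luminances)
    return [round((lum / peak * key) / (1.0 + lum / peak * key) * 255)
            for lum in luminances]

# Three hypothetical exposures of the same five pixels (shadow -> sky),
# shortest shutter time first.
dark   = [0, 0, 2, 25, 150]
mid    = [0, 1, 20, 250, 255]
bright = [1, 4, 80, 255, 255]
hdr = merge_exposures([dark, mid, bright], [1/15, 1/10, 1/6])
print(tonemap(hdr))  # all five pixels now land inside the displayable 0-255 range
```

Real tonemapping software additionally preserves local contrast (it compresses big, smooth brightness gradients while keeping small-scale detail), which is why the result above looks so much richer than this global sketch would.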

Explore posts in the same categories: From the Road, Geek, Photo of the Week, Pictures



3 Comments on “Photo of the Week #10: UCLA in High Dynamic Range”

  1. martinsoler Says:

    Nice colors. IMHO the blurry foreground doesn’t help the image too much. Try to darken it a bit.
    http://martinsoler.com/category/hdr/

  2. Scott Says:

    I’m confused. None of the component images seem to contain any information about the foreground foliage. Is that just a limitation of my computer screen, or is there some weird correlation function being computed somewhere that brings them out (seemingly) of nowhere?

  3. bpatricksullivan Says:

    You know, that’s a good point. None of the individual pictures shows that particular area of detail very well, but it’s a combination of the limitation of the display medium and of the human eye.
    If you open either the top or the bottom of the three component images in GIMP or any program that can “adjust curves,” you can bend the curve in the shadows to pull that detail out of that part of the image. In the process of doing that, though, you’ll typically totally blow out the brighter parts of the image, or end up with some weird-looking gray patches in some range of brightnesses.
    It does kind of look like magic in the above example though.

