Note: You can also download this tutorial in case you prefer to print or read it offline.
In this tutorial I will introduce you to the fascinating world of HDR photography and show you how to create very realistic HDR images. Along the way I will try to answer the questions that come up most often.
I will also talk about typical problems related to this kind of photography and propose solutions to them, and I will share the workflow I use in my daily post-processing of HDR images. In other words, I will write mostly about realistic, natural-looking HDR photos.
Moreover, I'll share a bunch of sample HDR photos to get you inspired (and hopefully to keep you interested, as this tutorial is quite long).
This is a very good example of an HDR photo. Without HDR I would have ended up with either a too-bright sky or too-dark water and trees. With HDR I was able to make an image that is properly exposed everywhere.
WHAT IS HDR?
Many people have a wrong idea about what HDR photography really is. HDR stands for High Dynamic Range, which means nothing more than a wide range of luminosity and contrast in a scene. It is neither a special effect nor a post-processing technique. Remember that.
HDR tries to solve the following problem. A real-life scene can have a contrast of 100,000:1 or even much higher. This ratio describes the difference between the brightest point of the scene (e.g. the sun) and the darkest (e.g. the deep shadow of a tree). Sometimes the contrast is so high that even our own eyes can't take it all in, and we perceive parts of the scene as very dark or very bright even though they definitely contain detail. HDR addresses this by having you take several photos, each exposed for a different part of the scene, so that you have frames exposed for the shadows, the highlights and the midtones. These photos are then merged in HDR software to create an image with correct exposure across the whole frame – no overexposed and no underexposed areas.
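The relationship between a contrast ratio and EV stops is just a base-2 logarithm, since each stop doubles the amount of light. A quick sketch of the arithmetic (the function names are my own, for illustration only):

```python
import math

def ratio_to_stops(ratio: float) -> float:
    """Convert a contrast ratio (e.g. 100000 for 100,000:1) to EV stops."""
    return math.log2(ratio)

def stops_to_ratio(stops: float) -> float:
    """Convert EV stops back to a contrast ratio."""
    return 2 ** stops

# A 100,000:1 scene spans roughly 16.6 stops -- far beyond a single exposure.
print(round(ratio_to_stops(100_000), 1))  # -> 16.6
```

So when a scene is quoted as 100,000:1 and a camera as 14 stops, you can compare them directly: the camera covers only about 2^14 ≈ 16,000:1 of that range.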
So basically HDR is about reality; it is about light. HDR is what we see every day but what our cameras fail to capture properly. The HDR we use in photography is a sort of trick to overcome the limitations of the current generations of cameras and display devices. And it isn't the only solution photographers use to overcome these limitations. Some use graduated neutral density filters (filters that are darker at the top, so they stop more light there and help balance the exposure), some blend images manually in Photoshop. Each of these solutions can work great; I just prefer to use HDR.
Maybe you have heard that you should only use manual blending or graduated filters because HDR isn't good – that it's artificial, grungy, surreal or simply not pro enough. Or maybe you were told that with the newest cameras HDR isn't needed anymore. None of these opinions is really true. The grungy, surreal look is simply a mistake made by the photographer – in this tutorial I will show you how to create natural-looking HDR images with ease. Some people aren't even aware that my photos are HDR! As for the other opinion... the newest cameras do have a dynamic range of 14–15 EV (apart from Canon, which lags behind) – quite a lot. The problem is that many situations, such as sunsets, sunrises or interiors, can have a much larger dynamic range of 18 to 22 stops. Or even more!
An HDR (High Dynamic Range) photo contains much more information about luminosity than a Low Dynamic Range photo (such as a single JPG or TIFF). Luminosity is a characteristic of light, not colour – it has nothing to do with colour temperature or saturation. That's why I said HDR is not a special effect: light is something that surrounds us, and dealing with it can't be thought of as a special effect.
This HDR photo was taken in Warsaw, Poland by capturing 7 exposures at 1.0 EV spacing.
Now it’s time for some examples.
Just move from a very dark room outside where the sunlight is very strong. At first everything is almost white and faded, then colours begin to look normal – but look back and everything is dark, almost black. That's because our eyes have a dynamic range of only about 10,000:1, meaning we can't see details in very dark shadows and very bright highlights at the same time. Please note that this example is a huge simplification, because our eyes have a great ability to adapt to the available light.
Some of the most impressive HDRs are those taken at night. In this case I took 3 exposures, 2 EV apart.
Another example might be a forest with a beautiful play of light and shadow – lots of dark places and light shafts coming through the leaves. Your camera will fail to capture details in both the highlights and the shadows – you will end up with an image where some areas are overexposed and others underexposed.
Or yet another example – a cave. Try shooting the outdoors from inside. In both cases our camera fails: it cannot capture enough detail in both highlights and shadows, no matter what its dynamic range is (our cameras in fact have a much worse dynamic range than our eyes). And even if it could, there is no display device capable of properly displaying such a photo. Who knows, maybe one day it will be possible, but not yet. Regarding a camera's dynamic range: my Canon 5D Mk III, for example, has a dynamic range of about 2,000:1... not really good.
Take a look at the photo below to better understand what I'm talking about.
The left part was exposed for the sky – you can see some really beautiful clouds there, but the shadows in the forest are very dark, almost black. I could brighten them up, but they would contain a lot of noise – way too much noise to be useful.
The image on the right, on the other hand, was exposed for the water and forest, and this time they look nice. The drawback is that those beautiful clouds are completely blown out – almost white. Darkening the sky wouldn't help, as there is no information left in the highlights at all – it was lost at the moment the photo was taken.
Comparison of -3 EV and +2 EV exposures
Without HDR I would end up with an image correctly exposed either for the sky or for the water and forest. Without HDR I wouldn't be able to get correct exposure across the whole frame. Without HDR I couldn't have captured the beautiful image from the very beginning of this tutorial (yes, it was created from the images above!). What's more, a graduated neutral density filter is not the best option here, as the boundary between the sky and the horizon is very irregular.
A word about tone-mapping
Based on the above, I would say that HDR is in fact a trick – something that lets us overcome the limitations of current devices. It takes a photo with a much wider luminosity range and maps it back into a space that our monitors can display. This mapping (known as tone-mapping) is necessary because a true HDR photo cannot be displayed on a typical monitor. The primary purpose of tone-mapping is therefore to compress the luminosity of the HDR image so it fits the range a monitor can display correctly. A tone-mapped image IS NOT an HDR image anymore – it becomes an LDR (Low Dynamic Range) image. Strictly speaking, then, using the term "HDR photo" for images that were tone-mapped isn't correct, even though it's widely used.
That said, what you should primarily use tone-mapping for is making sure that details in both highlights and shadows are correctly preserved. You don't need to care much about colour temperature or saturation at this stage (although you should correct them if they are wrong). As you may guess, there are virtually infinite ways of tone-mapping a photo (there is an infinite number of functions mapping from the wide range to the low range), but all algorithms (known as operators) fall into one of two categories:
- Local operators – they work on local features of the image, i.e. a small neighbourhood of each pixel. This means tone-mapping might work differently for each pixel depending on the characteristics of its surroundings. Local operators are commonly used in HDR software because they produce more appealing images, with details and micro-contrast well enhanced. However, they have a few drawbacks. First, they can amplify noise, as software cannot always tell whether something is noise or very fine detail, so it may treat noise as detail – when small details are enhanced, so is the noise (many sharpening tools must deal with the same issue). Another problem is that local operators can produce halo artifacts around edges between regions with different luminosity values.
- Global operators – each pixel is tone-mapped in the same way, based on some global image characteristic (such as average luminosity). As you may have guessed, this makes these methods really fast (one of the reasons they are used in video games more often than local operators), but some detail may be lost. The greater the dynamic range of the source image, the greater the possible loss of detail.
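To make the idea of a global operator concrete, here is the classic Reinhard mapping L/(1+L) applied uniformly to a list of luminance values – a toy sketch of the category, not the actual algorithm used by any particular HDR application:

```python
def reinhard_global(luminances):
    """Map HDR luminance values into [0, 1) with the global Reinhard
    operator L / (1 + L): every pixel uses the same formula, regardless
    of its neighbourhood (that's what makes the operator 'global')."""
    return [L / (1.0 + L) for L in luminances]

# Deep shadow, midtone and extreme highlight all land inside [0, 1),
# but very bright values are compressed much more than dark ones.
hdr = [0.01, 1.0, 1000.0]
print(reinhard_global(hdr))  # -> [~0.0099, 0.5, ~0.999]
```

Because every pixel goes through the same formula, the operator is trivially fast and parallelizable – which is exactly why global operators dominate in real-time rendering, as noted above.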
As mentioned above, the main advantage of global tone-mapping operators is their speed – they are much more frequently used in real-time scenarios (like video games). Local operators, however, produce much more appealing results, as they enhance details and contrast locally, taking more characteristics into account. That's why we photographers use them more often than global ones.
This high dynamic range photo was taken in Madrid during blue hour. I used 7 exposures at 1 EV spacing and tone-mapped the image using Contrast Optimizer described in this tutorial.
TAKING A HDR PHOTO
I mentioned that today's cameras aren't capable of capturing a real-life scene's dynamic range, so the question is: how do we take an HDR photo?
We use a simple trick. Instead of taking a single exposure with very limited dynamic range, we take 2, 3, 5 or more, each exposed differently – some darker and some brighter than the "correct exposure". By correct exposure I mean the photo you would take if you decided to shoot a traditional, non-HDR photo. These photos (often referred to as bracketed photos or a bracketed sequence) are then merged into one 32-bit-per-channel image in the merge-to-HDR process. The resulting photo has a depth of 96 bits (32 bits per channel), so it holds much more data about the scene's luminosity than any of the source images!
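As a heavily simplified sketch of what the merge does (real software also recovers the camera response curve, aligns frames and so on), each pixel's radiance can be estimated from every exposure, weighted so that near-clipped values count for less. This is a toy version of the idea, under my own simplifying assumptions:

```python
def merge_to_hdr(pixel_values, exposure_times):
    """Merge one pixel's values (0..1) from several bracketed shots into a
    single radiance estimate.  Each shot contributes value/exposure_time,
    weighted by a 'hat' function so values near 0 or 1 (clipped shadows
    and blown highlights) get little say in the result."""
    num = den = 0.0
    for v, t in zip(pixel_values, exposure_times):
        w = 1.0 - abs(2.0 * v - 1.0)   # 1 at mid-grey, 0 at pure black/white
        num += w * (v / t)
        den += w
    return num / den if den else 0.0

# Three shots of the same pixel, 2 EV apart (1/100s, 1/25s, ~1/6s):
# the blown-out value from the longest exposure is ignored entirely.
print(merge_to_hdr([0.2, 0.8, 1.0], [0.01, 0.04, 0.16]))  # -> 20.0
```

Note how the recovered radiance (20.0) agrees with both unclipped frames (0.2/0.01 and 0.8/0.04), while the clipped frame contributes nothing – which is exactly why the merged image holds more luminosity information than any single source photo.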
Taking HDR photos with auto-bracketing
The easiest way to capture several images with different exposures is to use your camera's auto-bracketing feature – in auto-bracketing mode the camera takes several photos automatically, each with a different exposure. You control the difference by setting the EV spacing in your camera. Note that auto-bracketing is implemented differently from camera to camera, so I suggest you check your camera's manual to set it up.
Auto-bracketing (sometimes referred to as AEB – Auto Exposure Bracketing) is a feature where the camera takes several successive shots, each with a different exposure but the same focal length and focus point. Historically it was used to ensure that at least one of the photos would be correctly exposed, but nowadays it's most commonly used for taking HDR photos.
Although auto-bracketing was invented for another purpose (namely, increasing the chance of a correctly exposed photo in difficult light), it works fantastically for HDR photography. I also use burst mode, which slightly decreases the time between shots in the sequence – very important when shooting handheld (more about that in a moment).
I start by finding the right exposure for the middle photo, i.e. the one I would use if I weren't taking an HDR photo. This is especially important for difficult scenes like a beach or snow, where exposure compensation may be necessary to get a good photo. Then I fire the auto-bracketing sequence using a 1 EV step (sometimes 1.5 EV).
What if your camera doesn't have an auto-bracketing feature, or it is very limited? In that case you will need to switch to fully Manual mode (M), set the shutter speed and aperture for your middle photo, take a photo, change the shutter speed, take another photo, change the shutter speed again, and so on until you have captured all your bracketed photos. You will also need this approach for long-exposure HDR photos, as auto-bracketing usually doesn't work with exposures longer than 30 seconds.
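For this fully manual approach, the arithmetic is simply doubling (or halving) the exposure time per EV step. A small helper to pre-compute the shutter speeds – a hypothetical sketch, not a feature of any camera or application mentioned here:

```python
def manual_bracket(middle_shutter: float, spacing_ev: float, count: int):
    """Return shutter speeds (in seconds) for a bracketed sequence centred
    on the middle exposure.  Each step of `spacing_ev` stops multiplies
    the exposure time by 2**spacing_ev (1 EV doubles it, 2 EV quadruples it)."""
    half = count // 2
    return [middle_shutter * 2 ** (spacing_ev * i) for i in range(-half, half + 1)]

# 5 shots around 1/60s at 2 EV spacing:
print(manual_bracket(1 / 60, 2.0, 5))
# darkest to brightest: 1/960s, 1/240s, 1/60s, 1/15s, ~0.27s
```

Dial these in one by one in Manual mode, changing only the shutter speed, and you have the same sequence an auto-bracketing camera would produce.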
In which mode should I shoot?
First of all, to capture good HDR photos you have to forget about automatic modes if you still use them, because they usually don't allow you to set up auto-bracketing (and they are very limiting, so the sooner you leave them, the better for your photography and creativity).
It is important to note that bracketed photos have to be taken in one of two modes:
- Aperture Priority (Av, A),
- Fully Manual (M).
Why? The answer is quite simple: we have to be able to change the exposure between the consecutive shots of the sequence, but we want to change the exposure time only. Changing the aperture instead could cause problems due to large differences in depth of field, and changing the ISO could introduce more noise in some photos.
How many bracketed photos should you take?
Taking the right number of bracketed photos is very important: with too few photos, noise in the final shot can become more prominent, or highlights may turn grey.
But how many photos should you take? Unfortunately the answer isn't that simple, as the number of photos required to cover the dynamic range varies from scene to scene. Those photos should cover as much luminosity as possible, from the brightest to the darkest parts of the frame. Sometimes it's enough to take 1 photo (yes – sometimes there is no need to bracket at all; in fact, in such a case HDR isn't needed), sometimes 3 photos will do, sometimes 5, 7 or even more. Of course, the number of photos depends on the EV spacing between the shots; the most popular steps are 1.0, 1.5 and 2.0 EV.
A histogram is a graphical representation of the distribution of data, drawn as a series of adjacent rectangles whose heights show values (the higher the rectangle, the bigger the value it represents). In other words, a histogram shows how many members of a whole group belong to a given class or category. In photography it tells you how many pixels have a given luminosity value.
As a rule of thumb, remember that your darkest photo should have properly exposed highlights, and your brightest photo should have the shadows in the midtones part of the histogram.
Tip: there is an easy way to take the right number of bracketed photos for a given scene. Set your camera to Aperture priority mode and set the Metering Mode to Spot Metering. Aim your camera at the brightest spot of the scene and note down the shutter speed (let's call it A). Do the same for the darkest spot of the scene (call this shutter speed B). Now, starting with shutter speed A, take a photo, then lengthen the exposure time by your chosen EV spacing (e.g. 2.0 EV) and take a photo again. Repeat until you take the photo with shutter speed B and you're done – you have captured the required number of bracketed photos for that scene.
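The tip above boils down to a small calculation: the scene's range in stops is log2(B/A), and you need enough frames at your chosen spacing to span it. A sketch of that arithmetic (function and parameter names are my own):

```python
import math

def photos_needed(shutter_bright: float, shutter_dark: float,
                  spacing_ev: float) -> int:
    """Number of bracketed frames needed to span a scene, given the
    spot-metered shutter speeds for its brightest spot (A) and darkest
    spot (B), at the chosen EV spacing between frames."""
    scene_stops = math.log2(shutter_dark / shutter_bright)
    return math.ceil(scene_stops / spacing_ev) + 1

# Brightest spot meters at 1/1000s, darkest at 1/4s: an ~8-stop scene.
print(photos_needed(1 / 1000, 1 / 4, 2.0))  # -> 5 frames at 2 EV spacing
```

The same scene at 1 EV spacing needs roughly twice as many frames – which is exactly the trade-off discussed in the next section.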
What EV spacing to use between photos?
OK, so should you choose 1.0, 1.5 or 2.0 EV spacing? Generally speaking, 1 EV gives the smoothest tonal gradations and makes deghosting a little easier (more on that later). But it also requires twice as many shots as 2 EV spacing. So the answer is: it depends.
But to give you an idea of how this choice influences image quality, head over to the section: . Also, I wouldn't consider using 0.5 EV spacing or lower – I don't see any benefit, and covering the whole dynamic range of the scene would require tons of photos. Nor would I use anything larger than 2.0 EV, as such wide spacing results in photos that are slightly washed out and lack detail, in my opinion (the gap is just too big).
Below is a table of some of the most common types of scenes and the number of exposures needed to capture them properly at 1 EV spacing. If you're using 2 EV spacing, you can divide the number of photos by 2:
- Landscape on a foggy day: 1 to 3
- Landscape with clear sky: 3
- Landscape with sun in the frame: 3 to 5
- Landscape with overcast sky: 3
- Sunset/sunrise: 3 to 7
- Forest on a sunny day: 3 or more
- Interior without windows: 3 to 5
- Interior with windows: 7 or more
As you can see from the table above, the highest number of photos is required for very high-contrast scenes, e.g.:
- Sunsets and sunrises,
- Forests with deep shadows and light shafts,
- A cave with its exit showing the outdoors,
- Interiors with windows or doors.
What's more, an odd number of photos is used most frequently, as this gives us an equal number of photos for shadows and highlights, plus one more for the mid-tones. For 5 photos and 2 EV spacing, this situation is depicted in the image below:
As you can see from the above image, getting details in the shadows requires positive exposure compensation (e.g. +2 EV, +4 EV), i.e. overexposing a photo so that its darkest parts show enough detail. Getting details in the highlights requires negative compensation (e.g. -2 EV, -4 EV), i.e. taking an underexposed shot so that its brightest parts show enough detail.
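The symmetric compensation values for an odd-length sequence can be generated directly – a trivial sketch, with names of my own choosing:

```python
def bracket_offsets(count: int, spacing_ev: float):
    """EV compensation values for an odd-length bracketed sequence:
    equal numbers of under- and over-exposed frames around 0 EV."""
    if count % 2 == 0:
        raise ValueError("use an odd number of frames")
    half = count // 2
    return [spacing_ev * i for i in range(-half, half + 1)]

# The 5-photo, 2 EV case described above:
print(bracket_offsets(5, 2.0))  # -> [-4.0, -2.0, 0.0, 2.0, 4.0]
```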
Avoid taking too many bracketed photos!
The discussion above could give you the impression that you should take as many images as possible. But the "more photos, the better" approach isn't ideal either, because:
- The more photos you take, the longer it takes to shoot them all. This can lead to noticeable differences between the first and the last photo of the sequence caused by movement in the scene (wind, people, vehicles), which can produce artifacts known as ghosts.
- There might be no visual difference between 5 and 50 photos (if 5 photos are sufficient to cover the dynamic range of the scene, 50 won't make it any better). In this case it's even possible that image quality will be degraded by the ghosting artifacts mentioned above.
- The more photos you use, the more memory is required to process them and the more time it takes (50 photos would require a lot of memory – believe me 🙂 ).
- When shooting hand-held it is very difficult to take more than 3 shots and have them properly aligned.
As already mentioned, these photos are then merged into an HDR photo, i.e. a photo with a much wider dynamic range than any of the input images. In Photomatix Pro and a number of other applications it's just a matter of loading the whole sequence – the merging process is fully automatic (although we can still set some options to influence it).
What about single photo HDR?
You have probably seen images called HDRs even though only one frame was used to capture them (no bracketed photos at all). Technically speaking, such images aren't High Dynamic Range photos.
Although bracketed shots give the best results in most cases (unless one photo covers the whole dynamic range of the scene), Photomatix Pro and other applications also let you load and tone-map a single photo. It doesn't even need to be RAW – it can be a 16- or 8-bit TIFF or even a JPEG file. The benefit of using a single exposure is that it allows us to shoot handheld and completely eliminates the problem of ghosts. Of course, it won't be a real HDR photo, but the results are often still quite good.
Below is an example photo taken on the island of Fuerteventura and tone-mapped from a single file:
What's more, some single photos can really benefit from tone-mapping. One example I'd like to mention here is star photos (example above) – I found that tone-mapping the sky brings out the stars and the Milky Way, even if they were rather faint in the original capture.
Now we know, more or less, what HDR photography is and why we need it.
I already mentioned that we need a tool to create an HDR image from bracketed photos and then tone-map it. At the moment there are a lot of options; each program offers slightly different capabilities and different tone-mapping algorithms, which results in slightly different output (that's why many photographers own more than one application). Some applications also handle particular scenes better than others, while failing at some other scenes.
Personally, for almost all my HDR photos I use Photomatix Pro 5.1 (plus Lightroom, Photoshop and Topaz plug-ins for final tweaks), and this is the program this tutorial focuses on entirely. I have tried many other apps, but Photomatix Pro gives me the look I'm after, so I see no need to change.
You can get Photomatix Pro here:
However, many concepts and ideas from this tutorial can be used in other applications too. If you don't have Photomatix Pro but would like to follow along, you can download a free trial from the HDRsoft website.
Note that this trial never expires, but it will add a watermark to your tone-mapped and fused images. The good news is that if you decide to purchase Photomatix later, you will be able to remove the watermark – no need to reprocess your images.
There is also Photomatix Essentials (formerly known as Photomatix Light) – now in version 4.0 – which is slightly easier for beginners to use, yet it uses the same powerful algorithms as Photomatix Pro, so you can achieve similar results with both applications.
Before we dive into post-processing HDR images in Photomatix Pro, I’d like to mention a few issues that you will surely encounter sooner or later. Dealing with them is easy but it’s better to know how to do that beforehand.
As you probably know, HDR photography is unfortunately rather infamous for a few issues present in many photos. Because of them, a lot of people assume that every HDR photo has them. However, all these issues can be solved quite easily – the fact that they appear in so many photos is due to mistakes made by photographers, not to HDR photography itself.
Below I list these typical problems with a short description of each. You will find more comprehensive advice on how to get rid of each of them in later parts of this tutorial (e.g. when I describe the settings I use for tone-mapping).
The first big issue is noise. If we use local tone-mapping operators (like Details Enhancer in Photomatix Pro, for instance), it is essential to pay extra attention to it. As local tone-mapping operators enhance local details, they enhance noise at the same time (there is no way to distinguish noise from very fine texture). To prevent this, do the following:
- Cover the whole dynamic range of the scene. If there is enough information about the shadows, noise won't be prominent. If the brightest photo of the sequence doesn't expose the shadows enough, noise from it will be transferred to the final image. As I mentioned before, your brightest photo should have the shadows in the midtones part of the histogram.
- Use low ISO values whenever possible. But that doesn't mean the lowest values: in some cases ISO 50 or ISO 100 can have more noise than ISO 200. Check your camera's lowest native ISO and use it when taking HDR photos.
- Avoid settings that are too strong, especially ones that enhance local detail. This is particularly important if you used higher ISO values (despite what I wrote above).
- Reduce noise in your source images before loading them into Photomatix Pro. If there is no noise in the source images, none will be amplified.
The next issue is vertical and horizontal movement between the shots of the bracketed sequence, caused by shooting hand-held or by shooting on a tripod in difficult, windy conditions. This can cause problems with photo alignment.
- To minimize this movement, it is a good idea to use a sturdy tripod and a remote shutter release (cable or wireless).
- If there is still movement (or you had to shoot hand-held), use alignment option in Photomatix Pro.
Oversaturated look

Another common mistake is dragging the saturation sliders in HDR software all the way to the maximum. This gives the colors a grungy or surreal look – they scream "HDR", making the photo look very unrealistic! In the case of Details Enhancer, I use Saturation values in the range 40–50, and for other processing methods I usually use the default values.
Be aware that, in the case of Details Enhancer, this slider behaves a bit differently from Saturation in e.g. Lightroom or Photoshop, in the sense that other settings affect its behaviour: a lower Strength value allows you to use higher Saturation values, while higher Strength values require reducing Saturation to keep a realistic look.
Another thing is that particular colors (especially reds and greens) might still look oversaturated despite a rather low Saturation value in Photomatix. The fix is very easy: use Finishing Touch in Photomatix, the Saturation sliders in Lightroom, or a Hue/Saturation adjustment layer in Photoshop to decrease the saturation of those particular colors.
Take a look at the example photo on the right. The blues of the sky and the reds of the tram are very unnatural here (way oversaturated). In this case I would slightly decrease both Vibrance and Saturation in Lightroom after processing the image in Photomatix Pro.