A camera’s image sensor has one job – to record light. However, a sensor can generally capture only a limited range of light, from shadows to highlights. When the scene’s actual range exceeds the sensor’s ability, that’s “high dynamic range” or HDR.
Here are two recent examples where the range of light exceeded my camera sensor’s ability. The first is a sunset. No surprise – the highlights are super bright. The second example is less obvious – the surface of a lake reflects blue sky in some areas and elsewhere the light simply falls off to black.
The solution is the same in both cases: capture multiple exposures and then combine them in post-processing. Many cameras offer this as a built-in option; even my smartphone camera includes the feature. The results can be disappointing, though. My own experience with in-camera HDR processing is 50/50 at best. The end result disappoints so often that I routinely don’t trust the camera to do it. Instead, I do HDR post-processing using software on a desktop computer.
This technique generally requires that the camera not move between the separate exposures, so that the composition of the captures stays exactly the same. If the camera moves slightly, that is usually not a problem, because the frames can be aligned during post.
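To make the “combine them in post” step concrete, here is a minimal sketch of exposure fusion in the style of the Mertens method, which is one common way HDR software merges bracketed shots. It assumes two aligned grayscale “exposures” of the same scene, represented as simple lists of pixel values between 0.0 and 1.0; the frame data is invented for illustration, and real tools (for example OpenCV’s MergeMertens) work on full color images with more sophisticated weighting.

```python
import math

def well_exposedness(v, sigma=0.2):
    """Weight pixels near mid-gray (0.5) highest; crushed or blown pixels lowest."""
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Per-pixel weighted average of the exposures, weighted by well-exposedness."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights) or 1e-9
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Hypothetical bracketed pair: the dark frame holds highlight detail,
# the bright frame holds shadow detail.
dark   = [0.02, 0.10, 0.45, 0.60]
bright = [0.30, 0.55, 0.95, 1.00]
merged = fuse([dark, bright])
```

Each merged pixel leans toward whichever exposure recorded that spot closest to mid-gray, which is exactly why the blown highlights come from the dark frame and the black shadows come from the bright one.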
This technique doesn’t work with video. When shooting video, the camera angle usually changes during the shot; shooting the scene a second time produces a different composition, and the two captures will never align. For video, the solution to HDR is different – capture the shot just once, but use a special very low-contrast camera mode, a logarithmic profile such as D-Log. Straight out of the camera, that footage looks truly awful. It must be post-processed, expanding the contrast range to something that appears correct.
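The “expanding the contrast range” step can be sketched very simply. A real grade would apply the camera maker’s conversion LUT, but the idea is the same as this hypothetical linear stretch: the flat log footage occupies a narrow slice of the tonal range, and post-processing maps that slice back out to full black-to-white. The `black` and `white` endpoints below are invented values for illustration.

```python
def expand_contrast(pixels, black=0.15, white=0.85):
    """Map a log clip's narrow tonal range [black, white] back to full [0, 1]."""
    scale = white - black
    return [min(1.0, max(0.0, (p - black) / scale)) for p in pixels]

# Hypothetical flat, washed-out log frame straight out of the camera.
flat_log_frame = [0.20, 0.45, 0.70, 0.80]
graded = expand_contrast(flat_log_frame)
```

After the stretch, the darkest and brightest pixels sit much farther apart, which is why graded log footage regains normal-looking contrast.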