I’m asking what I fear may be a very basic question. I just took several series of exposure-bracketed images, all with the same manual camera settings, same list of shutter speeds, etc., under different lighting conditions, and converted them to HDR using hdrgen. I expected the output HDR files to all have the same header information. However, when I call getinfo on the HDR files, they all have slightly different exposure values, and the exposure values also change if I cull overexposed JPG files before running hdrgen. So what is the exposure value, and how is it meant to be used?
And a side question: does anyone have documentation for the hdrgen command-line arguments?
Thanks!
Hi Nathaniel,
In general, hdrgen should create the same header EXPOSURE= line for identically-exposed input images, provided nothing (ISO, speed, aperture) changes for the two exposure sets, and the -x option is not used. (See man page here for options.)
Did you check that your camera does not automatically adjust the ISO setting? Some do, even in manual mode, and this is something to avoid. You can double-check the settings used either with Photoshop, or Preview, or exiftool.
Hdrgen takes the settings from the central exposure in a sequence, and uses this to determine the final multiplier for the HDR output. You can of course adjust the exposure subsequently to match them to each other if need be.
Hope this helps!
-Greg
Hi Greg,
That generally makes sense. So should the raw RGBE values stored in the HDR file be multiplied by the exposure value to obtain the radiance value at each pixel? Does exposure have some units to it, or is it unitless?
I also noticed that when I use pfilt to shrink the image (by a factor of 8 in each dimension), pfilt appends a new exposure value to the HDR file header, which is about two orders of magnitude greater than the exposure value calculated by hdrgen. Why is this?
Hi Nathaniel,
The exposure multipliers are unitless. All left-justified EXPOSURE= lines in the header should be multiplied together to get the actual exposure that was applied to the image subsequent to absolute calibration / rendering. With no exposure, the floating-point RGB values in a radiance (lower case) image should be in watts/sr/meter^2 averaged over some waveband. If you have a conversion to luminance (e.g., the standard is .265R + .670G + .065B in Radiance), then this value, multiplied by 179 lumens/watt and divided by the cumulative exposure, gets you to absolute candelas/meter^2.
Cameras tend to use CCIR-709 primaries, whose conversion works out to:
Luminance = (.213*R + .715*G + .072*B)*179/(product_of_exposures)
An important caveat is that XYZE picture files, as produced by ra_xyze, do not need the 179 lumens/watt conversion factor.
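Putting those pieces together, a small Python sketch of the conversion (the function name and argument layout here are my own, not part of any Radiance tool):

```python
def luminance_cd_m2(r, g, b, exposures):
    """Absolute luminance (cd/m^2) from the floating-point RGB of a
    Radiance picture that uses CCIR-709 primaries.

    `exposures` holds every EXPOSURE= value found in the header;
    they multiply together into one cumulative exposure."""
    product_of_exposures = 1.0
    for e in exposures:
        product_of_exposures *= e
    # 179 lm/W is Radiance's standard luminous efficacy constant
    return (0.213 * r + 0.715 * g + 0.072 * b) * 179.0 / product_of_exposures
```

With a single EXPOSURE=1 line in the header, an RGB value of (1, 1, 1) works out to 179 cd/m^2.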
Your pfilt command was probably without a -1 option, so it calculated an exposure over the image to get pixel values to average to 0.5. This new exposure when multiplied by the original EXPOSURE= line gives you a combined exposure for calibration.
This is all rather more complicated than it should be, mostly for historical reasons. Newer HDR formats use a “sample-to-nits” conversion factor stored in metadata in place of the cumulative EXPOSURE= lines.
Hope this helps!
-Greg
Hi @Greg_Ward,
This is all very helpful, thanks. I’m experimenting with a new camera, and whatever calibration it has built into it allows me to make HDR files with hdrgen that come very close to measured luminance levels. However, I’ve noticed that the camera response function generated by hdrgen comes out slightly different every time. Sometimes it generates 4 coefficients per channel, sometimes 3.
- Am I right to understand that a line like 3 1.08609 -0.464786 0.379661 -0.000969182 in the response file indicates 1.08609*x^3 - 0.464786*x^2 + 0.379661*x - 0.000969182?
- To get a response file that I can apply broadly, what is the ideal scene to feed hdrgen?
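For concreteness, here is how I am currently reading such a line (my own sketch; I’m assuming the leading integer is the polynomial order and the coefficients run from the highest power down to the constant term):

```python
def eval_response_line(line, x):
    """Evaluate one channel's line from an hdrgen response file,
    assuming the format '<order> c_n ... c_1 c_0' (highest power first)."""
    fields = line.split()
    order = int(fields[0])
    coeffs = [float(f) for f in fields[1:]]
    assert len(coeffs) == order + 1, "expected order+1 coefficients"
    y = 0.0
    for c in coeffs:          # Horner's rule
        y = y * x + c
    return y
```

Evaluated at x = 1, the example line above comes out to approximately 1.0, which seems consistent with the coefficients summing to 1.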
Hi Nathaniel,
hdrgen should be used once to generate a good calibration, then this file should be re-used if possible. The fit can be a polynomial of order 2, 3, 4, or 5, and it will vary for different inputs. Your interpretation of the coefficients is correct. They always add to 1, but you can multiply the coefficients by a constant to get a more exact absolute calibration for a particular camera. Here’s a tip from the file “quickstart_pf.txt” that accompanies Photosphere on how to set up a scene for HDR calibration:
- To create a high dynamic-range image, you need to start with
a set of “bracketed” exposures of a static scene. It is best if
you take a series of 10 or more exposures of an interior scene looking
out a window and containing some large, smooth gradients both inside
and outside, to determine the camera’s natural response function.
Be sure to fix the camera white balance so it doesn’t change, and
use aperture-priority or manual exposure mode to ensure that only
the speed is changing from one exposure to the next. For calibration,
you should place your camera on a tripod, and take your exposure series
starting from the longest shutter time and working to the shortest in
one-stop increments. Once you have created your image series,
load it into Photosphere directly – DO NOT PROCESS THE IMAGES
WITH PHOTOSHOP or any other program. Select the thumbnails,
then go to the “File → Make HDR…” menu. Check the box that says
“Save New Response”, and click “OK”. The HDR building process
should take a few minutes, and Photosphere will record the computed
response function for your camera into its preferences file, which
will save time and the risk of error in subsequent HDR images.
You will also have the option of setting an absolute calibration
for the camera if you have a measured luminance value in the scene.
This option is provided by the “Apply” button submenu when the
measured area is selected in the image. (Click and drag to select.)
Once an HDR image has been computed, it is stored as a temporary
file in 96-bit floating-point TIFF format. This file is quite
large, but the data will only be saved in this format if you
select maximum quality and save as TIFF. Otherwise, the 32-bit
LogLuv TIFF format will be preferred (or the 24-bit LogLuv format
if you set quality to minimum). You also have the option of saving
to the more common Radiance file format (a.k.a. HDR format), or
ILM’s 48-bit OpenEXR format. The newest format supported is an
extended JPEG, that takes just a little more space than a standard
JPEG and retains all the HDR information. If you choose not to save
the image in high dynamic-range, the tone-mapped display image can be
written out as a 24-bit TIFF or a standard JPEG image.
Continuing the exposure question with some further confusion: I’ve been experimenting with using FreeImage to manipulate HDR files, and found that it ignores the EXPOSURE= line when reading HDR files and writes EXPOSURE=0 when writing them. Is EXPOSURE=0 legal? I expected that other Radiance programs would get upset if I fed them an HDR image with EXPOSURE=0, but most do not. However, pfilt warns me “picture too dark or too bright.”
Edit: It looks like someone has reported this as a bug in FreeImage, but the update isn’t available for Python yet.
On utilities like pcomb, pvalue, rmtxop, and the new rcomb command, EXPOSURE=0 may cause divide-by-zero warnings and deliver black images with some option settings. So no, it’s not legal, and I’m not sure why they chose that as the default.
You can override it with “getinfo -r” to remove or replace it with a different value. The EXPOSURE setting is the scale factor from a calibrated radiance/irradiance value to whatever values are in the image.
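getinfo -r is the supported route. Purely to illustrate the header layout (text lines terminated by a blank line, followed by the resolution string and the RGBE scanlines), a throwaway sketch like this would drop the bogus line (not a robust parser, just a demonstration of the structure):

```python
def drop_zero_exposure(picture_bytes):
    """Strip any EXPOSURE=0 lines from a Radiance picture's header.
    Everything after the blank line that ends the header (resolution
    string plus RGBE scanlines) is passed through untouched."""
    head, sep, rest = picture_bytes.partition(b"\n\n")
    kept = []
    for ln in head.split(b"\n"):
        if ln.startswith(b"EXPOSURE=") and float(ln[9:].decode()) == 0.0:
            continue  # EXPOSURE=0 is not legal; remove it
        kept.append(ln)
    return b"\n".join(kept) + sep + rest
```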
-G
Hi Greg,
This topic is very useful.
I have a question about ‘product_of_exposures’ from reading this post.
Let’s say I have five images (shutter speeds: 4, 1, 1/4, 1/15, 1/60 s) to be merged into a single HDR; the R, G, and B values are then known.
I plan to compute luminance on my own. How do I get ‘product_of_exposures’?
My hypothesis is that it requires some conversion involving other camera settings.
Thanks,
I can refer you to this very old post from the original Radiance website:
filmspeed.html — Radsite
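Sketching from memory, the kind of conversion that page covers relates camera settings to absolute luminance via the standard reflected-light meter equation; treat this as a rough approximation, not that page’s exact derivation (K ≈ 12.5 cd/m² is the common meter calibration constant):

```python
def metered_luminance(f_number, shutter_s, iso, K=12.5):
    """Average scene luminance (cd/m^2) implied by a correct exposure
    at the given settings, via the reflected-light meter equation
    L = K * N^2 / (t * S)."""
    return K * f_number ** 2 / (shutter_s * iso)
```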
Cheers,
-Greg