HDRI - Camera Response Curve

Hi All,

Happy New Year first off.

I am making a first pass through the "High Dynamic Range Imaging" book that Greg co-authored. I am wondering about methods for generating a good response curve for a given camera. The book gives a variety of tips and suggestions; however, I am curious about good scenes.

In a prior email footnote from Greg (a FAQ item relating to camera response curves and Photosphere), he suggests shooting a scene looking out a window during daytime with about 10 exposures. What about shooting a controlled scene such as a Macbeth Color Checker or a Kodak gray scale chart (can't remember the name of this) outdoors under daytime conditions, or using some type of interior fixed lighting setup?

-Jack


--
# Jack de Valpine
# president
#
# visarc incorporated
# http://www.visarc.com
#
# channeling technology for superior design and construction


Hi Jack,

I've just been playing around with this, myself. To get a good response curve, it is best to start with a scene that has a daylight color balance, as this is the design point for most cameras, and some models end up boosting the blue too much if you calibrate under incandescent lighting. For this reason, I prefer calibrating with natural light, as opposed to a controlled condition with incandescent or fluorescent lighting. In any case, you should fix a white balance setting on the camera appropriate to the test lighting. (Memory aid: If the white balance isn't fixed, then it's broken.)

Color charts are of limited use, except as a means to verify your calibration. The more images you take in a sequence, and the more closely they are spaced, the less you rely on the recorded camera response function. This is because the overlap of the images serves (together with the known shutter speeds) to give you an accurate result, regardless. It is more important to have an absolute calibration value if you are after real numbers, and for this a luminance measurement on a reference card of known reflectance is invaluable.

-Greg


From: Jack de Valpine <[email protected]>
Date: January 9, 2006 9:19:54 AM PST

Hi All,

Happy New Year first off.

I am making a first pass through the "High Dynamic Range Imaging" book that Greg co-authored. I am wondering about methods for generating a good response curve for a given camera. The book gives a variety of tips and suggestions, however I am curious about good scenes.

In a prior email footnote from Greg (a faq item relating to camera response curves and Photosphere), he suggests shooting a scene looking out a window during daytime with about 10 exposures. What about shooting a controlled scene such as a Macbeth Color Checker or a Kodak gray scale chart (can't remember the name of this) outdoors under daytime conditions or using some type of interior fixed lighting setup?

-Jack

Hi Greg,

Thanks for the follow-up on this. I think that I have gotten the white balance part correct. I always set it by hand (for example to the sun icon) so the auto white balance feature is effectively off. And I have used aperture priority on the camera; however, I have used auto bracketing, so maybe I am not getting enough samples? It sounds like perhaps the best thing to do is set the aperture manually and vary the exposure time manually as well in order to generate your suggested ~10 samples. Although doing everything manually involves a lot of unnecessary handling of the camera.

So as to scene, clear sunny sky conditions would be ideal I suppose, with a range from shadowed features to directly illuminated features... Sorry for this next... So how do you use the luminance measure from the reference card to further calibrate the response curve?

Once a response curve has been generated is there some way to check or validate it (if that makes any sense)?

In the book (the new one;-) you also indicate that the darkest exposure should have no RGB values greater than ~200 and the lightest no values less than ~20. I could certainly figure out a filter routine with pcomb; however, there must be a simple way to do this with ImageMagick. The point being to run a quick preprocess check on the bounding images. Any takers or suggestions for how to do this easily?

-Jack

Gregory J. Ward wrote:


Hi Jack,

I've just been playing around with this, myself. To get a good response curve, it is best to start with a scene that has a daylight color balance, as this is the design point for most cameras, and some models end up boosting the blue too much if you calibrate under incandescent lighting. For this reason, I prefer calibrating with natural light, as opposed to a controlled condition with incandescent or fluorescent lighting. In any case, you should fix a white balance setting on the camera appropriate to the test lighting. (Memory aid: If the white balance isn't fixed, then it's broken.)

Color charts are of limited use, except as a means to verify your calibration. The more images you take in a sequence, and the more closely they are spaced, the less you rely on the recorded camera response function. This is because the overlap of the images serves (together with the known shutter speeds) to give you an accurate result, regardless. It is more important to have an absolute calibration value if you are after real numbers, and for this a luminance measurement on a reference card of known reflectance is invaluable.

-Greg

From: Jack de Valpine <[email protected]>
Date: January 9, 2006 9:19:54 AM PST

Hi All,

Happy New Year first off.

I am making a first pass through the "High Dynamic Range Imaging" book that Greg co-authored. I am wondering about methods for generating a good response curve for a given camera. The book gives a variety of tips and suggestions, however I am curious about good scenes.

In a prior email footnote from Greg (a faq item relating to camera response curves and Photosphere), he suggests shooting a scene looking out a window during daytime with about 10 exposures. What about shooting a controlled scene such as a Macbeth Color Checker or a Kodak gray scale chart (can't remember the name of this) outdoors under daytime conditions or using some type of interior fixed lighting setup?

-Jack

_______________________________________________
Radiance-general mailing list
[email protected]
http://www.radiance-online.org/mailman/listinfo/radiance-general


Hi Jack, all, Happy Happy, yadda yadda...

Jack de Valpine wrote:

Hi Greg,

Thanks for the follow-up on this. I think that I have gotten the white balance part correct. I always set it by hand (for example to the sun icon) so the auto white balance feature is effectively off. And I have used aperture priority on the camera; however, I have used auto bracketing, so maybe I am not getting enough samples? It sounds like perhaps the best thing to do is set the aperture manually and vary the exposure time manually as well in order to generate your suggested ~10 samples. Although doing everything manually involves a lot of unnecessary handling of the camera.

Yeah, for this calibration sequence, you definitely need more samples than a typical auto-bracket will do. Fixed aperture, varied exposure, and shoot from a tripod if you can. The Canon Rebels are really nice because you can adjust the exposure with a little roller wheel; once you have the WB and aperture set, you just roll the wheel & click, roll the wheel & click, etc. My Olympus requires a lot more futzing with menus to do manual exposure and it's impossible to shoot a decent sequence w/o a tripod.

Once a response curve has been generated is there some way to check or validate it (if that makes any sense)?

I dunno about validating a response curve, but I'd expect you could measure the scene with a luminance meter while you're shooting it, and then compare those values to some samples from the hdr image.

In the book (the new one;-) you also indicate that the darkest exposure should have no RGB values greater than ~200 and the lightest no values less than ~20. I could certainly figure out a filter routine with pcomb, however there must be a simple way to do this with ImageMagick. The point being to run a quick preprocess check on the the bounding images. Any takers or suggestions for how to do this easily?

I just open the high & low images in Photoshop and look at the histogram. Are you trying to figure out a way to automate this step from the command line?

Jack,

It sounds like
perhaps the best thing to do is set the aperture manually and vary the
exposure time manually as well in order to generate your suggested ~10
samples. Although doing everything manually involves a lot of
unnecessary touch of the camera.

In my experience, this is indeed the best thing to do.

So as to scene, clear sunny sky conditions would be ideal I suppose with
a range from shadowed features to directly illuminated features... Sorry
for this next... So how do you use the luminance measure from the
reference card to further calibrate the response curve?

Have a look at
http://luminance.londonmet.ac.uk/webhdr/calibrate.shtml

In the header of the RGBE image you'll find an exposure value. Multiply
this by your calibration factor. This CF will be roughly one, but not
quite.

In the book (the new one;-) you also indicate that the darkest exposure
should have no RGB values greater than ~200 and the lightest no values
less than ~20. I could certainly figure out a filter routine with pcomb,
however there must be a simple way to do this with ImageMagick. The
point being to run a quick preprocess check on the bounding images.
Any takers or suggestions for how to do this easily?

The netpbm utilities allow you to easily create histograms. I use this in
WebHDR: For each uploaded image, the histogram is plotted, and the
exposure value given. Give it a try, if you like.

I understand that ImageMagick can also give you histograms.
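For a quick command-line check of Jack's bounds without opening Photoshop, here is a minimal sketch in pure Python (an editorial illustration, not from the thread; it assumes the bracketed shots have first been converted to binary P6 PPMs, e.g. with netpbm's jpegtopnm, and it does not handle `#` comments in the PPM header):

```python
def ppm_extrema(data):
    """Min and max 8-bit channel value in a binary (P6) PPM image.

    A stand-in for the ImageMagick/netpbm one-liners; header comments
    are not handled.
    """
    fields, i = [], 0
    while len(fields) < 4:                    # magic, width, height, maxval
        while data[i:i+1].isspace():
            i += 1
        j = i
        while not data[j:j+1].isspace():
            j += 1
        fields.append(data[i:j])
        i = j
    if fields[0] != b"P6" or int(fields[3]) != 255:
        raise ValueError("expected an 8-bit binary PPM")
    w, h = int(fields[1]), int(fields[2])
    pixels = data[i + 1 : i + 1 + 3 * w * h]  # single whitespace after maxval
    return min(pixels), max(pixels)

def bracket_ok(darkest, lightest):
    """Book rule of thumb: the darkest exposure should have no values
    above ~200, the lightest none below ~20."""
    return darkest[1] <= 200 and lightest[0] >= 20
```

Running `ppm_extrema` over just the first and last shots of a sequence gives a quick pass/fail on the bracket range before feeding everything to hdrgen.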

Cheers

Axel

Hi Axel and Rob,

Thanks for the follow-up.

Axel Jacobs wrote:

So as to scene, clear sunny sky conditions would be ideal I suppose with
a range from shadowed features to directly illuminated features... Sorry
for this next... So how do you use the luminance measure from the
reference card to further calibrate the response curve?
Have a look at
http://luminance.londonmet.ac.uk/webhdr/calibrate.shtml

In the header of the RGBE image you'll find an exposure value. Multiply
this with your calibration factor. This CF will be roughly one, but not
quite.

Hey, this is great! Though I am still not exactly clear on what to do with the CF if it is other than 1.0. Am I using this multiplier to correct the exposure value of the image?

In the book (the new one;-) you also indicate that the darkest exposure
should have no RGB values greater than ~200 and the lightest no values
less than ~20. I could certainly figure out a filter routine with pcomb,
however there must be a simple way to do this with ImageMagick. The
point being to run a quick preprocess check on the bounding images.
Any takers or suggestions for how to do this easily?
The netpbm utilities allow you to easily create histograms. I use this in
WebHDR: For each uploaded image, the histogram is plotted, and the
exposure value given. Give it a try, if you like.

Great! I will check it out.



Jack,

Hey this is great! Though I am still not exactly clear what to do with
the CF if it is other than 1.0? Am I using this multiplier to correct
the exposure value of the image?

Yes. You multiply the exposure value in the RADIANCE header by this
Calibration Factor.

Axel

Hi Jack, Axel & Rob:

From: Jack de Valpine <[email protected]>

Axel Jacobs wrote:

So as to scene, clear sunny sky conditions would be ideal I suppose, with a range from shadowed features to directly illuminated features... Sorry for this next... So how do you use the luminance measure from the reference card to further calibrate the response curve?

Have a look at http://luminance.londonmet.ac.uk/webhdr/calibrate.shtml In the header of the RGBE image you'll find an exposure value. Multiply this with your calibration factor. This CF will be roughly one, but not quite.

Hey this is great! Though I am still not exactly clear what to do with the CF if it is other than 1.0? Am I using this multiplier to correct the exposure value of the image?

You can actually edit the header using the getinfo command in the following sequence:

getinfo < orig.hdr > fixed.hdr
vi fixed.hdr
getinfo - < orig.hdr >> fixed.hdr

In your editor (vi in the example above), either replace the EXPOSURE= line with a corrected value, or add a new EXPOSURE= line with the correction, equal to image_value/measured_value.
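Greg's correction can be sketched numerically (an editorial illustration; the luminance readings are made up, and it relies on the fact that multiple EXPOSURE= lines in a Radiance header multiply together):

```python
def calibration_factor(image_value, measured_value):
    """Extra EXPOSURE= line that brings the image into agreement with the meter.

    image_value:    luminance the HDR currently reports for the reference card
    measured_value: luminance-meter reading of the same card
    """
    return image_value / measured_value

# hypothetical readings: image says 112 cd/m^2, meter says 100 cd/m^2
cf = calibration_factor(112.0, 100.0)
print(f"EXPOSURE={cf}")   # append this line to the header via the getinfo trick
```

Appending the printed line with the getinfo sequence above rescales every pixel that Radiance tools subsequently report.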

Photosphere makes this easy by allowing you to select a region and enter the correct luminance value via the pop-up "Calibration" menu item on the "Apply" button of the image display window. You also have the option of applying this calibration factor to the camera's recorded response function, so it will adjust future exposure sequences from the same camera.

In the book (the new one;-) you also indicate that the darkest exposure should have no RGB values greater than ~200 and the lightest no values less than ~20. I could certainly figure out a filter routine with pcomb, however there must be a simple way to do this with ImageMagick. The point being to run a quick preprocess check on the the bounding images. Any takers or suggestions for how to do this easily?

The netpbm utilities allow you to easily create histograms. I use this in WebHDR: For each uploaded image, the histogram is plotted, and the exposure value given. Give it a try, if you like.

Great! I will check it out.

Photosphere also has a histogram feature, as well as live false color luminance display. It might be worth $500 to get a Mac mini -- cheaper than a copy of Photoshop.

-Greg

Hi Jack,

Is it you who has a 'Digital Rebel'? If so, then I never realised you were
into German Nutcrackers. Nice view outta your window.

How did you find the interactive Luminance Map? Did it work for you?

Axel

Hi Axel,

I guess the rest of us missed the message you're referring to, so maybe this was posted to the group by mistake...

The false color feature of Photosphere is new, thanks to the generous support of the Lawrence Berkeley Laboratory, under a contract with NYSERDA for the New York Times. You need to download the latest version from www.anyhere.com and find it in the "View" menu.

-Greg


From: "Axel Jacobs" <[email protected]>
Date: January 9, 2006 5:11:17 PM PST

Hi Jack,

Is it you who has a 'Digital Rebel'? If so, then I never realised you were
into German Nutcrackers. Nice view outta your window.

How did you find the interactive Luminance Map? Did it work for you?

Axel

Apologies to the list, my last message should have gone straight to Jack.

Axel

Hi all,

I thought I might as well add another thought to this thread:

Given a dozen or so subtly different RSP files for the same camera, I am
wondering how one would go about combining them into one, averaging them,
so to speak.

Gut feeling tells me that it's probably not as simple as taking the
arithmetic average of the 12 individual coefficients across the dozen
RSPs.

Axel
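One note on Axel's question: if the RSP files all share the same polynomial order, the coefficient-wise average is in fact exactly the pointwise average of the curves, because evaluating a polynomial is linear in its coefficients. The subtlety only arises when the files differ in order or normalization, in which case you would sample each curve and refit. A quick sketch (pure Python; the highest-order-first coefficient layout is an assumption about the .rsp format):

```python
def poly_eval(coeffs, x):
    """Horner evaluation, highest-order coefficient first."""
    v = 0.0
    for c in coeffs:
        v = v * x + c
    return v

def average_curves(curve_list):
    """Coefficient-wise mean of same-order response polynomials.
    By linearity, this equals averaging the curves pointwise."""
    n = len(curve_list)
    return [sum(cs[i] for cs in curve_list) / n for i in range(len(curve_list[0]))]
```

For RSPs of differing order, the safer route is to evaluate each curve on a dense grid over [0, 1], average the samples, and refit a polynomial of the desired order.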

Hey Greg,

Thanks for the added info.

Photosphere also has a histogram feature, as well as live false color luminance display. It might be worth $500 to get a Mac mini -- cheaper than a copy of Photoshop.

You know this might actually be too hard to pass up!

-Jack



Hi Axel,

Axel Jacobs wrote:

Hi Jack,

Is it you who has a 'Digital Rebel'? If so, then I never realised you were
into German Nutcrackers. Nice view outta your window.

Sorry, not sure what you are referring to here?


How did you find the interactive Luminance Map? Did it work for you?

Axel



Greg et al -

Regarding Photosphere and the creation of HDR images from multiple-exposure LDR images: is there any advantage in using higher-bit RAW or TIFF (12-16 bit) linear LDRs when combining them into an HDR, or does it not matter, or does it actually make the process harder or less efficient, etc.? Just curious.

I've been reading and rereading the Tone Mapping chapters in the new book trying to get a theoretical and physical basis for the machinery working behind the scenes in Photosphere/hdrgen. Pretty cool stuff.

Thanks,

kirk


------------------------------

Kirk L. Thibault, Ph.D.
[email protected]

p. 215.271.7720
f. 215.271.7740
c. 267.918.6908

skype. kirkthibault

On Jan 9, 2006, at 12:19 PM, Jack de Valpine wrote:

Hi All,

Happy New Year first off.

I am making a first pass through the "High Dynamic Range Imaging" book that Greg co-authored. I am wondering about methods for generating a good response curve for a given camera. The book gives a variety of tips and suggestions, however I am curious about good scenes.

In a prior email footnote from Greg (a faq item relating to camera response curves and Photosphere), he suggests shooting a scene looking out a window during daytime with about 10 exposures. What about shooting a controlled scene such as a Macbeth Color Checker or a Kodak gray scale chart (can't remember the name of this) outdoors under daytime conditions or using some type of interior fixed lighting setup?

-Jack



Hi Kirk,

There generally isn't much difference between camera RAW and JPEG in terms of resolution. The main difference is that we don't need to second-guess what the camera is doing or figure out the response function (in most cases).

Working on the NYSERDA project again, I've developed a prototype script based on Dave Coffin's dcraw.c that converts camera RAW directly into 24-bit BMP with a known gamma, then takes this into hdrgen to compute the final HDR image. I have done numerous experiments with this process and compared them to dealing only with JPEGs, and there's not a lot of difference between the two methods. In general, I would say that the RAW translation offers better color fidelity, but suffers from greater noise. The color difference is attributable to the odd color manipulations that happen in the camera. The noise difference no doubt has a lot to do with the noise reduction that happens in many digital cameras, which is bypassed using RAW directly.

-Greg

P.S. Bear in mind that although most RAW image files are 12 bits, they are 12 bits of linear range, which isn't really any better than 8 bits using a standard 2.2 gamma.
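Greg's P.S. can be checked with a quick back-of-envelope comparison (an editorial sketch under an idealized model: perfect quantization, no sensor noise): compare the relative size of one quantization step for a 12-bit linear encoding against an 8-bit gamma-2.2 encoding at several signal levels.

```python
def rel_step_linear(signal, bits=12):
    """Relative size of one quantization step for a linear encoding
    at a given signal level (signal in (0, 1])."""
    return (1.0 / 2 ** bits) / signal

def rel_step_gamma(signal, bits=8, gamma=2.2):
    """Relative step for a gamma-encoded format: with
    code = levels * signal**(1/gamma), the relative signal change
    per code step works out to gamma / code."""
    code = (2 ** bits - 1) * signal ** (1.0 / gamma)
    return gamma / code

for s in (1.0, 0.1, 0.01, 0.001):
    print(f"signal {s:6.3f}: 12-bit linear {rel_step_linear(s):7.2%}, "
          f"8-bit gamma 2.2 {rel_step_gamma(s):7.2%}")
```

Linear 12-bit wins handily near full scale, but down in the shadows (where response-curve recovery matters most) its relative step grows until gamma-encoded 8-bit is comparable or better, which is the point of the P.S.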


From: Kirk Thibault <[email protected]>
Date: January 10, 2006 7:58:47 AM PST

Greg et al -

Regarding Photosphere and the creation of HDR images from multiple exposure LDR images: is there any advantage to the process by using higher bit RAW or TIFF (12-16 bit) linear LDRs when combining them into an HDR, or does it not matter or actually make the process harder or less efficient, etc.? Just curious.

I've been reading and rereading the Tone Mapping chapters in the new book trying to get a theoretical and physical basis for the machinery working behind the scenes in Photosphere/hdrgen. Pretty cool stuff.

Thanks,

kirk

Makes sense. I was curious about this simply because I wondered how robust the Camera Response Curves are in accounting for camera-based pre-processing. Some cameras do things to the RAW data to enhance contrast, boost saturation etc. and then write the JPEG. Some cameras allow the user to define the amount of this processing (I'm thinking of the Canon Digital Rebel that I shoot with, but I imagine that many cameras now offer user control of these parameters in some form). Do the camera response curves essentially account for these settings inasmuch as they affect the way the camera "exposes" the RAW image data from the sensor (referring to your statement below about second-guessing what the camera is doing)?

I know that generating HDR images from multi-exposure LDR images "demands" that auto-white balance not be used - are there similar suggestions for the camera-based pre-processing parameters like contrast, saturation, etc. that may affect HDR generation, or is that essentially what the curves compensate for (assuming that the same exact pre-processing camera parameters are applied to each image)?

Because these pre-processing algorithms are automated, would they necessarily be applied consistently across the entire range of exposures, or could they introduce some sort of changing response similar to a changing white balance that might affect the combination of the LDRs and the extraction of a response curve?

I suppose I could experiment by setting the parameters on my Digital Rebel to combinations of extrema in a relatively controlled lighting situation and see if it matters, but I'm not sure I would know what to look for to measure any differences that might occur. Clearly my limited understanding of the physics is being exposed here (my limited understanding has a very large apparent dynamic range) - which may mean that in my dimwitted approach, it may not matter anyway!
:-)

Thanks,

kirk



On Jan 10, 2006, at 11:56 AM, Greg Ward wrote:

Hi Kirk,

There generally isn't much difference between camera RAW and JPEG in terms of resolution. The main difference is that we don't need to second-guess what the camera is doing or figure out the response function (in most cases).

Working on the NYSERDA project again, I've developed a prototype script based on Dave Coffin's dcraw.c that converts camera RAW directly into 24-bit BMP with a known gamma, then takes this into hdrgen to compute the final HDR image. I have done numerous experiments with this process and compared them to dealing only with JPEGs, and there's not a lot of difference between the two methods. In general, I would say that the RAW translation offers better color fidelity, but suffers from greater noise. The color difference is attributable to the odd color manipulations that happen in the camera. The noise difference no doubt has a lot to do with the noise reduction that happens in many digital cameras, which is bypassed using RAW directly.

-Greg

P.S. Bear in mind that although most RAW image files are 12 bits, they are 12 bits of linear range, which isn't really any better than 8 bits using a standard 2.2 gamma.

From: Kirk Thibault <[email protected]>
Date: January 10, 2006 7:58:47 AM PST

Greg et al -

Regarding Photosphere and the creation of HDR images from multiple exposure LDR images: is there any advantage to the process by using higher bit RAW or TIFF (12-16 bit) linear LDRs when combining them into an HDR, or does it not matter or actually make the process harder or less efficient, etc.? Just curious.

I've been reading and rereading the Tone Mapping chapters in the new book trying to get a theoretical and physical basis for the machinery working behind the scenes in Photosphere/hdrgen. Pretty cool stuff.

Thanks,

kirk


Hi Kirk,

You've jammed a lot of questions in here, so let me try to take them one or two at a time....

From: Kirk Thibault <[email protected]>
Date: January 10, 2006 9:36:39 AM PST

Makes sense. I was curious about this simply because I wondered how robust the Camera Response Curves are in accounting for camera-based pre-processing. Some cameras do things to the RAW data to enhance contrast, boost saturation etc. and then write the JPEG. Some cameras allow the user to define the amount of this processing (I'm thinking of the Canon Digital Rebel that I shoot with, but I imagine that many cameras now offer user control of these parameters in some form). Do the camera response curves essentially account for these settings inasmuch as they affect the way the camera "exposes" the RAW image data from the sensor (referring to your statement below about second-guessing what the camera is doing)?

Essentially correct. So long as you stick to the same settings for your exposure sequences, and the camera is not automatically adjusting the tone curve, you should get reproducible results out of Photosphere/hdrgen. Typically, the aperture-preferred or manual mode you will use for exposure bracketing precludes automatic tone curves.

I know that generating HDR images from multi-exposure LDR images "demands" that auto-white balance not be used - are there similar suggestions for the camera-based pre-processing parameters like contrast, saturation, etc. that may affect HDR generation or is that essentially what the curves compensate for (assuming that the same exact pre-processing camera parameters are applied to each image)?

Consistency is the main thing. After that, you are better off without aggressive color and tone enhancement, and sharpening. Boosting color saturation (as many cameras like to do) intermingles the color channels in nasty, non-invertible ways. Boosted contrast limits the dynamic range of each exposure. Sharpening makes image alignment errors more noticeable.

Because these pre-processing algorithms are automated, would they necessarily be applied consistently across the entire range of exposures or could they introduce some sort of changing response similar to a changing white balance that might affect the combination of the LDRs and the extraction of a response curve?

As I said above, the aperture-preferred and manual modes of most cameras disable per-image tone dickering.

I suppose I could experiment by setting the parameters on my Digital Rebel to combinations of extrema in a relatively controlled lighting situation and see if it matters, but I'm not sure I would know what to look for to measure any differences that might occur. Clearly my limited understanding of the physics is being exposed here (my limited understanding has a very large apparent dynamic range) - which may mean that in my dimwitted approach, it may not matter anyway!

I spent the better part of a day last week taking shots of a Macbeth ColorChecker chart in the many modes of the Canon EOS 5D I have on loan from LBNL, and concluded that there were small differences to be seen, particularly in terms of color accuracy, but for most applications this is not going to be a make or break consideration. Stick with the most neutral, faithful, unaltered mode your camera offers, and JPEG should work fine.

-Greg

Thanks Greg! How'd you like the Canon 5D?


