Problems using hdrgen for HDRI generation

Hi all,

I am a student and I am new to HDRI generation and processing, so I need your help. :)

I am trying to use hdrgen (for Linux) to generate an HDR image from a series of photographs of a static scene, each taken with a different exposure.
I have run into some problems and have a few questions about the whole process.

1) First of all, I would like to know the best way to take photographs with different exposures.
If I understand correctly there are two ways: either change the f-number and keep the shutter speed fixed, or vary the shutter speed and keep the f-number fixed.
Which of the two ways is best?

2) Are "Lens Aperture", "F-number" and "F-stop" same thing?
Are "exposure time" and "shutter speed" same thing?
I know that these questions may seem to be "stupid" to most of you.. but as I said I am not familiar at all, with all these ...

The following questions are about the hdrgen software.
I have used it with different combinations of images.
Sometimes the HDR image is generated but it is not clear at all and the scenes are not aligned, and other times the HDR image is not generated at all.

3) What is the problem
when there is a warning "Trouble finding HDR patches*****"?
And when there is a warning "Poor convergence for order 1 fit"? (Does the 1, or whatever number X appears there, refer to a problem with the Xth image in the command arguments?)

4) What is the problem when there is an error
"Cannot solve for response function"?
Is it because it cannot generate the camera's response-function file from that particular series of photographs?

5) One problem with the resulting HDR images (in the cases where an image was generated)
was that they came out somewhat "green". What can cause that?

6) My photographs are not perfectly aligned.
As I understand it, hdrgen uses an algorithm to align the photographs.
But I noticed that when the alignment algorithm is enabled (NOT using the -a option) the alignment gets worse,
and when I disable it (using the -a option) the result is better, but still not good (I guess because the original photographs are not aligned).

Is there anything I can do about that?
Does the algorithm have limits on how misaligned the original images can be?

I would appreciate it if you could answer some (or all ;-) ) of these questions.
Regards,
Despina


Hi,
here is something:

-1- You need to vary the exposure time and NOT the aperture, since depth of field changes
with the aperture setting.
(Depth of field can be loosely described as how much of the scene is in focus
beyond just the foreground.)
Furthermore, small apertures reduce some lens artefacts such as coma (light
sources get elongated near the edge of the frame; expensive
lenses don't have this problem)...
As a general suggestion for luminance measurement I would recommend using
large f-numbers.

-2- Aperture = f-stop = f-number, which is a different thing from exposure time = shutter speed. (There is a small numeric sketch of the relationship after these answers.)

Basically, there is a curtain inside the camera that moves:
this is the shutter, and it controls how long light is let in
(its speed is given in seconds or fractions of a second; 1/60 means 1/60 of a
second, and is sometimes written simply as 60).

There is also a diaphragm inside the lens;
it closes down while the picture is being taken, leaving just a hole in the
middle, and the size of that hole is the aperture.
The aperture is written as a fraction of the focal length (f/1.4, f/2,
f/2.8, ... f/16, f/22, f/32, ... f/64), so the smaller the hole, the bigger the f-number,
and the smaller the hole, the bigger the depth of field.

-3a- 'Trouble finding HDR patches' doesn't seem to be a real problem; it is only a
warning and the results usually come out fine... it sounds a bit like 'warning: no light
source found' when you are calculating a DF... ;-))

-3b- This happens when the images don't contain enough data to build the camera
calibration (response) curves...
maybe others can explain it in more depth...

-4- This could be because the images are too different, or because some EXIF data are
missing (sometimes this happens because the software that imports the images
overwrites the EXIF data).

-5- no idea

-6- I found the same thing; however, a good tripod 'fixed' all my problems
;-))
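
To put rough numbers on -1- and -2- (this is an illustrative sketch, not something from hdrgen): the standard full-stop f-number sequence is nominally powers of the square root of 2, so each step halves the light admitted at a fixed shutter speed, just as halving the shutter time does at a fixed aperture.

```python
import math

# Standard full-stop f-numbers: each is (nominally) sqrt(2) times the previous,
# so the aperture area -- and the light admitted at a fixed shutter speed --
# halves at every step. That halving is what "one stop" means.
f_numbers = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32]

for n in f_numbers:
    rel_light = (1.4 / n) ** 2               # relative light, normalised to f/1.4
    stops_down = math.log2(1.0 / rel_light)  # ~1 stop per step (rounding aside)
    print(f"f/{n:<4}  relative light {rel_light:6.4f}  ({stops_down:4.1f} stops down)")

# On the shutter side, halving the time (1/60 -> 1/125 -> 1/250 ...) is
# likewise one stop less light each time.
```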

Hope it helps,
cheers,
pillo




1) First of all, I would like to know the best way to take photographs with different exposures.
If I understand correctly there are two ways: either change the f-number and keep the shutter speed fixed, or vary the shutter speed and keep the f-number fixed.
Which of the two ways is best?

It depends on the camera you use as to which method is easiest. I use a Canon 300D digital SLR with Auto Exposure Bracketing (AEB). I set a fixed aperture and shoot a sequence of three images: one at the base shutter speed, one at the shutter speed that produces +1 stop of exposure, and one at the speed that produces -1 stop. I repeat this process to get the range I'm looking for (say, 7 or 8 stops' worth). I use overlapping exposures so that I can compare the lighting conditions in identical exposures and make sure the light is not changing radically during the picture-taking process.

I use a fixed aperture so that the Depth of Field does not change throughout the image sequence. I shoot "mirror ball" images and choose an aperture and focal length that produce sharp focus of the ball and very blurred focus on the distant background so that the edges of the mirror ball are easy to see and define.

For example:

I would shoot a mirror ball on a stand with my camera fixed to a tripod. I use the camera's self-timer to trigger the 3-image AEB sequence so that image registration is as close as possible (i.e., I don't touch the shutter release, to minimize camera shake) and also so that I don't appear in the images. I set the desired aperture, say f/8, and let the metered exposure tell me which shutter speed "correctly" exposes the FIRST base image - here let's say it is 1/60 second.

I would then set the AEB for +/- 1 stop, the aperture to f/8 and shoot the following sequence:

1) FIRST base exposure = @ 1/60, -1 = @ 1/125 (faster shutter, less light), +1 = @ 1/30 (slower shutter, more light)

Then I would manually reset the shutter speed to 1/15 and shoot
2) NEXT base exposure = @ 1/15, -1 = @ 1/30, +1 = @ 1/8

Then set the shutter speed manually to 1/250 and shoot
3) NEXT base exposure = @ 1/250, -1 = @ 1/500, +1 = @ 1/125

etc. to cover the dynamic range you want to expose. Each image is 1 stop away from its neighbor. In this example I have:

MOST EXPOSURE < 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500 > LEAST EXPOSURE

Seven exposures' worth of images, which may be an acceptable dynamic range for certain lighting conditions.

Notice the "overlap" at 1/30 and 1/125 - I would compare the two images taken at identical settings to make sure they are not drastically different; if they are, the lighting is changing and the exposures are probably not really 1 stop apart.

2) Are "Lens Aperture", "F-number" and "F-stop" same thing?
Are "exposure time" and "shutter speed" same thing?
I know that these questions may seem to be "stupid" to most of you.. but as I said I am not familiar at all, with all these ..

Do a Google search on these terms or see one of the many sites about basic photographic principles. The terms are not identical, but together they describe the parameters you can vary to determine how much light hits the film or sensor (the exposure). A "stop" is essentially an increment of exposure: each successive stop admits half or twice as much light as its neighbor.
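
As a worked example of that arithmetic (a sketch, not from the thread): the exposure value EV = log2(N^2 / t) combines the f-number N and the exposure time t in seconds, and each increase of one EV halves the light reaching the sensor.

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t); +1 EV means half as much light on the sensor."""
    return math.log2(f_number ** 2 / shutter_s)

print(exposure_value(8, 1 / 60))     # ~11.9
print(exposure_value(8, 1 / 125))    # ~13.0 -> one stop less light
print(exposure_value(5.6, 1 / 125))  # ~11.9 -> opening up one stop cancels it out
```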


Thank you for your answers. Both are really very useful!

I am still a little bit confused about aperture...
Does a big aperture number (big depth of field) mean that you "focus" (if I can use this term) on the whole scene?
And does a small aperture (small depth of field) mean that you focus only on the foreground?

In my case I take photos of a mirrored ball in order to create a light probe.
What should I use for the aperture? Small or big values for the aperture number?

Thanks,
Despina

I am still a little bit confused about aperture...
Does a big aperture number (big depth of field) mean that you "focus" (if
I can use this term) on the whole scene?

The opposite - the bigger the number (f/16), the smaller the aperture, and yet
the larger the depth of field.

And does a small aperture (small depth of field) mean that you focus only
on the foreground?

Big aperture (f/2.8) = more light / less depth of field.
Small aperture (f/11) = less light / more depth of field.

Have a look at this page - it gives a decent demonstration of the relationships:
http://www.photonhead.com/exposure/exposure.php

HTH
Rob




Hi Despina,

Thanks to the helpful folks on the mailing list, most of your questions have already been answered, but I'll try to add a few points...

From: "Despina Michael" <[email protected]>
Date: January 6, 2005 9:07:46 AM PST

1) First of all, I would like to know the best way to take photographs with different exposures.
If I understand correctly there are two ways: either change the f-number and keep the shutter speed fixed, or vary the shutter speed and keep the f-number fixed.
Which of the two ways is best?

You should definitely vary the shutter speed (i.e., exposure time) rather than the aperture (i.e., f-number, f-stop). You should read the attached tips, taken from the quickstart_pf.txt file distributed with Photosphere from <www.anyhere.com>.

3) What is the problem
when there is a warning "Trouble finding HDR patches*****"?
And when there is a warning "Poor convergence for order 1 fit"? (Does the 1, or whatever number X appears there, refer to a problem with the Xth image in the command arguments?)

Warnings are mostly there as a kind of excuse for when things don't turn out. If they turn out, then you shouldn't lose any sleep over them.

4) What is the problem when there is an error
"Cannot solve for response function"?
Is it because it cannot generate the camera's response-function file from that particular series of photographs?

It's probably because the sequence didn't capture enough dynamic range, or there were no smooth gradients. The best strategy is to use a good sequence to get the camera response, then store it and reuse it via hdrgen's -r option. (See the related tips in the attachment.)
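
For anyone scripting this, here is a minimal sketch of that strategy, assuming a response file has already been saved. Only the -r flag (and -a, mentioned earlier in the thread) are confirmed here; the -o output option, the argument order, and the file names are assumptions, so check hdrgen's own usage message first.

```python
import glob
import subprocess

# Bracketed JPEGs of the scene, sorted so the exposures come in order.
# File names are hypothetical.
images = sorted(glob.glob("scene_*.jpg"))

# Reuse a previously derived camera response via -r, as suggested above.
# The -o output flag is assumed, not confirmed by this thread.
subprocess.run(["hdrgen", "-r", "camera.rsp", "-o", "scene.hdr", *images], check=True)
```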

5) One problem with the resulting HDR images (in the cases where an image was generated)
was that they came out somewhat "green". What can cause that?

I don't know. I would have to see the source sequence, but there's probably not much I could do to fix it.

6) My photographs are not perfectly aligned.
As I understand it, hdrgen uses an algorithm to align the photographs.
But I noticed that when the alignment algorithm is enabled (NOT using the -a option) the alignment gets worse,
and when I disable it (using the -a option) the result is better, but still not good (I guess because the original photographs are not aligned).

The automatic alignment algorithm is not fool-proof, but I don't know of one that is. It does not take care of rotation, and images that are very far out of alignment will not work, either. (The maximum computed shift between adjacent exposures is +/-64 pixels in X and Y.) The final solution is to use a tripod. Using a tripod AND performing automatic alignment usually gives the best results. I have good luck myself with auto-bracketed hand-held exposures, and practice does help. I still get bad sequences, though -- usually in portrait mode, as I have a hard time not leaning during the exposures....

-Greg
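
A rough way to check how far apart two adjacent exposures are before handing them to hdrgen: this is not hdrgen's own alignment algorithm, just a quick phase-correlation estimate of the translation (file names are hypothetical, and rotation is not detected).

```python
import numpy as np
from PIL import Image

def estimate_shift(path_a: str, path_b: str) -> tuple[int, int]:
    """Estimate (dy, dx) translation between two same-size images via phase correlation."""
    a = np.asarray(Image.open(path_a).convert("L"), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=float)
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-9                 # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image into negative offsets.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# Two adjacent exposures from the bracketed series (hypothetical names).
dy, dx = estimate_shift("exp_0060.jpg", "exp_0125.jpg")
print(f"estimated shift ({dy}, {dx}) px; beyond about +/-64 px hdrgen's auto-alignment reportedly won't cope")
```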

Tips on HDR image creation taken from Photosphere quickstart blurb:

12) To create a high dynamic-range image, you need to start with
a set of "bracketed" exposures of a static scene. It is best if
you take a series of 10 or so exposures of an interior scene looking
out a window and containing some large, smooth gradients both inside
and outside, to determine the camera's natural response function.
Be sure to fix the camera white balance so it doesn't change, and
use aperture-priority or manual exposure mode to ensure that only
the speed is changing from one exposure to the next. For calibration,
you should place your camera on a tripod, and use a small aperture
(high f-number) to minimize vignetting. Take your exposure series
starting from the longest shutter time and working to the shortest in
one-stop increments. Make sure the longest exposure is not all white
and the darkest exposure is not all black. Once you have created your
image series, load it into Photosphere directly -- DO NOT PROCESS THE
IMAGES WITH PHOTOSHOP or any other program. Select the thumbnails,
then go to the "File -> Make HDR..." menu. Check the box that says
"Save New Response", and click "OK". The HDR building process
should take a few minutes, and Photosphere will record the computed
response function for your camera into its preferences file, which
will save time and the risk of error in subsequent HDR images.
You will also have the option of setting an absolute calibration
for the camera if you have a measured luminance value in the scene.
This option is provided by the "Apply" button submenu when the
measured area is selected in the image. (Click and drag to select.)
Once an HDR image has been computed, it is stored as a temporary
file in 96-bit floating-point TIFF format. This file is quite
large, but the data will only be saved in this format if you
select maximum quality and save as TIFF. Otherwise, the 32-bit
LogLuv TIFF format will be preferred (or the 24-bit LogLuv format
if you set quality to minimum). You also have the option of saving
to the more common Radiance file format (a.k.a. HDR format), or
ILM's 48-bit OpenEXR format. If you choose not to save the
image in high dynamic-range, the tone-mapped display image can be
written out as a 24-bit TIFF or JPEG image.
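
To check the "longest exposure not all white, darkest not all black" condition mechanically before building the HDR, here is a small sketch (with hypothetical file names) that reports how much of each extreme frame is clipped:

```python
from PIL import Image

def clipped_fraction(path: str, threshold: int, bright: bool = True) -> float:
    """Fraction of pixels at or beyond 'threshold' in an 8-bit grayscale rendering."""
    hist = Image.open(path).convert("L").histogram()   # 256 bins
    total = sum(hist)
    tail = sum(hist[threshold:]) if bright else sum(hist[: threshold + 1])
    return tail / total

# Longest and shortest exposures of the calibration series (hypothetical names).
print("longest exposure, near-white fraction: ",
      clipped_fraction("calib_longest.jpg", 250, bright=True))
print("shortest exposure, near-black fraction:",
      clipped_fraction("calib_shortest.jpg", 5, bright=False))
```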


Greg Ward wrote:

It's probably because the sequence didn't capture enough dynamic range,
or there were no smooth gradients. The best strategy is to use a good
sequence to get the camera response, then store it and reuse it via
hdrgen's -r option. (See the related tips in the attachment.)

Hi Greg and everyone,

Greg, I've been meaning to ask you about this. I have been shopping for a
new digital camera lately, and one of the criteria I have is the ability
to easily capture workable HDR images. Now, at the Berkeley workshop you
demonstrated doing this with your Olympus 3030, which has the ability to
take a 5-image autobracketed sequence, all separated by a stop. This
gives you the ability to shoot a rough HDR sequence, handheld. This is
what I was looking for, and I recall you saying that for this kind of
quickie HDR sequences one needed to make sure the midrange exposure was
separated by at least two stops from the endpoints. I assumed this was to
ensure capturing the "entire range" of a "typical scene" (we have
discussed the limitations of this method in capturing the immense dynamic
range of a scene with direct sun, etc).

Of course when one looks at the current crop of digicams, the field
quickly gets thinned out when applying this criteria. All too often a
nice camera (like the new Sony V3) is eliminated because its autobracket
function consists of a three shot maximum, separated by a single stop --
not enough. At work, we have a Canon Rebel, which also only does three
shots for an autobracket sequence, but can separate them by two stops
apiece.

So, I'm wondering. If I got a camera like the V3, which only goes one
stop to either side of the ideal exposure, but *first* used a tripod to
create an excellent HDR sequence as per your (excellent) quickstart guide,
and created and saved a camera response curve for my V3 from that sequence,
would my crude handheld three-shot/one-stop HDR sequences be reasonably
accurate (assuming, again, moderate-dynamic-range scenes)?

That was a long and winding sentence, but I hope you get the idea. (?)

P.S.
Just today I saw your camera for $225 on a closeout special at a store in
Manhattan that's not even known for low camera prices. Ahh, technology.

P.P.S.
I was in the aforementioned store looking to purchase a power supply for
my work computer because the cooling fan in its power supply rolled over
and died 30 minutes into my workday this morning, a morning when I was
just finally getting caught up. Ahh, technology. =8-)

- Rob Guglielmetti

Hi Rob,

Short answer to your long question: 3 exposures separated by 1 f-stop each is not enough for an HDR image, in my opinion.

I find that 2 stops on either side is often less than I need for outdoor shots that include bright clouds. Also, I wish the shortest exposure on my Olympus 4040 were faster than 1/800th sec., as that still leaves bright areas saturated. (I used to have an Olympus 3030 and it was the same.) The manufacturers do seem to be headed in the wrong direction when it comes to HDR these days. I've also noticed that they like to add all kinds of auto exposure curves and crap like that, which totally screws me up. To be honest, you're better off getting a C-3040 from some discount place than a newer, fancier camera. That goes for SLRs as well. I honestly don't think they're worth the extra bucks, unless you happen to have a slew of lenses you have to justify owning.

-Greg

Greg, Rob, thanks for your answers!

Have a look at this page - it gives a decent demonstration of the relationships:
http://www.photonhead.com/exposure/exposure.php

The link is exactly what I need!!

5) One problem with the resulting HDR images (in the cases where an image was generated)
was that they came out somewhat "green". What can cause that?

I don't know. I would have to see the source sequence, but there's
probably not much I could do to fix it.

There is no point in attaching my current photos. From what I learned from your replies, I understand that my photos are quite bad...

I'll take new ones, taking into account everything you said. Hopefully the results will be better. If I find out what causes the "green" in the generated HDRI, I will let you know!
Thanks again,
Despina

Hi all,

I am trying to calibrate the camera response function
(unfortunately with no good results yet - I believe there are still alignment problems, although I used a tripod).

I would like to ask whether the scene in the photo
www2.cs.ucy.ac.cy/~cs99dm1/calib.zip
is appropriate for that purpose.

As the Photosphere quickstart suggests, it is an interior scene looking out a window.
Of course, I took a series of photos of this scene at different exposures.

Thanks,
Despina
