Findglare and HDR photographs

Dear list,

I've been experimenting with Radiance's findglare and glarendx, trying
to get UGRs from photographic HDRs. I'm using the Sigma 4.5mm on a
D200, which seems to be quite a popular choice amongst you.

Unlike the FC-E8/Coolpix combo, which produces an equidistant
projection (-vta), the Sigma 4.5mm results in a 180deg equisolidangle
view. I gather from this post to the rad-gen list:
http://www.radiance-online.org/pipermail/radiance-general/2010-April/006709.html
that the NYT cart was based on a Sigma lens (4.5mm ?), operated at
F5.6. The code snippet in that post suggests that the HDRs were
vignetting corrected.
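A quick numerical sketch (my illustration, assuming the ideal mapping formulas r = f*theta for equidistant and r = 2*f*sin(theta/2) for equisolid-angle) shows how the two projections drift apart off-axis:

```python
import math

f = 1.0  # focal length; radii below are in units of f

for theta_deg in (30, 60, 90):
    t = math.radians(theta_deg)
    r_equidistant = f * t                  # ideal -vta mapping, r = f*theta
    r_equisolid = 2 * f * math.sin(t / 2)  # ideal equisolid-angle mapping
    print(theta_deg, round(r_equidistant, 4), round(r_equisolid, 4))
```

With the same focal length, the radii agree to about 1% at 30deg off-axis but differ by roughly 10% at the 90deg rim.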

An overall calibration of the image luminance can be carried out (I
think) by measuring the vertical illuminance at the lens when the
exposure-bracketed sequence is taken, and then running findglare and
glarendx -t ver_illu on the HDR, which should give a calibration
factor that can then be used to fiddle with the EXPOSURE= line. This
is probably more accurate than calibrating against spot meter
readings. So far, so good.
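In sketch form, the arithmetic would be something like this (illustrative numbers only, and assuming Radiance's convention that the derived luminance scales as pixel value divided by the EXPOSURE header value):

```python
# Illustrative numbers only: the meter reading and the HDR-derived value
E_measured = 1250.0   # lux, vertical illuminance measured at the lens
E_hdr = 1100.0        # lux, as reported by glarendx -t ver_illu

cal_factor = E_measured / E_hdr            # scale all luminances by this
old_exposure = 1.0                         # current EXPOSURE= header value
new_exposure = old_exposure / cal_factor   # derived luminance rises by cal_factor

print(round(cal_factor, 4), round(new_exposure, 2))
```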

What I don't seem to be able to find in the googleable literature, nor
in the HDR book, is any words of wisdom regarding the impact of the
lens projection on glare metrics. Radiance doesn't have an
equisolidangle view type, so using pinterp as detailed in this post:
http://www.radiance-online.org/pipermail/radiance-general/2011-August/008141.html
is not an option.

It might be possible to utilise ImageMagick to re-project the JPGs
prior to running hdrgen, but I'd rather not go there.

The deviation between equisolidangle and -vta is most noticeable for
high off-axis angles, which is also where glare sources have less of
an impact (Guth position index). I'm therefore wondering whether
people just tend to go with the vignetting-corrected and
luminance-calibrated HDR without worrying too much about re-projecting
the fisheye. The same question would apply to evalglare's DGP rating,
which relies on the HDR coming in -vta. Has anybody looked into this?

Cheers

Axel

Hi Axel,

This is interesting. I hadn't realized that the Sigma lens used something other than the equidistant projection. I thought this was the more normal type, and our casual measurements seemed to indicate that it followed this projection. Perhaps I wasn't as careful as I ought to have been with my observations. The horizon on the Sigma is enough of a mess that it's difficult to gauge things accurately at the outer rim of the circular image.

Looking at the ever-helpful Wikipedia page on the topic <http://en.wikipedia.org/wiki/Fisheye_lens#Mapping_function>, I see that the equidistant and equisolid-angle projections are fairly similar except at the outer reaches (near +/-90°). Of course, errors could still be large assuming the wrong projection if your only light source is out in that part of the view.

In lieu of reprojecting the image with pinterp, which is as you say unsupported, it is possible to apply a correction to the image values to account for the difference in solid angle at each pixel. Given that the solid angle of the equisolid-angle projection is the same for each pixel, we really only need the solid angle for the equidistant projection. This can be computed with a simple expression, which is sin(theta)/theta.
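Numerically, that factor behaves like this (a sketch; sin(theta)/theta is taken as the per-pixel solid angle of an ideal equidistant image, normalised to 1 on-axis, and the function name is made up):

```python
import math

def vta_relative_solid_angle(theta):
    """Relative per-pixel solid angle of an ideal equidistant (-vta)
    image, normalised to 1 on-axis: sin(theta)/theta."""
    return math.sin(theta) / theta if theta > 0 else 1.0

for deg in (1, 45, 89):
    print(deg, round(vta_relative_solid_angle(math.radians(deg)), 4))
```

The factor falls to about 0.90 at 45deg and 0.64 near the rim, so rim pixels in a true equidistant image cover roughly two-thirds of the solid angle of on-axis pixels.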

Does this help?
-Greg

···

From: Axel Jacobs <[email protected]>
Date: February 3, 2012 6:27:59 AM PST


Hi Greg,

This is interesting. I hadn't realized that the Sigma lens used something other than the equidistant projection. I thought this was the more normal type, and our casual measurements seemed to indicate that it followed this projection. Perhaps I wasn't as careful as I ought to have been with my observations. The horizon on the Sigma is enough of a mess that it's difficult to gauge things accurately at the outer rim of the circular image.

True, the horizon is a bit messy, but A LOT cleaner than on the Nikon FC-E8.

Looking at the ever-helpful Wikipedia page on the topic <http://en.wikipedia.org/wiki/Fisheye_lens#Mapping_function>, I see that the equidistant and equisolid-angle projections are fairly similar except at the outer reaches (near +/-90°). Of course, errors could still be large assuming the wrong projection if your only light source is out in that part of the view.

Here's another good page with a helpful diagram (scroll down to 'fisheye'):
http://www.bobatkins.com/photography/technical/field_of_view.html

In lieu of reprojecting the image with pinterp, which is as you say unsupported, it is possible to apply a correction to the image values to account for the difference in solid angle at each pixel. Given that the solid angle of the equisolid-angle projection is the same for each pixel, we really only need the solid angle for the equidistant projection. This can be computed with a simple expression, which is sin(theta)/theta.

You see, this is where I get a little lost between 'projection',
'distortion', and 'vignetting'. They are, of course, different things.
Something like (don't quote me on it):
- projection: that would be 'equidistant' or 'equisolidangle'
- distortion: the deviation from the ideal 'projection'. Think pincushion or barrel
- vignetting: drop-off in image brightness towards the image horizon

Would not the sin(theta)/theta correction account only for the pixel
brightness? In other words: the pixel 'location' on the photographic
plate/CCD chip would still be wrong, so that the Guth index in the UGR
formula would be off?

I can imagine that one could create a pcomb cal file that builds up a
new image from the corrected radial distance of the pixel in the
source image. If this gets too aliased (blocky), one could average over
the nearby pixels using the optional x,y offset that pcomb provides,
e.g. (spaces added for clarity):
ro = .5*ri(1) + .5*( ri(1,-1,0) + ri(1,1,0) + ri(1,0,-1) + ri(1,0,1) )/4
(and likewise for go and bo), which is effectively a box filter. Not
tested! Don't try this at home! One could fiddle with the pixel/off-pixel
multipliers (both .5 in this case) to see what would look best. I'm not
sure how floating-point pixel coordinates are handled by pcomb. Are they
just rounded off?
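As a sketch of that idea outside pcomb (my illustration, untested against real HDRs; the function name, the image-circle normalisation, and the integer-truncation lookup are all my own assumptions), the radial remapping plus the .5/.5 box filter might look like:

```python
import math

def equisolid_to_equidistant(img):
    """Remap a square equisolid-angle fisheye image (list of rows of
    floats) to an equidistant (-vta style) projection: nearest-pixel
    lookup plus a .5 centre / .5 neighbour-average box filter."""
    h, w = len(img), len(img[0])
    cx, cy = w / 2.0, h / 2.0
    rmax = min(cx, cy)                              # image-circle radius
    out = [[0.0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            dx, dy = i + 0.5 - cx, j + 0.5 - cy
            r = math.hypot(dx, dy)
            if r == 0.0 or r > rmax:
                out[j][i] = img[j][i] if r == 0.0 else 0.0
                continue
            theta = (r / rmax) * (math.pi / 2)      # target: radius prop. to theta
            r_src = rmax * math.sin(theta / 2) / math.sin(math.pi / 4)
            s = r_src / r                           # radial remap factor
            si = min(max(int(cx + dx * s), 0), w - 1)  # truncated to integer
            sj = min(max(int(cy + dy * s), 0), h - 1)  # pixel indices
            nbrs = [img[min(max(sj + oj, 0), h - 1)][min(max(si + oi, 0), w - 1)]
                    for oi, oj in ((-1, 0), (1, 0), (0, -1), (0, 1))]
            out[j][i] = 0.5 * img[sj][si] + 0.5 * sum(nbrs) / 4
    return out

flat = [[1.0] * 8 for _ in range(8)]                # uniform test image
print(equisolid_to_equidistant(flat)[4][4])         # prints 1.0
```

A uniform image stays uniform inside the circle, which is a minimal sanity check that the filter weights sum to one.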

This approach would, of course, smudge out the luminance of the new,
constructed pixel to a certain extent, but considering that the lens
response function does this anyhow, it's probably a small price to
pay for a smooth-as-a-baby's-bottom image.

This would correct the pixel 'position', but here is where I get
confused: how would one then have to correct the pixel brightness
(vignetting?) to account for this re-projection of pixel locations,
while still maintaining the photometric integrity of the image as a
whole (vertical illuminance, say... or UGR)?


_______________________________________________
HDRI mailing list
[email protected]
http://www.radiance-online.org/mailman/listinfo/hdri

Hi Axel,

In lieu of reprojecting the image with pinterp, which is as you say unsupported, it is possible to apply a correction to the image values to account for the difference in solid angle at each pixel. Given that the solid angle of the equisolid-angle projection is the same for each pixel, we really only need the solid angle for the equidistant projection. This can be computed with a simple expression, which is sin(theta)/theta.

You see, this is where I get a little lost between 'projection',
'distortion', and 'vignetting'. They are, of course, different things.
Something like (don't quote me on it):
- projection: that would be 'equidistant' or 'equisolidangle'
- distortion: the deviation from the ideal 'projection'. Think pincushion or barrel
- vignetting: drop-off in image brightness towards the image horizon

Would not the sin(theta)/theta correction account only for the pixel
brightness? In other words: the pixel 'location' on the photographic
plate/CCD chip would still be wrong, so that the Guth index in the UGR
formula would be off?

The brightness of the pixel will be correct once you've accounted for vignetting. The sin(theta)/theta multiplier is used to reweight the pixel so that it contributes the same amount to luminous flux (luminance times steradians) everywhere in the image. This is what you need for the glare source calculations, I think. Otherwise, you would be treating the light sources near the view horizon as bigger than they should be.

I can imagine that one could create a pcomb cal file that builds up a
new image from the corrected radial distance of the pixel in the
source image. If this gets too aliased (blocky), one could average over
the nearby pixels using the optional x,y offset that pcomb provides,
e.g. (spaces added for clarity):
ro = .5*ri(1) + .5*( ri(1,-1,0) + ri(1,1,0) + ri(1,0,-1) + ri(1,0,1) )/4
(and likewise for go and bo), which is effectively a box filter. Not
tested! Don't try this at home! One could fiddle with the pixel/off-pixel
multipliers (both .5 in this case) to see what would look best. I'm not
sure how floating-point pixel coordinates are handled by pcomb. Are they
just rounded off?

I suppose this would work, but it seems more work than is necessary. Yes, floating point pixel coordinates are rounded off.

This approach would, of course, smudge out the luminance of the new,
constructed pixel to a certain extent, but considering that the lens
response function does this anyhow, it's probably a small price to
pay for a smooth-as-a-baby's-bottom image.

This would correct the pixel 'position', but here is where I get
confused: how would one then have to correct the pixel brightness
(vignetting?) to account for this re-projection of pixel locations,
while still maintaining the photometric integrity of the image as a
whole (vertical illuminance, say... or UGR)?

I guess you'd want to correct for vignetting in whatever projection you measured your vignetting error, probably before getting to this reprojection stage. I still think the easiest thing is to correct for vignetting and pixel size at the same time, then go ahead and treat the image as if it's an equidistant projection. In other words, divide by the solid angle ratio (i.e., multiply each pixel by theta/sin(theta)) and then pass it through findglare as you would normally.
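A quick check of that recipe (a sketch in relative units, my illustration): once each pixel is multiplied by theta/sin(theta), its contribution (luminance times the per-pixel solid angle assumed under -vta) becomes independent of theta:

```python
import math

for theta_deg in (10, 45, 80):
    t = math.radians(theta_deg)
    assumed_omega = math.sin(t) / t   # relative solid angle assumed under -vta
    reweight = t / math.sin(t)        # the multiplier applied to the pixel value
    print(theta_deg, round(reweight * assumed_omega, 6))  # 1.0 at every angle
```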

Make sense?
-Greg

Thanks for this pat on the back, Greg.

I guess you'd want to correct for vignetting in whatever projection you measured your vignetting error, probably before getting to this reprojection stage. I still think the easiest thing is to correct for vignetting and pixel size at the same time, then go ahead and treat the image as if it's an equidistant projection. In other words, divide by the solid angle ratio (i.e., multiply each pixel by theta/sin(theta)) and then pass it through findglare as you would normally.

Will do. I shall report back to this thread. One last sub-question, if I may:

An overall calibration of the image luminance can be carried out (I
think) by measuring the vertical illuminance at the lens when the
exposure-bracketed sequence is taken, and then running findglare and
glarendx -t ver_illu on the HDR, which should give a calibration
factor that can then be used to fiddle with the EXPOSURE= line. This
is probably more accurate than calibrating against spot meter
readings. So far, so good.

Does this make sense? Is this what the NYT trolley did? Spot luminance
meter calibrations are a bit messy, because it's very difficult to
match the target circle of the luminance meter against a pixel value
('L' in ximage), or against a box average
('drag-your-mouse-in-ximage', then hit 'L').

The problem with lux meters, on the other hand, is that cosine
correction for near-the-horizon angles is hopeless, even for 'proper'
illuminance meters. Some manufacturers will not even give you a number
for angles > 80deg, which is a problem almost identical to the HDR
projection function. Perfect alignment appears to be critical for
light sources close to the visible horizon of the meter, irrespective
of the cos weighting.
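The alignment sensitivity is easy to quantify in a sketch (my illustration; the function name is made up): near grazing incidence the cosine weighting changes rapidly, so a one-degree tilt that is harmless at 30deg off-axis produces a roughly 20% error at 85deg:

```python
import math

def cosine_error(theta_deg, tilt_deg):
    """Relative error in the cos(theta) weighting when the meter head is
    tilted by tilt_deg degrees (increasing the incidence angle)."""
    true = math.cos(math.radians(theta_deg))
    seen = math.cos(math.radians(theta_deg + tilt_deg))
    return seen / true - 1.0

print(round(cosine_error(30, 1), 3))   # about -0.01 (1%)
print(round(cosine_error(85, 1), 3))   # about -0.2 (20%)
```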

Cheers and have a pleasant weekend

Axel

Hi Axel,

From: Axel Jacobs <[email protected]>
Date: February 3, 2012 1:21:43 PM PST

...

Will do. I shall report back to this thread. One last sub-question, if I may:

An overall calibration of the image luminance can be carried out (I
think) by measuring the vertical illuminance at the lens when the
exposure-bracketed sequence is taken, and then running findglare and
glarendx -t ver_illu on the HDR, which should give a calibration
factor that can then be used to fiddle with the EXPOSURE= line. This
is probably more accurate than calibrating against spot meter
readings. So far, so good.

Does this make sense? Is this what the NYT trolley did? Spot luminance
meter calibrations are a bit messy, because it's very difficult to
match the target circle of the luminance meter against a pixel value
('L' in ximage), or against a box average
('drag-your-mouse-in-ximage', then hit 'L').

Yes, but you need to do it after all the spatial corrections have been applied (solid angle + vignetting).

The problem with lux meters, on the other hand, is that cosine
correction for near-the-horizon angles is hopeless, even for 'proper'
illuminance meters. Some manufacturers will not even give you a number
for angles > 80deg, which is a problem almost identical to the HDR
projection function. Perfect alignment appears to be critical for
light sources close to the visible horizon of the meter, irrespective
of the cos weighting.

I guess this doesn't surprise me. Having the bright areas in front rather than off to the side should improve things. You may be better off using a patch calibration, which is what they did at LBNL in their advanced glazings test facility. I don't recall what we did on the NYT trolley as far as calibration goes.

Cheers,
-Greg

Thanks, Greg

Enjoy your well-deserved weekend.

Over and out.

Axel

···

On 3 February 2012 21:29, Gregory J. Ward <[email protected]> wrote:

...