Oh, they are, very low (~2 nits in the HDR); I was just curious because these corner areas are not truly in the field of view that we're evaluating. I assume evalglare is only looking at the hemisphere anyway. (?)
Having said all that, does anyone have a masking tip or something like that to clean up those corners, just out of aesthetic curiosity?
···
On Mon, Sep 27, 2010 at 4:42 PM, Gregory J. Ward <[email protected]> wrote:
Hi Rob,
You need to check what the values of those border pixels are. Chances are,
they are quite low compared to the circular image, and you are only seeing
them because of the tone-mapping compression going on in Photosphere,
assuming that's what you're using.
Best,
-Greg
> From: Rob Guglielmetti <[email protected]>
> Date: September 27, 2010 3:04:34 PM PDT
>
> Apologies for the cross posting, just trying to cover the bases.
>
> I have been experimenting with a new camera and Sigma 8mm fisheye lens, creating HDR images for input to Jan Wienold's/Fraunhofer's evalglare program. On the really long exposures, you can actually see the back end of the lens and I guess some of the internals of the camera body itself. While this is exceedingly cool/interesting, I wonder if this is impacting the validity of the HDRs. When I create a Radiance HDR image (-vth) I get these nice round images with totally black corners. With the camera, I end up with a rectangular image and, as I said, some luminous pixels on the long exposures. Is this a problem, and how do folks deal with this in practice? Even if it's not a problem from an accuracy standpoint, aesthetically it's nice to produce photos that look like the Radiance fisheye output.
>
> Thanks...
>
> ================
> Rob Guglielmetti
_______________________________________________
HDRI mailing list
[email protected]
http://www.radiance-online.org/mailman/listinfo/hdri
--
Rob Guglielmetti
Try:
pcomb -e 's(x):x*x;m=if(xmax*ymax/4-s(x-xmax/2)-s(y-ymax/2),1,0);ro=m*ri(1);go=m*gi(1);bo=m*bi(1)' input.hdr > output.hdr
-Greg
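For readers following along, here is a minimal Python sketch (my own illustration, not Radiance code) of what the pcomb expression above computes: pixels inside the centered circle keep their values and the corners are zeroed. The function names are mine; pcomb's `if(a,b,c)` returns b when a is positive.

```python
def circle_mask(x, y, xmax, ymax):
    """Mirror of s(x):x*x; m=if(xmax*ymax/4 - s(x-xmax/2) - s(y-ymax/2), 1, 0):
    1 inside the centered circle, 0 in the corners."""
    s = lambda v: v * v
    return 1 if xmax * ymax / 4 - s(x - xmax / 2) - s(y - ymax / 2) > 0 else 0

def mask_pixel(rgb, x, y, xmax, ymax):
    """Apply the mask to one pixel, as ro=m*ri(1) etc. do per channel."""
    m = circle_mask(x, y, xmax, ymax)
    return tuple(m * c for c in rgb)

# Center pixel survives; a corner pixel is blacked out.
center = mask_pixel((1.0, 0.5, 0.25), 50, 50, 100, 100)   # -> (1.0, 0.5, 0.25)
corner = mask_pixel((1.0, 0.5, 0.25), 0, 0, 100, 100)     # -> (0.0, 0.0, 0.0)
```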
···
Thanks Greg, worked a treat.
- Rob
···
--
Rob Guglielmetti
Hi Greg,
I am in the midst of trying to get Evalglare to accurately process HDR images. In order to account for light fall-off in a Canon + fish-eye generated HDR, I have tried to multiply the original HDR image by a correction factor using the pcomb command. However, the resulting image does not maintain luminance values similar to the original (as it should, particularly for the center part of the image, which is only multiplied by a factor of 1).
I am using the following command line and would appreciate any guidance on why the luminance values are so affected:
pcomb -e "ro=ri(1)*ri(2);go=gi(1)*gi(2);bo=bi(1)*bi(2);" vignette_283.pic Chauhaus3cr.hdr > Chauhaus3Bvg.hdr
My initial attempt actually used the -o option to normalize the values of the image before processing, but the resulting vignetted image was very dark and the luminance values had drastically dropped:
pcomb -e "ro=ri(1)*ri(2);go=gi(1)*gi(2);bo=bi(1)*bi(2);" -o vignette_283.pic Chauhaus3cr.hdr > Chauhaus3vg.hdr
As an aside to something Rob mentioned earlier, I do not believe that right now Evalglare automatically judges only the circular view of the fish-eye HDR while processing. In order to get accurate results, the rectangular images should be cropped (using pcompos) and the view type should be verified before processing. I have discovered that several of my Photosphere-generated HDRs are being judged as a perspective view rather than an angular fish-eye as I had assumed (vtv instead of vta), and that I had to manually adjust this setting before getting accurate and meaningful Evalglare results. (Greg -- is there any way to specify the lens type in Photosphere before compiling the HDR so that it follows through from the beginning?)
Best,
Rashida
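For what it's worth, the intended operation can be sketched in Python (an illustration with made-up numbers, not the actual Radiance pipeline): each pixel is multiplied by a correction factor that is 1.0 at the image center, so center luminance should pass through unchanged when exposures are handled correctly.

```python
def apply_vignette_correction(row, factors):
    """Per-pixel multiply of one scanline by a vignetting-correction map."""
    return [tuple(k * c for c in px) for px, k in zip(row, factors)]

# Made-up values: the center factor is 1.0, an edge factor compensates
# for fall-off so both pixels come back to the same luminance.
scanline = [(12.0, 12.0, 12.0), (6.0, 6.0, 6.0)]   # center pixel, edge pixel
factors = [1.0, 2.0]                               # correction map
corrected = apply_vignette_correction(scanline, factors)
# Center pixel is unchanged; edge pixel is boosted back to 12.0.
```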
···
--
Rashida Mogri | LEED AP
Harvard Graduate School of Design
MDesS, 2011 | Sustainable Design
Just realized, you should include a "-o" somewhere on the pcomb command line to maintain absolute values, in case the input image has already passed through pfilt or the like with an exposure adjustment.
-Greg
···
Some quick responses:
1) You do need to use the -o option in your pcomb command, but it only applies to the immediately following input file. Therefore, you need to put the "-o" between your two file names, or change their order, to keep the absolute values. Displaying the output with ximage, you can use the "-e auto" option to bring the values into the correct range for display (i.e., tone-map the result).
2) There is no easy way for Photosphere to figure out whether a fish-eye lens is being used. I would have to create a lens database similar to Photoshop CS5's and read the hidden maker tags in the Exif file, and still it would only work "sometimes." You are better off adding or correcting the VIEW= string as Jan suggests using the "vinfo" command in Radiance.
Cheers,
-Greg
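The bookkeeping that -o undoes can be sketched as follows (a simplified Python model; EXPOSURE is the real Radiance header convention, but the function is my illustration): a picture that has passed through pfilt stores pixel values multiplied by its EXPOSURE factor, and -o divides that factor back out for the next input file so the expression sees absolute values.

```python
def undo_exposure(pixels, exposure):
    """Recover absolute values from an exposure-scaled picture
    (conceptually what pcomb's -o does for the file that follows it)."""
    return [v / exposure for v in pixels]

# A picture run through pfilt with a 2x exposure adjustment stores
# doubled values; dividing by EXPOSURE=2 restores the originals.
stored = [0.5, 1.0, 4.0]
absolute = undo_exposure(stored, 2.0)   # -> [0.25, 0.5, 2.0]
```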
···
Apologies for the cross posting, just trying to cover the bases.
I have been experimenting with a new camera and Sigma 8mm fisheye lens,
creating HDR images for input to Jan Wienold's/Fraunhofer's evalglare
program. On the really long exposures, you can actually see the back end of
the lens and I guess some of the internals of the camera body itself. While
this is exceedingly cool/interesting, I wonder if this is impacting the
validity of the HDRs. When I create a Radiance HDR image (-vth) I get these
nice round images with totally black corners. With the camera, I end up with
a rectangular image and as I said some luminous pixels on the long
exposures. Is this a problem, and how do folks deal with this in practice?
Even if it's not a problem from an accuracy standpoint, aesthetically it's
nice to produce photos that look like the Radiance fisheye output.
Thanks...
···
================
Rob Guglielmetti
www.rumblestrip.org
Hi Rob,
in the current version of evalglare only angular views are supported!! Don't use -vth. We will change that with the next release, but until then the only supported view type is -vta. Using -vth will return wrong values!
By the way - usually most of the available fish-eye lenses are angular, not hemispherical.
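To make the distinction concrete, here is a small sketch of the two radial mappings (the formulas are standard; the normalization to a 180-degree view is my assumption): an angular/equidistant fisheye (-vta style) places a ray at a radius proportional to its off-axis angle, while a hemispherical projection (-vth style) uses the sine of that angle, so the two agree at the center and rim but differ in between.

```python
import math

def angular_radius(theta):
    """Equidistant (-vta style) mapping: radius grows linearly with the
    off-axis angle; normalized so theta = 90 degrees lands on the rim."""
    return theta / (math.pi / 2)

def hemispherical_radius(theta):
    """Hemispherical (-vth style) mapping: radius follows sin(theta)."""
    return math.sin(theta)

# Both reach the rim at 90 degrees, but at 45 degrees they differ:
print(angular_radius(math.pi / 4))        # 0.5
print(hemispherical_radius(math.pi / 4))  # ~0.707
```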
Another important issue: if you use programs which might change the view (like pcompos), the view string in the header becomes obsolete and the standard view (-vtv, vh=45, vv=45) is taken by the Radiance routines (which we use in evalglare), which also results in completely wrong results!!! The fatal thing is that you still see the old view string when you look at it with getinfo; it is just a tab added to "disable" the view string.
This caused a lot of trouble in the past, and we will now also change the command-line options of evalglare so that you have to provide the view each time you call evalglare. This will also be included in the next release. Until then, be sure that the image has the correct view options without a tab and that -vta is used!
Jan
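Jan's point about the tab-disabled view string can be checked mechanically. A Python sketch (the VIEW= header line is Radiance's convention; the parsing and the example header are mine) that distinguishes an active VIEW= line from one disabled by a leading tab:

```python
def active_view(header_lines):
    """Return the last active VIEW= line, skipping tab-disabled ones
    (a leading tab is how the old view string gets 'commented out')."""
    view = None
    for line in header_lines:
        if line.startswith("VIEW="):
            view = line                  # active view string
        elif line.lstrip().startswith("VIEW="):
            pass                         # leading whitespace: disabled
    return view

# Example header (made up): pcompos left only a disabled view behind,
# so evalglare would silently fall back to the default -vtv view.
header = [
    "FORMAT=32-bit_rle_rgbe",
    "\tVIEW= -vta -vh 180 -vv 180",      # tab-prefixed: ignored by Radiance
]
print(active_view(header))               # prints "None"
```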
_______________________________________________
Radiance-general mailing list
[email protected]
http://www.radiance-online.org/mailman/listinfo/radiance-general
Hi Jan, thanks for the clarifications, and sorry for the confusion. I meant angular, but typed -vth. I do know that evalglare needs to be working on angular fisheyes. Thanks for the heads-up about the view headers; we will be careful to give evalglare exactly what it needs to do the proper evaluations...
- Rob
···
--
Rob Guglielmetti