You can also use evalglare without a fish-eye lens, but you have to be aware of the following:
If you want to use the DGP, then you need a vertical illuminance value. By default, evalglare calculates this value from the image itself, but this works only if you have a full fish-eye view. If you don't have a fish-eye view, you can provide this value with the -i option (see the comment of Alstan); in that case you should measure the value with a luxmeter. If you haven't measured the illuminance, it gets problematic: the smaller your view angle is, the larger the error you make.
Please be aware that recent studies showed that glare caused by daylight is described best when the vertical illuminance is taken into account. So if you don't measure it (or capture it only partly through a restricted view), you might end up far away from a real glare evaluation. In other words: if you want to do a daylight glare evaluation, you have to measure the vertical illuminance (either via a full fish-eye image or with a luxmeter).
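To make the role of the vertical illuminance concrete, here is a small sketch of the published DGP equation (Wienold & Christoffersen, 2006) in Python. The function name and the sample numbers are illustrative, not taken from evalglare's source; note how Ev enters both the linear term and the denominator of the contrast term:

```python
import math

def dgp(ev, sources):
    """Daylight Glare Probability (simplified form, Wienold & Christoffersen 2006).

    ev      -- vertical eye illuminance in lux
    sources -- list of (Ls, omega, P) tuples: source luminance in cd/m2,
               solid angle in sr, and Guth position index
    """
    contrast = sum(ls ** 2 * omega / (ev ** 1.87 * p ** 2)
                   for ls, omega, p in sources)
    return 5.87e-5 * ev + 9.18e-2 * math.log10(1.0 + contrast) + 0.16

# One hypothetical bright window patch near the centre of view:
print(round(dgp(2500.0, [(8000.0, 0.1, 1.0)]), 3))
```

An error in Ev therefore propagates into the result twice, which is why a cropped view (and hence an underestimated Ev) distorts the evaluation so strongly.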
In principle, evalglare does not depend on the view type. So if you have a smaller view (and your camera catches the potential glare sources) plus an additional luxmeter, you can do a reliable daylight glare evaluation.
Up to the current version (0.9f), some view types (like -vth) are not supported, but this will change:
There will be a new version (1.0) released next week with the following new features (I will announce it as soon as it is on the server):
- all current view types will be supported (except parallel view), which means also -vth!
- the view is checked for validity (many problems occurred because users processed the image with programs like pcompos before feeding it into evalglare; the view information got lost, and a wrong view leads to totally wrong results -- the new evalglare will handle this)
- view options can also be provided as command-line options
- disability glare is also calculated
- the "Guth" visual field can be cut out
J. Alstan Jakubiec wrote:
You can use evalglare with a regular HDR image as far as I understand it. Jan can correct me if I'm wrong :). As Thomas notes, it is best to have the full hemisphere image, but if you don't have such an image, you can still get pretty good results with a couple of steps:
1. Make sure your HDR is calibrated to photometric units of cd/m². Doyle and Reinhart have a pretty reasonable tutorial on doing this using Photosphere: http://www.gsd.harvard.edu/research/gsdsquare/Publications/HDR_II_Photosphere.pdf
2. You'll want to make sure that the header of your HDR contains a Radiance view type that is pretty close to the opening angle of the camera lens you are using. An example would be something like "VIEW= -vtv -vh 60 -vv 40", which are parameters I just pulled from a random rpict-generated image on my computer. In the ideal case, your view type would be the angular fisheye, "-vta -vh 180 -vv 180". I think this is automatically taken care of to some degree by Photosphere, but it's worth checking before using your images for analysis.
3. It is also useful to have an illuminance reading from the time the photo was taken. The DGP calculation relies on total vertical eye illuminance and contrast. Usually the illuminance at the eye is calculated directly from the fisheye image, but when you don't have a view encompassing a full hemisphere, you can provide the illuminance separately by calling evalglare as "evalglare -i VerticalEyeIlluminance file.hdr".
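The header check in step 2 is easy to automate. Here is a minimal sketch that pulls the VIEW= line out of a Radiance HDR header so you can verify the view type before running evalglare; the sample header and the helper name are illustrative only:

```python
# Illustrative Radiance HDR header; a real file would be read in binary
# mode and the header ends at the first blank line.
sample_header = """#?RADIANCE
VIEW= -vta -vh 180 -vv 180
FORMAT=32-bit_rle_rgbe
"""

def view_options(header):
    """Return the VIEW= option tokens from an HDR header, or None."""
    for line in header.splitlines():
        if line.startswith("VIEW="):
            return line[len("VIEW="):].split()
    return None

print(view_options(sample_header))  # ['-vta', '-vh', '180', '-vv', '180']
```

If the returned view type is not what your lens actually produced (e.g. "-vtv" with the wrong opening angles, or no VIEW= line at all after post-processing), evalglare's geometry assumptions, and hence the result, will be wrong.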
On Mon, 06 Feb 2012 21:26:49 -0500, Thomas Bleicher > <firstname.lastname@example.org> wrote:
You can use the pinterp program to convert from one projection to another.
See the man page for an example or dig in the archives where this has been
discussed before. However, note that the glare equations are based on a
full hemispherical image. You will have to fill in the missing perimeter of
your converted fisheye image with values of the right brightness or your
glare evaluation will be off.
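The perimeter-filling step Thomas describes can be sketched as follows. This is a NumPy illustration of the idea (in a real Radiance workflow you would operate on the picture itself, e.g. via pcomb or pvalue); the function name, array shapes, and surround value are assumptions for the example:

```python
import numpy as np

def fill_perimeter(img, surround):
    """Fill everything outside the inscribed fisheye circle of a square
    luminance map with a constant surround luminance."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # distance of each pixel from the image centre, in pixels
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    out = img.copy()
    out[r > min(h, w) / 2.0] = surround  # outside the fisheye circle
    return out

# Stand-in for a converted fisheye image (zeros instead of real pixels):
filled = fill_perimeter(np.zeros((200, 200)), surround=500.0)
print(filled[0, 0], filled[100, 100])  # corner filled, centre untouched
```

Choosing a realistic surround luminance matters: the filled region contributes to the adaptation level, so padding with black (or leaving garbage pixels) will bias the glare metrics.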
On Mon, Feb 6, 2012 at 9:15 PM, Ery Djunaedy >> <email@example.com>wrote:
I have a bunch of HDR images generated from photographs taken with regular
-- non-fisheye -- lens. Is there a way that we can use these images for
glare analysis in evalglare? The evalglare description says that it only
accepts fisheye images. There is a tutorial from Harvard GSD (Doyle and
Reinhart) that shows a non-fisheye photo in evalglare:
I am wondering whether this can be done at all.
Radiance-general mailing list
Dr.-Ing. Jan Wienold
Head of Team Passive Systems and Daylighting
Fraunhofer-Institut für Solare Energiesysteme
Thermal Systems and Buildings
Heidenhofstr. 2, 79110 Freiburg, Germany
Phone: +49(0)761 4588 5133 Fax:+49(0)761 4588 9133
In office: Mo,Tue: 8:30-18:00