I have a question about the “-A” option of evalglare. What is the format of the “maskfile”, and how can I produce such a file? If I want to get the luminance of a specific area, such as a window, how do I make the rest of the image black?
I produced a maskfile in Photoshop as follows:
I don’t know whether this approach is accurate, and I hope someone can share better methods. Thanks a lot!
For what reason do you want to apply the masking?
If you want to “crop” your image (because the CCD array is larger than the diameter of the projection of your fish-eye lens, so you have a black border), then you should use pcompos for that. (Take care of the header afterwards, because the exposure and view strings become invalid; how to deal with that is described in the tutorial paper.)
The masking function in evalglare was included to enable a zonal evaluation (e.g. to calculate an average window luminance, or the median luminance value of a zone). It does not affect the glare-source detection itself (first glare-source detection, then masking). That means the glare-metric calculation is still done on the full image; the masking just adds a statistical evaluation of the non-masked area.
At the radiance workshop in Padova 2016 I showed how to apply it:
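If you prefer a scripted alternative to Photoshop, a mask image can also be generated programmatically and then converted to a Radiance picture. Here is a minimal sketch in Python; the image size, the rectangular zone, the file names, and the `ra_ppm -r` conversion step are my own assumptions for illustration, so adapt the geometry to your actual zone and verify the conversion and the expected mask polarity locally:

```python
import numpy as np

# Build a binary mask: white (255) marks the zone to evaluate,
# black (0) everything else. Any irregular shape can be drawn here;
# a rectangle is used only as a placeholder for a window zone.
size = 512                                  # match your HDR resolution
mask = np.zeros((size, size), dtype=np.uint8)
mask[100:300, 150:400] = 255                # hypothetical window region

# Write the mask as a binary PPM (P6). It can then be converted to a
# Radiance picture, e.g. with "ra_ppm -r mask.ppm mask.hdr".
with open("mask.ppm", "wb") as f:
    f.write(b"P6\n%d %d\n255\n" % (size, size))
    f.write(np.repeat(mask[:, :, None], 3, axis=2).tobytes())
```

The same idea works with any drawing library that can rasterize polygons, which is convenient for user-defined irregular areas.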
Thanks for your reply!
I just want to get the luminance of specific areas, such as windows. I know that evalglare provides several zonal evaluations; however, those zones are regular shapes and limited. The reason for using masking is that some of the areas I analyze are irregular and user-defined.
As for the file, it helps a lot, because its method of making a masking file is similar to mine and it supplies more details. Moreover, I have found an open-source tool, “hdrscope”, which can process and analyze HDR images.
Yes, this is a nice tool. I’m just not sure whether they ship an outdated evalglare version - if so, make sure you replace it with the current one. The old version has some bugs and fewer safety checks (and is slower). The new version should be backwards compatible.
Just two more notes: As far as I know, the process there misses the projection-method adaptation; you have to do that separately.
As far as I know, the average calculation of an area in that tool is based on the average pixel luminance and does not consider the difference in solid angle per pixel, which is necessary to conserve energy. For an angular projection method you can have a 20% deviation of the solid angle between pixels in the image, which means in the worst case you end up with that deviation in your average-luminance calculation. For typical images the error might be much smaller, but already with a vertical window where you partly see a dark ground and, in the upper part, a bright sky, this can lead to a 5-10% deviation, which is an unnecessary addition to the uncertainty that exists anyhow. So for a quick check that simplification is OK, but I would not use it in a scientific publication.
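To illustrate the effect, here is a small sketch of my own (not part of any tool) that compares a plain pixel average with a solid-angle-weighted average for an angular (equidistant) fisheye projection. In that projection theta = c*r, so the solid angle covered by a pixel is proportional to sin(theta)/theta; the scene values and mask geometry below are made up for demonstration:

```python
import numpy as np

def solid_angle_weights(size):
    """Relative solid angle per pixel for an angular (equidistant)
    180-degree fisheye image of size x size pixels. With theta = c*r,
    the solid angle per pixel area is proportional to
    sin(theta)/theta (limit 1 at the image center)."""
    c = (np.pi / 2) / (size / 2)            # radians per pixel of radius
    y, x = np.mgrid[0:size, 0:size]
    r = np.hypot(x - size / 2 + 0.5, y - size / 2 + 0.5)
    theta = c * r
    w = np.sin(theta) / np.maximum(theta, 1e-12)
    w[theta > np.pi / 2] = 0.0              # outside the fisheye circle
    return w

def masked_mean(lum, mask, weights=None):
    """Average luminance over the masked zone, optionally weighted."""
    w = mask if weights is None else mask * weights
    return float((lum * w).sum() / w.sum())

# Hypothetical scene: bright sky in the top quarter, dark ground
# elsewhere, and a mask covering a vertical 'window' zone that spans
# both regions (the case described above).
size = 512
lum = np.full((size, size), 50.0)           # ground luminance, cd/m2
lum[: size // 4, :] = 5000.0                # sky luminance
mask = np.zeros((size, size))
mask[size // 8 : size // 2, size // 4 : 3 * size // 4] = 1.0

w = solid_angle_weights(size)
naive = masked_mean(lum, mask)              # plain pixel average
weighted = masked_mean(lum, mask, w)        # energy-conserving average
```

Because the bright sky pixels in this mask lie farther from the image center, where each pixel covers less solid angle, the weighted average comes out lower than the plain pixel average.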