Test: HDR from Raspberry cam and Luminance meter values

Dear all,
I am new to the Radiance discourse group. I am Francesco, and I am looking into using a Raspberry Pi to capture HDR images. In particular, I came across the piHDR GitHub repository.
I used an LED panel whose white color temperature I can change. First, I measured the panel with a Konica Minolta luminance meter, so I know the luminance values across the panel on a 3x3 cm grid:

I then positioned the LED panel, with the Warm White setting, in front of the SainSmart Wide Angle Fish-Eye Camera connected to a Raspberry Pi 3 Model A+, at a distance of one meter in a completely dark room.
I modified exposurebracket.py from the piHDR GitHub repository because I want a very fast response from the camera, using the following code:

from fractions import Fraction  # needed for the fractional framerate below

camera.framerate = Fraction(1, 2)
camera.iso = 100
camera.exposure_mode = 'off'
camera.awb_mode = 'off'
#camera.awb_gains = (1.8, 1.8)
camera.awb_gains = (1.5, 1.5)
#camera.brightness = 40  # default is 50
# each uncommented exposure below is followed by a capture in the full script
# 0.8 s exposure
#camera.framerate = 1
#camera.shutter_speed = 800000
# 0.2 s exposure
camera.framerate = 5
camera.shutter_speed = 200000
# 0.05 s exposure
camera.framerate = 20
camera.shutter_speed = 50000
# 0.0125 s exposure
#camera.framerate = 30
#camera.shutter_speed = 12500
# 0.003125 s exposure
camera.shutter_speed = 3125
# 0.0008 s exposure
#camera.shutter_speed = 800
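
The shutter speeds above form a 2-stop ladder, each step 4x shorter than the previous one. A small helper to generate such a bracket (illustrative only, not part of piHDR):

```python
def exposure_bracket(longest_us=800_000, steps=5, factor=4):
    """Shutter speeds in microseconds for picamera, each `factor`x
    shorter than the previous (a 2-stop step when factor == 4)."""
    return [longest_us // factor**i for i in range(steps)]

print(exposure_bracket())  # [800000, 200000, 50000, 12500, 3125]
```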

I know that this could lead to some inaccuracy in the result. However, the three photos are as follows:

The .hdr is then generated using hdrgen, in line with run_hdrcapture.bsh:

Even though the Raspberry Pi allows me to create the falsecolor image according to the code in run_hdrcapture.bsh, I moved the .hdr file to a Windows PC and used wxFalsecolor.

If I compare the two points with coordinates (18,9) and (18,12) in the previous table with the two points above, I find a very large difference.
Considering that pcomb is normally used to correct for the vignetting factor, I thought I could use the same function to apply a uniform correction factor. Setting -s to 2.4 on a trial basis, I got the following result:
This is more consistent with the values I measured earlier with the luminance meter.
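
That uniform -s factor is just the ratio between the meter reading and the HDR-derived luminance at a reference point; a minimal sketch (the numbers are purely illustrative, not my measured values):

```python
def pcomb_s_factor(meter_cd_m2, hdr_cd_m2):
    """Uniform calibration factor to pass to pcomb -s: the luminance
    the meter reports divided by the luminance the .hdr reports."""
    return meter_cd_m2 / hdr_cd_m2

# e.g. meter reads 600 cd/m^2 where the .hdr reports 250 cd/m^2
print(pcomb_s_factor(600.0, 250.0))  # 2.4
```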
I repeated the steps, setting the colour of the LED to cool white:

  1. test with the luminance meter:
  2. generation of the .hdr:
  3. falsecolor without applying pcomb:
  4. falsecolor after applying pcomb with same value of -s previously identified:

The difference between 1) and 4) for the two points in this case is obvious: this is not the right approach.
So how can I reduce the differences between the values measured with the luminance meter and the values derived from the .hdr file recorded with the Pi camera, considering the different white color temperatures of the LED panel?

Hi Frank,

Last year, I was working with the same device for taking HDR pictures of a daylit space, and I got the same difference in my results. Looking for an explanation, I found that the Raspberry Pi's CMOS sensor is usually very sensitive because it is smaller than a CCD sensor. For this reason, to form the image correctly, the sensor increases its sensitivity to let more light in (in simple terms). As a result, when you read luminance values before calibrating the camera, you obtain higher values than you expect.

To solve this issue, you need to calibrate the camera using the procedure in this article (https://www.tandfonline.com/doi/full/10.1080/15502724.2019.1684319). In the procedure (section 2.5 of the article), the authors show that you need to use the pcomb function to apply the calibration factor, as you did. I remember that I got calibration factors around 0.1 (i.e., the luminance values obtained from the pictures were around ten times higher than those from the luminance meter). You need to calculate this calibration factor following the procedure and then apply it to all your pictures.
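
Applying the factor to every capture is easy to script; a sketch that only builds the pcomb command line (the factor and file name are illustrative):

```python
def pcomb_cmd(s_factor, hdr_in):
    """Command line that scales every pixel of hdr_in by s_factor;
    redirect stdout to the calibrated .hdr when you run it."""
    return ["pcomb", "-s", str(s_factor), hdr_in]

# e.g. run with: subprocess.run(pcomb_cmd(0.1, "capture_01.hdr"),
#                               stdout=open("capture_01_cal.hdr", "wb"))
print(pcomb_cmd(0.1, "capture_01.hdr"))
```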

To sum up, the difference between the luminance obtained with the luminance meter and that obtained with the Raspberry Pi's CMOS sensor is expected, due to the nature of the sensor.

Good luck with your measurements!

Thank you, Daniel, for your response.
I have already read the paper you linked; in fact, I inserted it as a hyperlink in my previous post. Anyway, I would like to know how to account for the difference in the coefficient due to the different color temperatures of the light.
Perhaps by also considering the -c factor, differentiated for r, g and b? I have not found more information on this factor.

Hi Frank,

I hadn’t noticed that the same article was hyperlinked in your first post; now I fully understand the problem you have. In my experience, I have only worked with this setup under natural light, and it worked well for the different scenes and tests that I did, as well as for other researchers (see links at the end). Perhaps you could do some testing by repeating the experiment with a CCD camera, but that will probably be difficult if you don't have one, given the cost. In spite of everything, it could be a limitation of the CMOS sensor's spectral range, which might be possible to solve with the -c factor, but other specialists in the area should confirm this.


LED illumination has a very “peaky” spectrum, with narrow bands of select wavelengths dominating the rest. While a good luminance probe that matches the CIE Y response curve (v_lambda) may give a reasonable estimate of luminance, deriving the same from a set of RGB camera filters is going to be a much greater challenge. And even if you were to calibrate the color space to match across white-point settings in the LED illuminator case, it probably would not transfer to other illumination conditions.
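
For reference, Radiance itself reduces floating-point RGB to luminance with one fixed weighting (179 lm/W efficacy times a v(λ)-like combination of the channels):

```python
def radiance_luminance(r, g, b):
    """Luminance in cd/m^2 from Radiance floating-point RGB,
    using Radiance's standard channel weights and 179 lm/W."""
    return 179.0 * (0.265 * r + 0.670 * g + 0.065 * b)

# radiance_luminance(1.0, 1.0, 1.0) is ~179.0 cd/m^2
```

A single weight set like this is only as good as the match between the camera's effective RGB sensitivities and the assumed primaries, which is exactly what a peaky LED spectrum undermines.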

Although I can’t recall the exact papers discussing this issue, if you look through some sources such as the Color Imaging Conference, you may find references discussing LED panel illumination. Perhaps someone has worked out a solution to this issue.


Thank you @Greg_Ward for your reply, and apologies for taking a few days to respond.
I know that LEDs have a "very spiky spectrum", but it is also true that in our homes in the evening, when the sun has gone down, LED lighting is now very common and, in some cases, is the only source of illumination.
The idea is to characterise the response of the low-cost camera by comparing its luminance values over a predefined range against a constant value monitored by a camera photometer, taking into account different light sources in a controlled environment (a sort of "darkroom" with only selected light sources). The LED panel is at the top, while at the bottom there is a cube in which different types of lamps with a classic E27 socket can be installed. Opposite are the low-cost camera and the professional DSLR. An example > here <. In this way, the S coefficient in pcomb can be determined for the different light sources. Then, with a spectroradiometer, it is also possible to "classify" the different types of light and "directly" assign the correct S because, from the previously described test, I know the proper values for different kinds of sources, "pure" or not…
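
In code, the idea would reduce to a lookup from the classified source type to its pcomb -s factor (source names and values below are purely illustrative, not measured):

```python
# Per-source calibration factors determined in the darkroom tests
# (values purely illustrative).
S_FACTORS = {
    "led_warm_white": 2.4,
    "led_cool_white": 1.8,
    "halogen": 1.1,
}

def s_for_source(source_type):
    """Return the pcomb -s factor for a classified light source."""
    return S_FACTORS[source_type]

print(s_for_source("led_warm_white"))  # 2.4
```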
I would be very grateful if you would tell me frankly what you think of this approach.

If you measure the light sources directly in this way, rather than reflections from surfaces illuminated by the light sources, then a single calibration factor between your photometer and the camera should hold for that light source. I’m not sure what you would use that for, however.


Hello Frank,
I read your post about luminance measurement using HDR capture. I want to ask you about the method of writing a Python script that calculates luminance in cd/m^2.

For more details on the findings of this research, please see this LinkedIn post, which summarises the full content and includes links to the ResearchGate or publisher's pages.