When displaying a rendered image that reproduces a real space on a monitor, I cannot accurately reproduce its luminance and color.

【Background】
I am currently a student majoring in architectural light environment at university.
In my research, I have found that subjects evaluate the light environment differently when observing the actual space than when a rendered image reproducing that space is presented on a monitor.
Therefore, in order to reproduce the actual space on the monitor, the colors must be converted to match the output device (monitor) in addition to the usual Radiance rendering.
In this process, I have referred to "Picture Perfect RGB Rendering Using Spectral Prefiltering and Sharp Color Primaries" by Greg Ward.

【Our color conversion steps】
1. Before rendering
 1-1. Measure all the materials in the room with a spectrophotometer under the CIE standard illuminant D65.
 1-2. Convert the materials to the sharp RGB space and render in Radiance.
2. After rendering
 2-1. Convert the rendered image, whose color information is in the sharp RGB space, to the monitor's color space (P3-D65).
 2-2. Simultaneously apply gamma correction to match the monitor settings.

The specific formulas are shown below.
【STEP 1-2: Convert to the sharp RGB color space】

Figure 1. An example of measured XYZ values under D65


Figure 2. Formula for converting measured XYZ to sharp XYZ
・Mcat: the transform matrix (XYZ -> RGB), the same as Mc in "Picture Perfect RGB Rendering Using Spectral Prefiltering and Sharp Color Primaries"
・Mcat^-1: the inverse of Mcat (RGB -> XYZ)
・Rsw, Gsw, Bsw: sharp RGB coordinates of the (D65) light-source white point
・Xsharp, Ysharp, Zsharp: XYZ values in the sharp RGB space
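The Figure 2 conversion can be sketched in a few lines of Python. This is my illustration, not part of the original post: I assume Mcat is the "Sharp" sensor matrix of Finlayson and Süsstrunk, which I believe corresponds to Mc in Ward's paper, and I take the D65 white point as XYZ = (0.9505, 1.0, 1.0891).

```python
# Sketch of the Figure 2 step: measured XYZ -> white-balanced sharp RGB.
# Assumption: MCAT is the "Sharp" chromatic adaptation matrix (Finlayson &
# Susstrunk), believed to match Mc in Ward's paper.

MCAT = [[ 0.8562,  0.3372, -0.1934],   # XYZ -> sharp RGB
        [-0.8360,  1.8327,  0.0033],
        [ 0.0357, -0.0469,  1.0112]]

D65_XYZ = (0.9505, 1.0, 1.0891)

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Rsw, Gsw, Bsw: sharp RGB coordinates of the D65 white point (Figure 2).
RSW, GSW, BSW = mat_vec(MCAT, D65_XYZ)

def xyz_to_sharp_rgb(xyz):
    """von Kries-style adaptation: sharpen, then divide by the white point."""
    r, g, b = mat_vec(MCAT, xyz)
    return (r / RSW, g / GSW, b / BSW)
```

By construction, feeding the light-source white itself through `xyz_to_sharp_rgb` returns (1, 1, 1), which is a quick sanity check for any measured-material conversion.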


Figure 3. Formula for calculating Radiance RGB from sharp XYZ

【STEP 1-2: Commands for rendering】
・mkpmap -apg ****.gpm 50k -t 10 ****.oct
・rpict -vp 1500 0 1000 -vd 0 500 0 -vv 120 -vh 120 -ap ****.gpm 500 -vu 0 0 1 -ab 1 -aa .1 -ar 4096 -ad 4096 -as 2048 -ds .02 -dc 0.17 -dt 0.05 -dj 0 -pj .5 -ps 1 -dp 2048 -x 800 -y 800 -t 10 > RenderedImage.hdr

The image created by the procedure in STEP 1 is shown here.


Figure 4. Image after rendering

【STEP 2-1/2-2: Color transformation and gamma correction commands】
pvalue -o -h -H -d RenderedImage.hdr
| rcalc -e '$1=(1.5540*$1-0.4456*$2-0.1084*$3)^0.454545; $2=(-0.0080*$1+1.0219*$2-0.0139*$3)^0.454545; $3=(-0.0105*$1-0.0274*$2+1.0377*$3)^0.454545;'
| rcalc -e '$1=(0.81563331*$1+0.047154779*$2+0.137216627*$3); $2=(0.379114399*$1+0.576942425*$2+0.04400087*$3); $3=(-0.012260137*$1+0.016743052*$2+0.99551876*$3);'
| pvalue -r -pXYZ -y 800 +x 800 -d -h -H -o | ra_xyze -r -p 0.680 0.320 0.265 0.690 0.150 0.060 0.3127 0.3290 > ConvertedImage.hdr
・The coefficients of the first rcalc come from Figures 5 and 6, converting the rendered image to the display's P3 color space.
・Gamma correction (exponent 1/2.2 = 0.454545) is applied at the same time to match the monitor setting.
・The coefficients of the second rcalc are Mc^-1, converting from sharp RGB back to XYZ.
・Since the monitor uses the P3 color space, the -p option (0.680 0.320 0.265 0.690 0.150 0.060 0.3127 0.3290) passed to ra_xyze gives its primaries.
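For reference, here is a per-pixel Python sketch of the STEP 2 matrix chain, using the coefficients from the two rcalc expressions above. This is my illustration, not part of the original commands, and it deliberately factors the 1/2.2 gamma out as a single final step so that the two matrix products stay linear; in the posted command the exponent is applied between the matrices, which yields a different result.

```python
# Per-pixel sketch of STEP 2-1/2-2 with the gamma encoding kept for last.

M1 = [[ 1.5540, -0.4456, -0.1084],   # first rcalc matrix (sharp RGB -> P3)
      [-0.0080,  1.0219, -0.0139],
      [-0.0105, -0.0274,  1.0377]]

M2 = [[ 0.81563331,  0.047154779, 0.137216627],   # second rcalc matrix (Mc^-1)
      [ 0.379114399, 0.576942425, 0.04400087 ],
      [-0.012260137, 0.016743052, 0.99551876 ]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def encode_gamma(rgb, gamma=2.2):
    """Display encoding: clamp negatives, then apply the 1/gamma exponent."""
    return tuple(max(c, 0.0) ** (1.0 / gamma) for c in rgb)

def convert_pixel(rgb):
    linear = mat_vec(M2, mat_vec(M1, rgb))   # keep all matrix math linear
    return encode_gamma(linear)              # encode once, at the very end
```

A quick sanity check: the rows of both matrices sum to roughly 1, so an equal-channel pixel such as (1, 1, 1) should come out of `convert_pixel` still close to (1, 1, 1).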

Figure 5. Equation used as a reference when converting to the display color space (P3)


Figure 6. Part of the matrix in Figure 5

・Rdw, Gdw, Bdw: RGB coordinates of the D65 white point
・Rcalc, Gcalc, Bcalc: per-pixel RGB values of the image
・Rdisp, Gdisp, Bdisp: RGB values sent to the monitor (display)
・MD: the display matrix; in this case adjusted to P3 as shown in Figure 5
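A display matrix such as MD can be derived from chromaticities alone, via the standard construction of an RGB -> XYZ matrix from the primaries and the white point. The following sketch (my addition, for illustration) uses the P3-D65 numbers passed to ra_xyze above.

```python
# Standard derivation of an RGB -> XYZ matrix from primaries + white point.

def xy_to_XYZ(x, y, Y=1.0):
    """Chromaticity (x, y) with luminance Y to tristimulus XYZ."""
    return (x * Y / y, Y, (1.0 - x - y) * Y / y)

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def inverse_3x3(m):
    """Inverse via the adjugate; fine for well-conditioned 3x3 matrices."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def rgb_to_xyz_matrix(rx, ry, gx, gy, bx, by, wx, wy):
    # Columns are the (unscaled) XYZ of each primary.
    cols = [xy_to_XYZ(rx, ry), xy_to_XYZ(gx, gy), xy_to_XYZ(bx, by)]
    prim = [[cols[j][i] for j in range(3)] for i in range(3)]
    # Scale each primary so that RGB = (1, 1, 1) reproduces the white point.
    s = mat_vec(inverse_3x3(prim), xy_to_XYZ(wx, wy))
    return [[prim[i][j] * s[j] for j in range(3)] for i in range(3)]

# P3 primaries and D65 white, as in the ra_xyze -p option above.
M = rgb_to_xyz_matrix(0.680, 0.320, 0.265, 0.690, 0.150, 0.060, 0.3127, 0.3290)
```

By construction each row of `M` sums to the corresponding D65 white XYZ component, so RGB (1, 1, 1) maps exactly to the display white; inverting `M` gives the XYZ -> display-RGB direction used in the pipeline.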

Through the color conversion in STEP 2, I obtained a converted image (Figure 7), but it is uneven (as Figure 8 shows, the artifact is easier to see in false color) and has a strong reddish tint, far from the actual space.


Figure 7. Image after color transformation

Figure 8. A false-color view of the converted image

It’s a long story, but my questions are as follows:

  1. Are the rendering procedures correct?
  2. Why is the image too red after conversion?
  3. What causes the unevenness (the circle on the wall) that appears after color conversion?
  4. Is the timing of gamma correction correct?

Thank you very much for your time.

The paper you cited addresses color accuracy, not appearance. If you have reason to believe that you are losing color accuracy due to the interaction of saturated colors or “spikey” light source spectra, then this is indeed a valid approach, though it is clear you have made some error(s) in the translation.

However, the first option I would try before such a complicated procedure would be pcond, whose purpose is to map the computed HDR (high dynamic range) image to a standard SDR (standard dynamic range) display. This is the more usual cause of visual discrepancies.

The algorithms underlying pcond are documented in this paper:

Larson, G.W., H. Rushmeier, C. Piatko, “A Visibility Matching Tone Reproduction Operator for High Dynamic Range Scenes,” IEEE Transactions on Visualization and Computer Graphics, Vol. 3, No. 4, December 1997.

Cheers,
-Greg

Hi Greg,

Thank you for your quick reply. The main purpose of my study is “to confirm the correspondence between the light environment perceived by a person when observing the real space and the image displayed on the monitor”.

With reference to your suggestions, I converted the image rendered in the sharp RGB color space shown in Fig. 4 with the following command.

pcond -h -p 0.680 0.320 0.265 0.690 0.150 0.060 0.3127 0.3290 -u 300 RenderedImage.hdr | ximage

Then an image like the one shown below was output, and it looked like the real space. The following new questions also arose.

  1. Given that this image correctly reflects human response, should the image input to pcond be rendered in a more general color space instead of sharp RGB?
  2. In that case, which color space is best to use?
  3. How should I handle gamma correction? (For displaying the image, I am currently looking for a better solution than ximage, and I plan to use HDR-compatible software such as Photoshop or Photosphere.)

Thank you for your cooperation.

The procedure you have is much too complicated, and includes gamma correction in the wrong places. This is usually handled by the image converters, such as ra_tiff. Radiance HDR images should always be linear, no gamma at all. This is no doubt part of your problem, but I really don’t want to spend the time debugging your implementation as I believe it is unnecessary to meet your overall requirement of reproducing the perceived environment. This has much more to do with luminance tone-mapping than color accuracy.

So, I recommend you render your scene using RGB colors in your monitor’s color space. Then, run pcond -h with no -p option and convert the result with ra_tiff or ra_bmp, or use ximage for display. From there, you can decide what needs adjusting.

Hi Greg

Thanks for the reply. And I’m sorry for making a difficult request to you.
As advised, I will first render in the color space of my monitor, and then reflect human responses with pcond etc. It is very helpful to have many commands!! I will continue to use Radiance as the best tool for lighting simulations.

Best regards and many thanks

No problem. It’s just that starting from an advanced color rendering method is very much “learning to swim by diving in the deep end.” Few people have even attempted this procedure, since it involves modifications to the input as well as the output, and not that many situations really call for it. That’s why I recommend starting with the usual approach to the problem and seeing where you need to go from there.

Best,
-Greg