specifying sources

hi there
i'm after a bit more advice about light sources than i can find in the documentation, and i was wondering if anyone had any suggestions...

firstly, i want to specify the brightness of my light sources accurately. i am using photos (hdr images assembled from several photos) to specify the output distribution of the source (theatre spotlights) and i know the "light output in lux" at various distances from the lens (from here: http://www.seleconlight.com/english/support/english/acclaim%20pc.pdf ) and i assume that they give the figure for the centre of the beam, rather than the average over the whole field.
what i'm stuck on is what figures i should use for the (rgb) brightness in the rad file description of the "light" material.

i need the light output of these sources to be physically accurate, because i want to compare them to models using ies-data described distributions.
i would like to be able to colour-correct these sources (i believe ies2rad will do this automatically) given the colour temperature of the bulb.

secondly i want to colour my lights, as though they had colour filters in front of the lens, and i don't really understand the colour space that radiance uses. can anyone point me towards any resources on this?
i want to model the colours of commercially available filters (such as this one: http://www.leefilters.com/LPFD.asp?PageID=248 ) but i can't find any RGB transmission data for them. is it possible to convert the XYZ values given on the data sheet to RGB?

thanks for your help again
will

Hi Will,

firstly, i want to specify the brightness of my light sources accurately. i am using photos (hdr images assembled from several photos) to specify the output distribution of the source (theatre spotlights) and i know the "light output in lux" at various distances from the lens (from here: http://www.seleconlight.com/english/support/english/acclaim%20pc.pdf ) and i assume that they give the figure for the centre of the beam, rather than the average over the whole field.
what i'm stuck on is what figures i should use for the (rgb) brightness in the rad file description of the "light" material.

I don't have a ready example to offer, but the basic idea is to apply the HDR capture as a pattern to your light source, indexing the image as a perspective projection:

void colorpict spot_dist
7 red green blue capture.hdr .
    (-Dx/Dz)/A1+.5 (Dy/Dz)/A1+.5
0
1 0.404026

spot_dist light spot_output
0
3 3881 3881 3881

The value given in the colorpict's first argument (A1) is the tangent of the subtended horizontal angle. The image (capture.hdr) is assumed to be square.
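For instance, if the capture covered a 22 degree horizontal field (a figure assumed here purely for illustration), you would use A1 = tan(22 degrees), roughly 0.404, which appears to be where the 0.404026 above comes from.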

To get the right output value for your lamp, you can create a scene matching the lux measurement you have and render it in rvu with the -i option, having it report the illuminance for a surface at the same distance via the 'trace' command. Then, apply a correction factor equal to the ratio of the measured value to this one to your light color (3881 in the example above).
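
A rough sketch of the same check done non-interactively with rtrace -I instead of rvu (the scene file name, aiming direction and 3 m distance below are assumptions, not taken from the thread):

  # hypothetical calibration check: direct illuminance 3 m in front of a
  # spotlight aimed along -Z (adjust the point and surface normal to your scene)
  oconv spot.rad > spot.oct
  echo "0 0 -3  0 0 1" | rtrace -w -ab 0 -I+ spot.oct \
      | rcalc -e '$1=179*(0.265*$1+0.670*$2+0.065*$3)'
  # multiply the light's radiance (3881 above) by measured_lux / printed_lux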

i need the light output of these sources to be physically accurate, because i want to compare them to models using ies-data described distributions.
i would like to be able to colour-correct these sources (i believe ies2rad will do this automatically) given the colour temperature of the bulb.

Rendering with an RGB color isn't very accurate and won't look right unless you apply a more sophisticated spectral technique, such as the one described in the paper:

  Picture Perfect RGB Rendering Using Spectral Prefiltering and Sharp Color Primaries

secondly i want to colour my lights, as though they had colour filters in front of the lens, and i don't really understand the colour space that radiance uses. can anyone point me towards any resources on this?
i want to model the colours of commercially available filters (such as this one: http://www.leefilters.com/LPFD.asp?PageID=248 ) but i can't find any RGB transmission data for them. is it possible to convert the XYZ values given on the data sheet to RGB?

The following command will take XYZ values on its input (triplets) and produce Radiance RGB values on its output:

  rcalc -f ray/src/cal/xyz_rgb.cal -e '$1=R($1,$2,$3);$2=G($1,$2,$3);$3=B($1,$2,$3);'
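
For example, feeding it one hypothetical XYZ triplet (one triplet per input line):

  echo "0.48 0.71 0.13" | rcalc -f ray/src/cal/xyz_rgb.cal \
      -e '$1=R($1,$2,$3);$2=G($1,$2,$3);$3=B($1,$2,$3);'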

Expect to be disappointed by the results if you don't apply white-balancing afterwards.

-Greg

Hi
thanks for your suggestions greg, though i'm afraid i'm still having a bit of trouble getting to grips with this one!

I have my hdr capture sources working fine, and now i've got them scaled to emit a physically correct brightness.

but i'm still struggling with the colour rendering.
i have converted my CIE XYZ values to radiance RGB primaries, and rendered a test scene. the colour is a bit out from what the filter should be, but not wildly, i don't think.
i have read your paper though, and i would like to try the rendering in a different colour space, for the sake of comparison, and also to correct the white point.

converting to sharp RGB:
-i think i can just change some constants in the xyz_rgb.cal file (values for sharp RGB are given in Sharp.cal) to change the colour space.
-i assume that all colours (ie of materials as well as sources) have to be converted?
-the rendered picture will then be in the sharp RGB colour space, which ximage does not use, so every pixel in the image then has to be converted back to the radiance RGB primaries. is this right? if so, is there a utility to apply the transformation to a .pic file or do i need to write a script?

white point adjustment:
-presumably the white point that radiance uses is the same as most monitors (D65 i think) so i need to transform from the white point of the XYZ colour space to that of the sharp RGB (or radiance RGB) colour space, but only because of this change in colour space.
-can anyone start me off with how to apply the white point transformation? i assume it uses the vonKries.cal file, and that the inputs (initial and final white point chromaticities) are standard values for the colour spaces in question, but how is it used!?
-the XYZ data (and the xy coordinates) i have are given for both a C source (the CIE standard?) and for a tungsten lamp with colour temp 3200K, so i presume i should use the tungsten readings (as my lamps are tungsten).
-the sources i am modelling are tungsten, but i want an accurate representation of what the colour will look like, so presumably i need some sort of white point transformation to correct their slight orangeness, as the eye is quite good at correcting this, i believe. how should i specify this? i know the lamps' colour temperature (about 3200K).

and finally, how should i apply the colour to my sources?
-i have XYZ tristimulus and xy chromaticity coordinate data for the filters i want to use, but i'm not sure of the best way to apply it.
(data comes from eg. http://www.leefilters.com/LPFD.asp?PageID=248 )
-i could either use the calculated RGB values to scale the output from my source primitive (though would i have to take the Y value (reflectance/transmission depending on the context, i believe) into account, or is this done automatically in the conversion?). This is what i did for my first test run earlier (converting XYZ values to radiance RGB primaries, and then not using the Y value again), and the result seemed too bright for the filter i was using (i had 2 sources in the scene, one coloured, the other not, and they were both of similar brightness).
-or i could use the RGB values to define the transmission of a thin piece of glass, and place this in front of the source (as a real filter is).

i'm sorry to keep asking long questions, but thanks once again for the help; it's proved invaluable in building up my understanding of Radiance, and it's also really kindled my interest in the whole area!
will

Hi Will,

converting to sharp RGB:
-i think i can just change some constants in the xyz_rgb.cal file (values for sharp RGB are given in Sharp.cal) to change the colour space.

You can actually run the same command, inserting -f Sharp.cal after -f xyz_rgb.cal (putting it before would do nothing).
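
A sketch of the combined command (the Sharp constants in Sharp.cal take effect because the file is loaded after xyz_rgb.cal):

  rcalc -f ray/src/cal/xyz_rgb.cal -f Sharp.cal \
      -e '$1=R($1,$2,$3);$2=G($1,$2,$3);$3=B($1,$2,$3);'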

-i assume that all colours (ie of materials as well as sources) have to be converted?

Yes, in fact to get the full benefit of the method, you should start with spectral data and premultiply the spectral power and reflectance of each surface material if it's available.

-the rendered picture will then be in the sharp RGB colour space, which ximage does not use, so every pixel in the image then has to be converted back to the radiance RGB primaries. is this right? if so, is there a utility to apply the transformation to a .pic file or do i need to write a script?

Here is one of the missing bits in the process. Radiance doesn't care what color space it renders in, but the default of downstream picture tools is to use the standard primaries defined in color.h if none are specified in the picture file. Therefore, it is best if you specify the correct color primaries following your rendering using a script like so:

#!/bin/csh -f
# Add Sharp color primaries to Radiance picture header
foreach f ($*)
  getinfo < $f > tf$$
  ed - tf$$ << '_EOF_'
/^PRIMARIES=/d
/^FORMAT=/i
PRIMARIES= .6898 .3206 .0736 .9003 .1166 .0374 .3333 .3333
.
w
q
'_EOF_'
  getinfo - < $f >> tf$$
  mv tf$$ $f
end

You can accomplish the same thing manually using the "vinfo" script that comes with Radiance, but the above lets you convert a set of pictures more quickly and conveniently. I would call it addSharp or something like that.

Once your Radiance picture has the right primaries installed in its header, some but not all of the Radiance tools will know what to do with them. To convert the image back to the standard Radiance color space, you can use:

  ra_xyze -r sharp.pic standard.pic

Obviously, this would also work with an XYZE input picture. The converters pcond, ra_bmp, and ra_tiff also understand the PRIMARIES= header line, as does Photosphere, which will display the images correctly without preconversion.
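
Putting these steps together, a minimal sketch of the round trip (file names are placeholders, and it assumes the header script above was saved as addSharp):

  rpict -vf view.vf scene.oct > sharp.pic   # rendered with Sharp-RGB inputs
  addSharp sharp.pic                        # record the primaries in the header
  ra_xyze -r sharp.pic standard.pic         # back to the standard colour space
  ximage standard.pic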

white point adjustment:
-presumably the white point that radiance uses is the same as most monitors (D65 i think) so i need to transform from the white point of the XYZ colour space to that of the sharp RGB (or radiance RGB) colour space, but only because of this change in colour space.

Radiance uses an equal-energy white (x,y)=(1/3,1/3) by default, and this is the default for the Sharp color space and XYZ as well, so no need to convert the white point for Radiance.

-can anyone start me off with how to apply the white point transformation? i assume it uses the vonKries.cal file, and that the inputs (initial and final white point chromaticities) are standard values for the colour spaces in question, but how is it used!?

Thankfully, ra_xyze and the other tools do a vonKries-style white point conversion for you, so you needn't worry about this bit.

-the XYZ data (and the xy coordinates) i have are given for both a C source (the CIE standard?) and for a tungsten lamp with colour temp 3200K, so i presume i should use the tungsten readings (as my lamps are tungsten).

Yes, indeed.

-the sources i am modelling are tungsten, but i want an accurate representation of what the colour will look like, so presumably i need some sort of white point transformation to correct their slight orangeness, as the eye is quite good at correcting this, i believe. how should i specify this? i know the lamps' colour temperature (about 3200K).

The CIE (x,y) chromaticity for a 3200K black body is (0.4234,0.3990). You can use the formula in blackbody.cal to convert color temperature to spectral power, and then the CIE standard observers to get to XYZ.

and finally, how should i apply the colour to my sources?

You can premultiply the illuminant and reflectance values (using Sharp RGB) and put neutral light sources for your dominant source (tungsten 3200K). This is spelled out towards the end of section 2 in the EGWR paper.

-i have XYZ tristimulus and xy chromaticity coordinate data for the filters i want to use, but i'm not sure of the best way to apply it.
(data comes from eg. http://www.leefilters.com/LPFD.asp?PageID=248 )

Convert to Sharp RGB and multiply this value if you don't have spectral data. If you have spectral data for the filters, you are better off using that. (I see they have a plot on their website, so maybe they'll give you data as well.)

-i could either use the calculated RGB values to scale the output from my source primitive (though would i have to take the Y value (reflectance/transmission depending on the context, i believe) into account, or is this done automatically in the conversion?). This is what i did for my first test run earlier (converting XYZ values to radiance RGB primaries, and then not using the Y value again), and the result seemed too bright for the filter i was using (i had 2 sources in the scene, one coloured, the other not, and they were both of similar brightness).

Luminance should pass through the conversion with minimal changes, but this is only really guaranteed if you use properly normalized spectra rather than RGB multiplication.

-or i could use the RGB values to define the transmission of a thin piece of glass, and place this in front of the source (as a real filter is).

The net result would be (nearly) the same, but the glass would be more expensive to compute.

-Greg

greg - forgive me if i'm being a bit slow with this, but i've a few more questions!

so i think i understand the conversion to sharp RGB now. though i was under the impression that the sharp RGB colour space used the D65 white point, not the equal energy 1/3, 1/3 point (which is D50 i think?). guess i was wrong!

where i'm getting stuck is with specifying the sources.

1. using blackbody.cal:
-should i use this to calculate the emitted power for a selection of lambda values (presumably using a separate script to input the lambda values)? if so, how do i then go about converting the results into XYZ values, so that i can then convert to the sharp rgb colour space? (presumably illumcal.csh uses the power vs. lambda data and CIE standard observers, but i don't follow how to use it! also, does it produce RGB or XYZ values?)
-or should i use it to calculate the chromaticity (x,y) coordinates? if so, can i then simply use rgb.cal (if this is what its equations do!) to convert the (x,y) chromaticity results to RGB values, ignoring the Y, "brightness", value?
-currently (ie before i apply any colour filters) my sources are simply white; i have ignored their colour temperature. when applying the colour filters i was going to use the XYZ data given for a 3200K source, rather than that measured with the C source. i was hoping that this would mean i do not have to add the effect of the colour of the source as well, is that correct? if not, then how do i model both the effect of the lamp's colour temperature and the colour filter in my scene file, while maintaining the correct physical light output of the fixture? do i simply multiply the 3 (corrected intensity) output values by the colour temperature values, and then again by the colour filter values? and should i just worry about the R:G:B ratio for each of these, or do i need to multiply by the actual, calculated values?
would it be better to use several primitives (ie one for the colour temperature, one for the colour filter) to modify the colour of the source, rather than doing lots of multiplications beforehand?

2.using neutral light sources
-i think i understand the principles of what you said in your paper, but i don't really follow how to implement this!
my scene has only one source in it, so this is what i understand i should do:
once i have calculated the sharp RGB values of the output of my source i should divide every material colour by the source colour (individually for r, g, b) and replace the source with a white one. should this white source be normalised to a value of 1 1 1, and the intensity of the source be taken into account in the premultiplication above, or should the intensity of the source be left out of these premultiplications?
how do i then regain the colour information after the rendering? is it as simple as postmultiplying every pixel in the picture by the inverse of the appropriate r, g or b value used in the pre-multiplication?

thanks again, and sorry for the barrage of questions!
will

Hi Will,

1. using blackbody.cal:
-should i use this to calculate the emitted power for a selection of lambda values (presumably using a separate script to input the lambda values)? if so, how do i then go about converting the results into XYZ values, so that i can then convert to the sharp rgb colour space? (presumably illumcal.csh uses the power vs. lambda data and CIE standard observers, but i don't follow how to use it! also, does it produce RGB or XYZ values?)

I gave you the result for a 3200K black body. To compute the same for other temperatures, illumcal.csh should work. I get weird results for the correlated color temperature, and I think there is something wrong with that part of the calculation, but the CIE (x,y) values seem OK. If I run:

( echo '# Black Body at 3200K' ; cnt 100 \
    | rcalc -e 'lambda=(780-350)/99*$1+350' -f blackbody.cal \
        -e temp:3200 -e '$1=lambda;$2=u(lambda,temp)' ) > bb3200.dat
csh -f illumcal.csh bb3200.dat

I get nearly the same results as before.

-or should i use it to calculate the chromaticity (x,y) coordinates? if so, can i then simply use rgb.cal (if this is what its equations do!) to convert the (x,y) chromaticity results to RGB values, ignoring the Y, "brightness", value?

CIE (x,y) chromaticity coordinates can be converted to CIE XYZ if you have a Y value using:

X = x/y*Y;
Z = (1-x-y)/y*Y;

Similarly, you can get CIE (x,y) from XYZ using:

x = X/(X+Y+Z);
y = Y/(X+Y+Z);
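
As a quick worked example with the 3200K chromaticity given earlier, taking Y = 1 as a relative value:

  echo "0.4234 0.3990 1" \
      | rcalc -e 'x=$1;y=$2;Yv=$3' -e '$1=x/y*Yv;$2=Yv;$3=(1-x-y)/y*Yv'
  # gives approximately  X=1.061  Y=1.000  Z=0.445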

-currently (ie before i apply any colour filters) my sources are simply white; i have ignored their colour temperature. when applying the colour filters i was going to use the XYZ data given for a 3200K source, rather than that measured with the C source. i was hoping that this would mean i do not have to add the effect of the colour of the source as well, is that correct?

Yes, that is correct. It also means that they should have done you the favor of premultiplying the source and filter spectra, so you don't have to.

2.using neutral light sources
-i think i understand the principles of what you said in your paper, but i don't really follow how to implement this!
my scene has only one source in it, so this is what i understand i should do:
once i have calculated the sharp RGB values of the output of my source i should divide every material colour by the source colour (individually for r, g, b) and replace the source with a white one. should this white source be normalised to a value of 1 1 1, and the intensity of the source be taken into account in the premultiplication above, or should the intensity of the source be left out of these premultiplications?

You should premultiply, not divide, your source and scene colors, and preferably do this over each wavelength of the visible spectrum. If you don't have spectral data, then yes, you will just be multiplying the (Sharp) RGB values. The intensity of the source should not be taken into the scene colors, so that you don't have reflectances greater than 1, which would be a disaster.
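
A minimal sketch of what that looks like in a scene description (the normalised Sharp RGB source colour s = (1.00, 0.83, 0.55) below is a made-up placeholder, not a real 3200K value):

# neutral source: only the intensity stays on the light itself
void light spot_neutral
0
0
3 3881 3881 3881

# wall reflectance (0.9 0.8 0.8) premultiplied component-wise by s
void plastic wall_premult
0
0
5 0.90 0.66 0.44 0 0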

how do i then regain the colour information after the rendering? is it as simple as postmultiplying every pixel in the picture by the inverse of the appropriate r, g or b value used in the pre-multiplication?

You don't need to "regain" the color information, as you have effectively accomplished a vonKries white balance during rendering.

-Greg

greg
i think i'm starting to follow you now!
just a couple more things...

-converting radiance RGB to sharp RGB: all my material reflectances are in RGB (eg from the materials file in lib/) and i'm not sure how to convert them to sharp rgb. i thought i could use xyz_rgb.cal, by converting RGB to XYZ, then the XYZ values to sharp RGB. but i thought i'd check my idea by converting RGB to XYZ and then converting it back again, thinking it should be the same, and it's not.

-you say that a white point adjustment is effectively made during the rendering. is this just the shift needed to account for the different viewing conditions (ie being in the room, and looking at the monitor)?

-premultiplying surface reflectances. the RGB values i am getting for the XYZ data i have for the filters are much bigger than 1, so premultiplying will give reflectances much bigger than 1.
for example:

the RGB values for a certain red filter = 55, -3.6, -0.7 (from X,Y,Z=21.3, 9.1, 0) [i thought the range for these values was (0:inf)?]

the RGB reflectance for a white painted wall = 0.9, 0.8, 0.8

so the compensated RGB reflectances would be 49.5, -2.88, -0.56 (ie. 55*0.9 etc). but these should all be between 0 and 1, shouldn't they?

should i simply normalise the filter's RGB values to sum to 1, or am i missing something more important here?

thanks again
will

Greg
i have been thinking a bit more about this idea of rendering with coloured light sources, and i've realised i'm still a bit confused!

where i get lost is why we should get better accuracy by using a white source, and modelling the colour we want the source to have into the reflection calculation from the surfaces in the scene.

in your paper you say this avoids a colour cast in the rendered image, but why is this? presumably it wouldn't be that difficult to apply a white-point shift to the rendered image to correct this colour cast?

but as well as this colour cast, your results show quite a dramatic difference in accuracy between the naive and premultiplied renderings.
i appreciate that you probably don't have time to go into a detailed physics lesson by email, but can you point me towards anywhere i can read up on why this is?

i want to render lots of images of the same scene, but to change the single light source each time. so far i have written a script which does this quite nicely (it turned out simpler than using rtcontrib, because of the way i wanted to structure the scene description), but i have only used white sources.
i want to be able to use colour filters with my sources, but if i do all the premultiplication i will have to do it for every different colour filter i want to use, which will complicate the automation of the process immensely, so i'm trying to think of simpler ways to do it!
i guess i might have to do without, and just accept a slightly less accurate image.

thanks
will

Hi Will,

-converting radiance RGB to sharp RGB: all my material reflectances are in RGB (eg from the materials file in lib/) and i'm not sure how to convert them to sharp rgb. i thought i could use xyz_rgb.cal, by converting RGB to XYZ, then the XYZ values to sharp RGB. but i thought i'd check my idea by converting RGB to XYZ and then converting it back again, thinking it should be the same, and it's not.

This is the correct method. Converting to and from works for me:

% icalc xyz_rgb.cal
X(.351,.918,.015)
$1=0.480220278
Y(.351,.918,.015)
$2=0.709181111
Z(.351,.918,.015)
$3=0.134033796
R($1,$2,$3)
$4=0.351
G($1,$2,$3)
$5=0.918
B($1,$2,$3)
$6=0.015

You could have a problem if you go from XYZ through RGB to XYZ again, since RGB gets truncated to zero, where negative values do in fact exist for some visible XYZ colors.

-you say that a white point adjustment is effectively made during the rendering. is this just the shift needed to account for the different viewing conditions (ie being in the room, and looking at the monitor)?

No, it really assumes that you are viewing under the same tungsten illuminant. To complete the white point conversion to some other adaptation, you need to apply the final matrix equation in Section 2 of the paper. This can be done using ra_xyze and the -p option if you happen to know your display's primaries, e.g.:

  % ra_xyze -r -p .603 .352 .289 .590 .146 .066 .319 .348 sharp.pic display.pic

This does not account for the surround condition, but I don't know what to do for that, either.

-premultiplying surface reflectances. the RGB values i am getting for the XYZ data i have for the filters are much bigger than 1, so premultiplying will give reflectances much bigger than 1.
for example:

the RGB values for a certain red filter = 55, -3.6, -0.7 (from X,Y,Z=21.3, 9.1, 0) [i thought the range for these values was (0:inf)?]

Nope, they should all be between 0 and 1. Since they are not, you should assume these are percentages and divide them by 100. This still yields a negative value for green, but I get a Sharp RGB of (0.2614, -0.0143, 0.0035). I'm not sure how you got your values -- are they not Sharp RGB? The xyz_rgb.cal file should truncate the green value to zero. Your XYZ value seems to be right on the edge of the visible boundary. A Z value of 0 is highly unusual.
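
For reference, a sketch of that conversion as a pipeline (assuming, as earlier in the thread, that Sharp.cal is loaded on top of xyz_rgb.cal; whether the small negative green survives or is clamped to zero depends on the cal file):

  echo "21.3 9.1 0" \
      | rcalc -e '$1=$1/100;$2=$2/100;$3=$3/100' \
      | rcalc -f ray/src/cal/xyz_rgb.cal -f Sharp.cal \
          -e '$1=R($1,$2,$3);$2=G($1,$2,$3);$3=B($1,$2,$3);'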

the RGB reflectance for a white painted wall = 0.9, 0.8, 0.8

so the compensated RGB reflectances would be 49.5, -2.88, -0.56 (ie. 55*0.9 etc). but these should all be between 0 and 1, shouldn't they?

Yes, or nearly. Actually, you can have values slightly greater than 1 in some cases, but negative values won't even be storable in a Radiance picture unless you use some fancy means to render with rtrace and convert from RGB to XYZ prior to storing the picture.

should i simply normalise the filter's RGB values to sum to 1, or am i missing something more important here?

No, don't do that.

-Greg

where i get lost is why we should get better accuracy by using a white source, and modelling the colour we want the source to have into the reflection calculation from the surfaces in the scene.

It only gives you better accuracy if you premultiply the spectra. If you don't have spectral data, then it's the same as using the Sharp RGB color space, which will at least be an improvement over standard RGB (or XYZ, which is the worst).

in your paper you say this avoids a colour cast in the rendered image, but why is this? presumably it wouldn't be that difficult to apply a white-point shift to the rendered image to correct this colour cast?

That's what a white point correction does, but it does it in such a way that the colors still have close to the correct appearance. A naive normalization to some average gray does the wrong thing, and in some cases it is very wrong, indeed.

but as well as this colour cast, your results show quite a dramatic difference in accuracy between the naive and premultiplied renderings.
i appreciate that you probably don't have time to go into a detailed physics lesson by email, but can you point me towards anywhere i can read up on why this is?

Simply put, spectral peaks and troughs in the source and reflection spectra require more than the three dimensions of a standard color space to resolve. You only need three primaries to present (nearly) any color to the eye, since the eye only has three spectral sensitivity curves that it uses for color vision. (The rods represent a fourth sensitivity curve, but this does not seem to take part in distinguishing colors.) However, you theoretically need an infinite number of spectral samples to exactly simulate light interreflection. You also need an infinite number of rays, and we get by with fewer. As a practical matter, most rendering systems use only three colors, and the paper we've been discussing shows how to best leverage this machinery.

For a detailed description of color in computer graphics, Andrew Glassner's "Principles of Digital Image Synthesis" is an excellent reference. Roy Hall has a classic text on the subject as well, but I believe it's out of print.

i want to render lots of images of the same scene, but to change the single light source each time. so far i have written a script which does this quite nicely (it turned out simpler than using rtcontrib, because of the way i wanted to structure the scene description), but i have only used white sources.
i want to be able to use colour filters with my sources, but if i do all the premultiplication i will have to do it for every different colour filter i want to use, which will complicate the automation of the process immensely, so i'm trying to think of simpler ways to do it!
i guess i might have to do without, and just accept a slightly less accurate image.

As I said, it makes no difference to the result if you don't have spectral data whether you incorporate your source color into your surfaces or not. The result will be the same, so you might as well script it if you don't have spectra.

-Greg