I am trying to work out why the coefficients for converting r,g,b into a grey value are what they are:

L = 0.265R + 0.670G + 0.065B

I started off with the CIE 1931 horseshoe diagram, onto which I drew the Radiance colour gamut. The primaries are defined in src/common/color.h:

0.640 0.330 (red)

0.290 0.600 (green)

0.150 0.060 (blue)

0.333 0.333 (white)

http://luminance.londonmet.ac.uk/pickup/radiance_xy_primaries_with_lines_med.png
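For comparison, the coefficients can also be derived directly from these chromaticities, with no reference to dominant wavelengths: scale the three primaries so that they sum to the white point (equal-energy white, so XYZ = (1, 1, 1) at Y = 1), and the Y components of the scaled primaries are the luminous weights. A rough sketch in Python (the function names are my own, not Radiance's):

```python
# Derive luminance coefficients from the Radiance primaries
# (chromaticities as in src/common/color.h) and an equal-energy
# white point, via the standard RGB->XYZ matrix construction.

def xyz_column(x, y):
    """XYZ tristimulus values of a primary, scaled so that Y = 1."""
    return (x / y, 1.0, (1.0 - x - y) / y)

def solve3(m, v):
    """Solve the 3x3 linear system m * s = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    def repl(col):
        return [[v[r] if c == col else m[r][c] for c in range(3)]
                for r in range(3)]
    d = det(m)
    return [det(repl(c)) / d for c in range(3)]

r = xyz_column(0.640, 0.330)
g = xyz_column(0.290, 0.600)
b = xyz_column(0.150, 0.060)

# Equal-energy white (x = y = 1/3) has XYZ = (1, 1, 1) at Y = 1.
white = (1.0, 1.0, 1.0)

# Scalings S such that S_r*R + S_g*G + S_b*B = white.
m = [[r[i], g[i], b[i]] for i in range(3)]
s = solve3(m, white)

# Each column was scaled to Y = 1, so the scalings themselves are
# the Y row of the RGB->XYZ matrix, i.e. the luminance coefficients.
print("L = %.3fR + %.3fG + %.3fB" % (s[0], s[1], s[2]))
# prints: L = 0.265R + 0.670G + 0.065B
```

Run on the chromaticities above, this reproduces the Radiance coefficients to three decimal places.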

I then determined the dominant wavelength of each of the r,g,b primaries by projecting a line from the white point through the primary's x,y coordinates out to the perimeter of the horseshoe. I rounded each wavelength to the nearest 5 nm, because the data I had for the v(lambda) curve only has a 5 nm resolution:

http://luminance.londonmet.ac.uk/pickup/primaries_vlambda.png

The dominant wavelengths of the r,g,b primaries are drawn as arrows. The little coloured rings mark the weighting coefficients as used by Radiance.

Since the coefficients must satisfy r + g + b = 1, I normalised the v(lambda) values at the points where the little arrows hit the curve:

L = 0.326R + 0.635G + 0.039B
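In code, that normalisation step looks roughly like this. The dominant wavelengths (about 610, 550 and 460 nm) and the corresponding CIE 1924 v(lambda) samples below are illustrative readings from the plot, not exact values:

```python
# Normalise v(lambda) samples at the (approximate) dominant
# wavelengths of the primaries so that the weights sum to 1.
# The wavelengths and v(lambda) values here are illustrative.

vlambda = {
    "r": 0.503,  # V(610 nm)
    "g": 0.995,  # V(550 nm)
    "b": 0.060,  # V(460 nm)
}

total = sum(vlambda.values())
coeffs = {k: v / total for k, v in vlambda.items()}

print("L = %.3fR + %.3fG + %.3fB"
      % (coeffs["r"], coeffs["g"], coeffs["b"]))
# close to the L = 0.326R + 0.635G + 0.039B above
```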

The derived coefficients are qualitatively similar to the Radiance ones in the sense that b < r < g, but I would have liked the absolute values to be closer, too. Even though the methodology and drawings are a bit crude, I was hoping for a better match.

Can anybody suggest a better solution?

Thanks

Axel