I'm starting to use Radiance in research on visual perception of 3D shape. The .pic file format assigns each colour channel one byte, plus a common exponent for all three, resulting in about 1% precision. When testing people's ability to detect faint patterns, 1% isn't precise enough: humans can detect sine wave patterns at less than 1% contrast, but in the .pic format a 1% amplitude sine wave would come out more like a square wave.
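To make the quantization concrete, here's a small Python sketch (not Radiance code; it ignores the shared exponent, which doesn't change the step size for values near 0.5) counting how many distinct 8-bit levels a low-contrast grating survives as:

```python
import math

# An 8-bit mantissa gives steps of about 1/256 (~0.4% of full scale),
# so a faint sine grating collapses to a handful of discrete levels.
def quantized_levels(contrast, mean=0.5, n=64):
    amp = contrast * mean  # Michelson contrast -> sine amplitude
    grating = [mean + amp * math.sin(2 * math.pi * x / n) for x in range(n)]
    return len({int(v * 256) for v in grating})  # distinct 8-bit codes

print(quantized_levels(0.01))    # 1% contrast   -> 4
print(quantized_levels(0.002))   # 0.2% contrast -> 2, i.e. a square wave
```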

From a quick look through rpmain.c and rpict.c, it looks like the floating-point COLOR struct is used throughout the rendering calculations, and that the conversion to COLR is made at the very end, in fwritescan(), when writing the .pic file. This suggests that I could just replace fwritescan() with a routine that writes the file at a higher precision.
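For illustration, the idea would be something like this (a Python sketch of the concept only, not the actual C code around fwritescan()):

```python
import struct

# Sketch of the idea -- instead of quantizing each floating-point COLOR
# down to a 4-byte COLR, write the three raw floats per pixel directly.
def write_float_scanline(f, scanline):
    # scanline: iterable of (r, g, b) float triples
    for r, g, b in scanline:
        f.write(struct.pack("3f", r, g, b))
```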

My question is this: are there other factors in the rendering calculations (rounding, etc.) that limit the precision to around 1% anyway?

Thanks,

Richard

Hi Richard,

As long as you are not relying on textures that derive from images of any kind, there is no inherent limitation in the precision of the lighting calculations in Radiance -- everything is carried out in double-precision floating point. However, the accuracy of any Monte Carlo calculation depends on many factors, such as the number of samples, and if you don't pass the output to pfilt to filter the result, you are likely to have occasional samples with large discrepancies from their neighboring pixels.

Avoiding these variances requires turning off Monte Carlo sampling as much as possible, which probably requires no diffuse interreflection and only purely specular (polished) or diffuse surfaces. I can give you hints on how to do this if that is what you are after.

To get out a floating-point image without going into the 32-bit/pixel RGBE representation, you can use rtrace instead of rpict, like so:

% vwrays -vf view.vf -x 1024 -y 1024 -ff | rtrace -ff -h [options] octree > picture.flt

This will produce a raw floating-point image with 96 bits/pixel (RGB floats in scanline order). The size of the image can be determined by running "vwrays -vf view.vf -x 1024 -y 1024 -d"; it won't be 1024x1024 unless you give it a square view, since vwrays produces square pixels by default.
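If it helps, the raw output is trivial to parse; here's a minimal Python reader (a sketch: the function name read_flt is made up, and it assumes the file was written on the same machine, so native byte order):

```python
import struct

# Minimal reader for the raw float image written by "rtrace -ff":
# xres * yres pixels, 3 floats (R, G, B) each, in scanline order.
def read_flt(path, xres, yres):
    n = xres * yres * 3
    with open(path, "rb") as f:
        floats = struct.unpack("%df" % n, f.read(4 * n))
    # One (r, g, b) tuple per pixel.
    return [floats[i:i + 3] for i in range(0, n, 3)]
```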

-Greg

## ···

From: Richard Murray <rfmurray@sas.upenn.edu>

Date: February 24, 2005 2:26:24 PM PST


Date: Thu, 24 Feb 2005 19:03:12 -0800

From: Gregory J. Ward <gregoryjward@gmail.com>

Subject: Re: [Radiance-general] .pic precision

To: Radiance general discussion <radiance-general@radiance-online.org>

Hi Richard,

As long as you are not relying on textures that derive from images of

any kind, there is no inherent limitation in the precision of the

lighting calculations in Radiance -- everything is carried out in

double-precision floating point. However, the accuracy of any Monte

Carlo calculation is going to depend on a lot of different factors,

such as number of samples and so on, and if you don't pass the output

to pfilt to filter the result, you are likely to have samples with

occassional large discrepancies to their neighboring pixels.

Avoiding these variances requires turning off Monte Carlo sampling as

much as possible, which probably requires no diffuse interreflection

and only purely specular (polished) or diffuse surfaces. I can give

you hints on how to do this if that is what you are after.

Yes, that would be very helpful, thanks.

> % vwrays -vf view.vf -x 1024 -y 1024 -ff | rtrace -ff -h [options] octree > picture.flt

Exactly what I'm looking for. Thanks so much.

Richard

## ···


> Yes, that would be very helpful, thanks.

OK, let's start with a set of rendering (rtrace) options to use:

-dt 0 -dj 0 -sj 0 -ab 0

This turns off direct, specular, and diffuse interreflection sampling, so you won't see any MC noise from those. (Technically, the -dt 0 is not really necessary, as it uses a mostly deterministic algorithm, but I put it in there to be on the safe side.) Next, you need to eliminate sources of noise in your scene. This means sticking to simple materials like plastic, metal, glass, and light, and avoiding patterns and textures with random or complex behaviors. Certain ones will be OK, but explaining which and why would take too much space, so it's best to ask when you need one.
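One way to confirm the noise really is gone: render a region that should be uniform and check that its per-channel variance is essentially zero. A Python sketch over the raw "rtrace -ff" output (the helper name is mine, not a Radiance tool):

```python
import struct

# Estimate residual Monte Carlo noise in a raw "rtrace -ff" image by
# computing the variance of one channel over a region that should be
# uniform. With -dj 0 -sj 0 -ab 0 and simple materials it should be ~0.
def channel_variance(path, xres, yres, channel=0):
    n = xres * yres * 3
    with open(path, "rb") as f:
        vals = struct.unpack("%df" % n, f.read(4 * n))[channel::3]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```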

That should get you started well enough.

-Greg