Radiance accuracy: comparison with AGI32 V1.7

I did a quick comparison between the accuracy of Radiance and AGI32 V1.7 regarding interreflections. See

www.personal.psu.edu/mum13/agi_rad.pdf (which I posted one week ago)

and then the error analysis at


Calculation parameters for Radiance are

rtrace -ab 1/2/3/4/5 -ad 512 -as 256 -ar 5000 -ds .1 octree ...

The error drops to around 5 % at 3 ambient bounces.
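For clarity, the "error" here is the relative difference between the two programs' illuminance predictions, taking the AGI32 radiosity result as the reference. A minimal sketch of that computation (the illuminance values below are hypothetical placeholders, not the study's data):

```python
# Relative difference between Radiance and AGI32 illuminance predictions.
# All numeric values here are hypothetical placeholders, NOT the study's data.
def rel_diff_percent(e_radiance, e_agi32):
    """Percent difference, taking the AGI32 result as the reference value."""
    return abs(e_radiance - e_agi32) / e_agi32 * 100.0

# Hypothetical workplane illuminances (lux) for -ab 1..5:
radiance_runs = {1: 310.0, 2: 395.0, 3: 418.0, 4: 426.0, 5: 429.0}
agi32_value = 440.0

for ab, e in sorted(radiance_runs.items()):
    print(f"-ab {ab}: {rel_diff_percent(e, agi32_value):.1f}% difference")
```

With these placeholder numbers, the -ab 3 run lands at exactly 5.0%, mirroring the shape of the reported result.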

Martin Moeck, Penn State

Hi Martin,
I have a couple of questions about your study:
When you say that
"The error drops to around 5% at 3 ambient bounces."
do you mean the "difference" between Radiance and AGI32?
Did you calculate the analytical solution?
If you considered the AGI32 solution to be the right one, what about
changing the mesh density or the number of iterations:
does the solution converge to a unique value?
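The convergence question can be posed numerically: rerun the same scene at increasing mesh densities (or iteration counts) and check whether successive results settle down. A minimal sketch of such a check, with hypothetical illuminance values:

```python
# Simple convergence check: does a sequence of results from successively
# refined runs settle to a unique value? (Illustrative values only.)
def has_converged(results, tol=0.01):
    """True if the last two results differ by less than tol (relative)."""
    if len(results) < 2:
        return False
    prev, last = results[-2], results[-1]
    return abs(last - prev) / abs(prev) < tol

# Hypothetical illuminance results (lux) at increasing mesh density:
runs = [402.0, 431.0, 438.0, 439.5]
print(has_converged(runs))  # within 1% of the previous run?
```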

About the Radiance parameters you used:
"rtrace -ab 1/2/3/4/5 -ad 512 -as 256 -ar 5000 -ds .1 octree ..."
Did you test the scene while changing -aa and -ad? Which -aa value did
you use?
I personally would increase the settings for -ad, -as and -aa and reduce -ar
to keep a balance between speed and accuracy.
(-ar seems a little too high for your scene, compared to -ad.)
That trade-off is also interesting: time versus accuracy.

But at the end of the day: is your scene particularly general?
I am thinking about what John recently wrote: probably the problem is not to
compare numbers for just one scene, but to understand the general validity
of a method: its ability to solve a wide spectrum of problems (fabrics,
small windows, high numbers of interreflections, direct illumination,
penumbras... more scenarios).
The danger is (as he said) to test our ability to use the program rather than
the ability of the program itself to solve the problem.
I am really curious to read his new paper to see whether there is something new on this:

"Verification of Program Accuracy for Illuminance Modelling: Assumptions,
Methodology and an Examination of Conflicting Findings" (2004) Lighting Res.
Technol. (in press)



Electronic mail messages entering and leaving Arup business
systems are scanned for acceptability of content and viruses.

Interesting answer - see my response below.



-----Original Message-----
  From: Giulio Antonutto [mailto:[email protected]]
  Sent: Wed 7/7/2004 12:47 PM
  To: 'Radiance general discussion'
  Subject: RE: [Radiance-general] RE: Radiance accuracy: comparison with AGI32 V1.7
  I still think that you are speaking about differences and not error (even if you quote the relative error formula): you are comparing Radiance with AGI32, not Radiance with reality.
  Radiosity for luminous flux transfer in totally diffuse models, like the one modeled here, is "correct", if you are happy with a dense surface mesh. In that sense, AGI32 is correct, since it solves the radiosity matrix (almost), whereas programs using Monte Carlo techniques, such as Radiance, are estimation tools for complex problems. Therefore, Radiance is an excellent tool to help lighting folks estimate the lighting distribution in complex environments, not calculate it.
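As an aside, the radiosity matrix mentioned here amounts to the linear system B = E + rho F B for the surface radiosities B, given emission E, reflectances rho, and form factors F. A toy two-patch sketch of solving it by iteration (illustrative numbers, not AGI32's actual solver):

```python
# Toy radiosity solve: iterate B = E + rho * (F @ B) to convergence.
# Two diffuse patches facing each other; form factors are illustrative.
def solve_radiosity(emission, reflectance, form_factors, iters=100):
    n = len(emission)
    b = emission[:]                     # start from direct emission
    for _ in range(iters):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

emission    = [100.0, 0.0]             # patch 0 emits, patch 1 does not
reflectance = [0.5, 0.8]
F = [[0.0, 0.2],                       # F[i][j]: fraction leaving i hitting j
     [0.2, 0.0]]

print(solve_radiosity(emission, reflectance, F))
```

For this 2x2 system the iteration converges to the exact matrix solution, which is the sense in which a radiosity program can be called "correct" for a purely diffuse scene.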
  Regarding reality: Try to build a simple model with diffuse surfaces and one diffuse source for verification purposes. You can't do it unless you use an integrating sphere for the source. Try to make a surface black. Again, you can't do it unless you use caves or holes lined with black material. The effort is tremendous.
  In this sense, what about translucent fabrics or arbitrary BTDFs? In future you should also consider these materials in your comparison, to be exhaustive (**)...
  No time, no money.
  If you think about it, what does "accuracy" mean when you are not able to model the majority of real-life instances and are forced to assume a huge number of restrictions (purely Lambertian, omnidirectional...)?
  The majority of real-life instances have characteristics that need to be estimated, whereas simple models have characteristics that are mostly known. That makes accurate calculation of real-life instances impossible. Otherwise, we could predict the future.
  You may argue that the differences aren't so big... but as long as there are differences, the results remain a question of accuracy.
  Do you need a program that is less precise but more general or a program that is really precise but very specialized?
  You need both, and we have both, and we use them depending on the problem.
  About the comparison itself:
  I still think it would be worthwhile to try other Radiance settings, maybe quicker ones even if not as accurate as yours (you use a very high value of -ar while keeping the default -aa; not necessarily wrong or bad, just a little unusual to me).
  There is a formula to estimate the -ar value you need depending on the scene complexity (see RWR), although I am pretty sure that you know it already.
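If I recall the Rendering with Radiance relation correctly, -ar bounds the smallest detail the ambient cache can resolve, roughly smallest_detail = max_scene_dimension * aa / ar. A sketch of turning that around to pick -ar (stated from memory; check RWR for the exact formulation):

```python
# Estimate -ar from scene size and the smallest detail you want the ambient
# cache to resolve, following (from memory) the Rendering with Radiance
# relation: smallest_detail ~ max_scene_dimension * aa / ar.
def estimate_ar(max_scene_dim, aa, smallest_detail):
    return max_scene_dim * aa / smallest_detail

# e.g. a 10 m room, default -aa 0.1, resolving ~2 cm detail:
print(estimate_ar(10.0, 0.1, 0.02))  # -> 50.0
```

By the same relation, -ar 5000 with the default -aa in a room-sized scene would resolve sub-millimeter detail, which is why it looks unusually high here.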
  If you do not want to spend a huge amount of time, you can always do the calculations with rtrace, without rendering a picture (there is an example in RWR).
  These of course are only suggestions, this is your research.
  Again, these parameters are not very important for this comparison. The most important parameters for interreflection estimation are the number of ambient bounces -ab and the ambient divisions -ad.
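Since the ambient calculation is a Monte Carlo estimate, its noise should shrink roughly like 1/sqrt(N) as the sample count (what -ad controls) grows. A toy demonstration of that scaling on a known integral (not Radiance's actual sampler):

```python
import random

# Toy Monte Carlo estimator: mean of f(x) = x^2 over [0, 1] (exact: 1/3).
# Error falls off roughly as 1/sqrt(N), the same scaling that makes the
# sample count (-ad in Radiance) the knob for indirect-calculation noise.
def mc_mean(n, rng):
    return sum(rng.random() ** 2 for _ in range(n)) / n

rng = random.Random(0)                  # fixed seed for repeatability
for n in (64, 512, 4096):
    est = mc_mean(n, rng)
    print(f"N={n:5d}  estimate={est:.4f}  |error|={abs(est - 1/3):.4f}")
```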
  I agree with you (**) that an easy start consists of checking a program with simple scenarios in order to find the main limitations and bugs; I just believe Radiance has gone further than that, and it is time to define what it means to validate a program.
  Validation is just a comparison to known outcomes or results, isn't it? It does not really matter if they are complex or simple scenarios. However, we cannot validate very complex scenarios since we don't have enough accurate sensors and data.

