Radiance accuracy: comparison with AGI32 V1.7


see below.



-----Original Message-----
  From: Giulio Antonutto [mailto:Giulio.Antonutto@arup.com]
  Sent: Wed 7/7/2004 5:20 AM
  To: 'Radiance general discussion'
  Subject: [Radiance-general] RE: Radiance accuracy: comparison with AGI32 V1.7
  "Hi Martin,
  I have a couple of questions about your study:
  When you say that
  "The error drops to around 5 % at 3 ambient bounces. "
  do you mean "difference" between radiance and agi32?"
  Error calculation = 100*ABS(AGI_value - Radiance_value)/AGI_value
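As code, the quoted figure is the relative difference taken against the AGI32 value (the example numbers below are illustrative, not from the study):

```python
def pct_error(agi_value, radiance_value):
    """Relative difference, in percent, taking the AGI32 result as reference."""
    return 100.0 * abs(agi_value - radiance_value) / agi_value

# e.g. AGI32 = 19.4 lux vs. Radiance = 18.4 lux (illustrative numbers)
print(round(pct_error(19.4, 18.4), 1))  # → 5.2
```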
  "did you calculate the analytical solution?"
  The cube has a reflectance of 0.5. Approximating a cube as a hemisphere, the total reflected flux can be approximated as follows:
  Flux reflected = rho * Flux emitted * (1 + rho + rho^2 + ...) = Flux emitted * rho / (1 - rho)
  rho = 0.5
  flux emitted = flux of PAR lamp = 860 lumens
  Therefore, the total reflected flux is also 860 lumens. Dividing by the total interior area of the cube: illuminance due to reflected flux = 860 / (6*3*3) = 16 lux. The ceiling illuminance must be a bit higher than this, since the floor opposite the calculation points (with insignificant cosine reduction) contains the bright spot cast by the PAR lamp. We can calculate it more precisely using the ring formula:
  dE = PI D^2 L 2 r dr / ( D^2 + r^2 )^2
  PI = 3.14159
  D = 3.0 m
  L = luminance of rings on floor, incl. interreflections (11, 14, 18.5, 25, 33.5, 42.6, 48.6, 51, 53, 56, 58 cd/m^2, listed from the outermost ring inward)
  r = ring radius, from 1.45 m down to 0.45 m
  dr = 0.1 m = ring spacing
   This gives 19.4 lux, almost in line with the AGI values.
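The flux-balance estimate and the ring-formula sum above can be checked with a short script (a sketch; the pairing of the luminance values with the ring radii, outermost ring first, is inferred from the description):

```python
import math

# Flux balance for a closed diffuse enclosure with rho = 0.5
rho = 0.5
flux_emitted = 860.0                             # lumens, PAR lamp
flux_reflected = flux_emitted * rho / (1 - rho)  # geometric series rho + rho^2 + ...
area = 6 * 3 * 3                                 # interior area of a 3 m cube, m^2
E_avg = flux_reflected / area                    # average reflected illuminance

# Ring formula: dE = pi * D^2 * L * 2 r dr / (D^2 + r^2)^2
D, dr = 3.0, 0.1
L_rings = [11, 14, 18.5, 25, 33.5, 42.6, 48.6, 51, 53, 56, 58]  # cd/m^2, outermost first
radii = [1.45 - i * dr for i in range(len(L_rings))]            # 1.45 m down to 0.45 m
E_ceiling = sum(math.pi * D**2 * L * 2 * r * dr / (D**2 + r**2)**2
                for L, r in zip(L_rings, radii))

print(round(E_avg, 1), round(E_ceiling, 1))  # → 15.9 19.4
```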
  "If you considered the agi32 solution to be the right one, what about changing the mesh density or the number of iteration
  does the solution converge to a unique value?"
  Yes, it converges very quickly. No need to change the mesh density to anything higher. I tried it with very small patches, and it gave the same values.
  "About the Radiance parameters you used:
  "rtrace -ab 1/2/3/4/5 -ad 512 -as 256 -ar 5000 -ds .1 octree ..."
  did you test the scene while changing -aa and -ad? Which -aa value did you use?"
  Not necessary. You can estimate what -ad needs to be. The probability that a few of 512 rays will hit the bright PAR spot on the floor is very high, no matter from what point in the cube. If it were Ronchamp by Le Corbusier with small deep windows, even -ad 10000 would not do. How many rays would you need from one secluded corner in that church to hit all or most of the small windows? That is fairly easy to guess. Subdivide the hemisphere surrounding the point in the corner into patches half the size of the windows, and you can derive the number of rays from the number of patches.
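The patch-counting estimate can be sketched as follows (the window size and distance are hypothetical numbers chosen to illustrate the Ronchamp argument, not measurements of the actual building):

```python
import math

# Estimate the ambient divisions (-ad) needed so that rays from a point
# reliably sample small bright openings: subdivide the hemisphere into
# patches half the solid-angle size of a window and count the patches.
def ad_estimate(window_area_m2, distance_m):
    omega_window = window_area_m2 / distance_m**2  # window solid angle, sr
    patch = omega_window / 2.0                     # patches half the window size
    return math.ceil(2 * math.pi / patch)          # hemisphere = 2*pi sr

# e.g. a 0.3 m x 0.3 m window seen from 8 m away (hypothetical numbers)
print(ad_estimate(0.3 * 0.3, 8.0))  # ~9000: even -ad 10000 is borderline
```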
  "I personally would increase settings for -ad,-as,-aa and reduce -ar to keep a balance speed/accuracy.
  (-ar seems to me a little too high for your scene, compared to ad)
  That's also interesting: time vs accuracy."
  Yes, the solution with -ab 5 took about an hour, I think. -ar does not matter too much here.
  "But at the end of the day: is it your scene particularly general?"
  No. You might want to start with a scene that you can approximate with hand calculations.
  "I am thinking about what John recently wrote: probably the problem is not to compare numbers for just one scene, but to understand the general validity of a method: it's ability to solve a wide spectrum of problems (fabrics, small windows, high number of interreflections, direct illumination, penumbras.... more scenarios).
  The danger is to (as he said) test our ability to use the program rather the ability of the program itself to solve the problem.
  I am really curious to read his new paper to find out something new in this area:"
  Sure. You test the program for a variety of cases and find where it fails. You also have to use the Radiance parameters which, when increased further, give the same results as the previous setting. In general I find them to be -ab 3 -ad 4096 -as 2048 -ar 5000 -ds .1 -dr 3 if new users don't know which values to use. That is for a space with furniture and some small lights. The ambient value -av must be estimated very well, because -ab should be 5 as a general rule.
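As a command line, those starting-point parameters would look like this (scene.oct and points.txt are hypothetical file names; the -I+ irradiance switch and the rcalc conversion to lux are standard Radiance practice, not quoted from the original post):

```
rtrace -I+ -h -ab 3 -ad 4096 -as 2048 -ar 5000 -ds .1 -dr 3 scene.oct \
    < points.txt \
    | rcalc -e '$1=179*(0.265*$1+0.670*$2+0.065*$3)'
```

Each line of points.txt holds a calculation point and direction (x y z dx dy dz); rcalc weights the RGB irradiance components and scales by the 179 lm/W luminous efficacy to give illuminance.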
  Other than that, our ability to use a program and its output is proportional to our ability to estimate or calculate what the results should be, I think.


Electronic mail messages entering and leaving Arup business

systems are scanned for acceptability of content and viruses.

Hi Martin,

this is an interesting topic for me.

I'd like to make identical scenes in Lightscape and 3ds max (using both
radiosity and mental ray) for the sake of comparison.

Is it possible for you to send me (privately if you want) the IES file of
the lamp used in your tests? I am curious to see how these products
perform with the same data.


Pierre-Felix Breton
Lighting consultant

(and Discreet 3d R&D designer)


----- Original Message -----
From: "Martin Moeck" <MMoeck@engr.psu.edu>
To: "Radiance general discussion" <radiance-general@radiance-online.org>
Sent: Wednesday, July 07, 2004 1:20 PM
Subject: [Radiance-general] RE: Radiance accuracy: comparison with AGI32

Interesting answer - see my response below.


-----Original Message-----
From: Giulio Antonutto [mailto:Giulio.Antonutto@arup.com]
Sent: Wed 7/7/2004 12:47 PM
To: 'Radiance general discussion'
Subject: RE: [Radiance-general] RE: Radiance accuracy: comparison with AGI32 V1.7


I still think that you speak about differences and not error (even if you
quote the relative error formula): you are comparing Radiance with AGI, not
Radiance with Reality.

Radiosity for luminous flux transfer in totally diffuse models, such as the
one modeled here, is "correct", if you are happy with a dense surface mesh.
Insofar, AGI32 is correct, since it solves the radiosity matrix (almost),
whereas programs using Monte Carlo techniques, such as Radiance, are
estimation tools for complex problems. Therefore, Radiance is an excellent
tool to help lighting folks estimate the lighting distribution in complex
environments, not calculate it.

Regarding reality: try to build a simple model with diffuse surfaces and
one diffuse source for verification purposes. You can't do it unless you use
an integrating sphere for the source. Try to make a surface black. Again,
you can't do it unless you use caves or holes lined with black material. The
effort is tremendous.

In this sense, what about translucent fabrics or arbitrary BRDFs/BTDFs? In
future you should also consider these materials in your comparison, to be
exhaustive (**)...

No time, no money.

If you think about that, what does "accuracy" mean when you are not
able to model the majority of real-life instances and are forced to
assume a huge number of restrictions (purely Lambertian, omnidirectional...)?

The majority of real-life instances have characteristics that need to be
estimated, whereas simple models have characteristics that are mostly known.
That makes accurate calculation of real-life instances impossible.
Otherwise, we could predict the future.

You may argue that the differences aren't so big... but as long as there are
differences, the results are still a question of accuracy.

Do you need a program that is less precise but more general, or a program
that is really precise but very specialized?

You need both, and we have both, and we use them depending on the problem.

About the comparison itself:
I still think that it would be worth trying other Radiance settings, maybe
quicker even if not as accurate as yours (you use a very high value
of -ar, while you use the default -aa; not necessarily wrong or bad, just a
little unusual to me).

There is a formula to estimate the -ar value you need depending on the
scene complexity (see RWR), although I am pretty sure that you know it.

If you do not want to spend a huge amount of time, you can always do the
calcs with rtrace without a picture (there is an example in RWR).

These are of course only suggestions; this is your research.

Again, these parameters are not very important for this comparison. The
most important parameters for interreflection estimation are the number of
ambient bounces -ab and the ambient divisions -ad.

I agree with you (**) that an easy start consists of checking a program
with simple scenarios in order to find the main limitations and bugs; I just
believe Radiance has gone further than that, and it's time to define what
it means to validate a program.

Validation is just a comparison to known outcomes or results, isn't it? It
does not really matter whether they are complex or simple scenarios. However,
we cannot validate very complex scenarios, since we don't have sufficiently
accurate sensors and data.





