I still think you are speaking about differences, not errors (even if you
quote the relative error formula): you are comparing Radiance with AGI, not
Radiance with reality.
In this sense, what about translucent fabrics or arbitrary BRTDFs? In the
future you should also consider these materials in your comparison, to be
exhaustive.
If you think about it, what does "accuracy" mean when you cannot model the
majority of real-life cases and are forced to assume a huge number of
restrictions (purely Lambertian, omnidirectional...)?
You may argue that the differences aren't so big... but as long as there are
differences, the accuracy of the results remains in question.
Do you need a program that is less precise but more general, or one that is
really precise but very specialized?
About the comparison itself:
I still think it would be worthwhile to try other Radiance settings, maybe
quicker ones even if not as accurate as yours (you use a very high value of
-ar while keeping the default -aa; not necessarily wrong or bad, just a
little unusual to me).
There is a formula to estimate the -ar value you need depending on the scene
complexity (see RWR), though I am pretty sure you already know it.
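Just to sketch the rule of thumb I have in mind (my paraphrase of RWR, so treat the exact form and the example numbers as assumptions, not a definitive recipe): -ar effectively sets the minimum ambient-value spacing to roughly the maximum scene dimension divided by the ambient resolution, so you can back out a value of -ar from the smallest surface detail whose indirect illumination you need resolved:

```python
import math

def estimate_ar(scene_size, smallest_detail):
    """Rough -ar (ambient resolution) estimate, paraphrasing RWR.

    scene_size: maximum scene dimension (same units as smallest_detail).
    smallest_detail: smallest feature whose indirect illumination must
    be resolved. The minimum ambient-value spacing is about
    scene_size / ar, so we pick ar just large enough to cover it.
    """
    return math.ceil(scene_size / smallest_detail)

# Hypothetical example: a 30 m room where 0.3 m details matter
print(estimate_ar(30.0, 0.3))
```

With round numbers like these the estimate comes out at -ar 100, which is far below the very high value you are using; that is really my point about trying cheaper settings first.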
If you do not want to spend a huge amount of time, you can always do the
calculations with rtrace without rendering a picture (there is an example in
RWR).
These of course are only suggestions, this is your research.
I agree with you (**) that an easy start consists of checking a program
against simple scenarios in order to find the main limitations and bugs; I
just believe Radiance has gone further than that, and it is time to define
what it means to validate a program.
Of course this is true only if we want to move to the next step.
thanks for your quick reply,