testing framework (was: Deterministic calc...)

Greg Ward wrote:

Yes, I could see doing this. We would of course be accepting
differences between versions and machines to some tolerance, but it
should be sufficient for spotting any major regressions in the code or
between releases.

The Python-based testing framework that is already in CVS takes
this into account: it allows checking whether output values fall
within a certain range, instead of comparing against exact
values. If people can come up with good test cases, I'd be very
happy to add them to the official test suite.
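
As a rough sketch of the idea (the names here are made up for
illustration, not the framework's actual API), such a range
check, as opposed to an exact comparison, could look like this:

    # hypothetical illustration only -- not the framework's real code
    def close_enough(actual, expected, rel_tol=1e-4, abs_tol=1e-6):
        # accept small numeric drift between compilers and machines
        return abs(actual - expected) <= max(rel_tol * abs(expected),
                                             abs_tol)

    assert close_enough(0.100004, 0.1)   # within tolerance: passes
    assert not close_enough(0.11, 0.1)   # a real regression: fails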

My previous calls for test case contributions have mostly gone
unanswered, so I'm happy to hear that people are now starting to
understand the importance of this tool.

Don't worry about the testing and comparison machinery if you're
not familiar with Python. Just give me the test data, the
commands to perform, and the desired results, and I'll implement
the rest. There are a handful of tests already in place, but so
far those mainly serve to demonstrate that the framework actually
works as expected.
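
To make the shape of such a contribution concrete, here is a
hypothetical sketch in Python; the command, input data and
expected values below are placeholders I made up, not a real
Radiance invocation:

    # hypothetical test case shape -- placeholders throughout
    import subprocess

    def run_tool(cmd, stdin_text):
        # run one command on the test data, parse stdout as floats
        res = subprocess.run(cmd, input=stdin_text,
                             capture_output=True, text=True,
                             check=True)
        return [float(tok) for tok in res.stdout.split()]

    def test_example():
        got = run_tool(["somecalc", "-x"], "1 2 3\n")  # command + data
        want = [0.25, 0.5]                             # desired results
        assert len(got) == len(want)
        for g, w in zip(got, want):
            assert abs(g - w) <= 1e-4 * max(abs(w), 1.0)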

At the moment, the following programs get tested more or less
comprehensively: cnt, ev, genbeads, getinfo, histo, rlam, neaten,
ttyimage, and xform. I just updated the tests to take the new
executable names (lam->rlam, neat->neaten) into account; the
changes will be in tomorrow's nightly HEAD dump.

You can run the tests by executing the Python script
ray/test/run_all.py. The test suite assumes that Radiance was
built with scons. Check the exact requirements in the file
ray/test/README.txt.
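
For example, assuming you start from the directory that contains
the ray/ tree (the path below is a placeholder):

    cd /path/to/checkout        # wherever your CVS checkout lives
    python ray/test/run_all.py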

-schorsch

--
Georg Mischler -- simulations developer -- schorsch at schorsch com
schorsch.com -- lighting design tools -- http://www.schorsch.com/