Hi All,
This message is most relevant to Mac users, but introduces a change that may affect rpict and rtrace performance in future releases on all platforms. Specifically, I have taken out the NICE= macro settings from the Rmakefile in src/rt, so future builds will run all Radiance programs at the standard user priority.
I made a fortuitous discovery whilst playing around with the very cool (no pun intended) TemperatureMonitor freeware <http://www.bresink.com/osx/TemperatureMonitor.html> on my new PowerMac G5 quad. It's lots of fun to watch the processor temps go up and down with load. The funny thing I noticed was that changing the Energy Savings settings had an unexpected effect on temperatures, which are in turn related to power consumption and (presumably) processor performance.
The Energy Saver pane lists three processor performance settings: "Highest," "Automatic," and "Reduced." With the machine idle, there were only slight differences between the three; I had expected to see more, especially on the "Highest" setting... The biggest difference was between "Highest" and "Reduced" with all four processors busy, and that was no surprise. What was a surprise was that when I had four rpict jobs going in parallel, there was a big temperature difference between the "Highest" and "Automatic" settings. Apple's documentation states that the performance penalty for using "Automatic" should be very small, so I figured the temperature change would also be small. That's when I decided to measure actual performance.
What a difference! Between the "Highest" and "Automatic" settings, I noticed a performance difference of over 60%, where I expected to see little or none. What I did expect was a performance hit moving to the "Reduced" setting, which indeed there was, but it was only another 13% worse than "Automatic." This really seemed wrong to me. That's when I decided to start playing around with the "nice" level of rpict.
The "nice" setting on a process is a way to tell Unix that a job is not a top priority, and that interactive processes should be given more of the CPU if any are running. In the past, however, it hasn't much affected the performance of a process when there are no other jobs to contend with. I put nice() calls in Radiance way back when multi-user Unixes were the rule, and we had to coexist peacefully with a lot of other people who might get annoyed if I hogged all the system resources, as Radiance is wont to do.
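You can see the same mechanism from the shell with the standard nice utility; this is just a sketch of the idea (the increment of 4 is illustrative, not necessarily the value Radiance compiled in):

```shell
# Start a child job at a lowered scheduling priority, much as the
# compiled-in nice() call in rpict did. The child reports its own
# niceness by running "nice" with no arguments.
nice -n 4 sh -c 'echo "running at niceness $(nice)"'
```

Assuming the parent shell is at the default niceness of 0, the child reports 4, and the scheduler will favor other runnable processes over it.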
In short, I found that removing the nice() system call from rpict achieved the "Highest" performance figure even when the Energy Saver processor speed was set to "Automatic." (The "Reduced" performance was not affected.) Apparently, Mac OS X 10.4 (at least) uses a process's "nice" setting to decide whether or not to kick the CPU speed up a notch. This might be a new feature of the OS, as I don't have access to a machine running Jaguar or Panther to try it there.
The bottom line in all this is that if you own a Mac and you're running Radiance on it, be sure to use the "Highest" setting in the Energy Saver control panel; if you don't, your renderings may run 60% slower than they should. Option B is to remove the -DNICE= arguments from the build lines in src/rt/Rmakefile, remove the affected object files (rpmain.o and rtmain.o), then rerun "rmake install" in the rt directory. Or, download Saturday's HEAD release and build that. (I think I'm too late for tomorrow.) Then, you can leave the processor performance on "Automatic," which is the default for normal operation.
I wish I had made this discovery a couple of years ago....
-Greg