Radiance cross-platform issues & GUIs, oh my!

I'm curious how a system like Radiance could be fit into a set of C++
classes. There must be a way, and I'm not saying it's a bad idea,
but the general toolbox approach is standardized on a (preferably
small) set of stream data formats. I guess you would try to hook the
output of one object into the input of another, or something like
that. It seems feasible, at least in principle.

That was basically my thinking; the C++ streambuf template is a wonderful thing, and one might even get away with using the istrstream and ostrstream classes in many cases, preserving the existing model with very little effort. My second thought was, in places where operations can be serialized, to copy globals into class instance variables, thereby encapsulating the existing app in the class. This clearly won't work in all cases, but it might work in enough of them to be worth doing.

Just a quick comment on your idea of implementing a mini-shell within
Radiance's scene description language -- I assume you mean this to
take care of the "!command" lines in the input, correct? In any
case, you should not underestimate the difficulties in converting
even a small number of command-line tools into library calls, as the
general assumption of separate process spaces means the use of i/o
and globals is a big mess to clean up. Personally, I can't imagine
it being worth the effort, as there would be no real gain in
functionality, and worse maintenance headaches down the line.

If it turns out to be possible, I think there would be a substantial gain in the speed of reading scene description files, because of the reduction in process creation, especially on Windows, where processes are heavyweight. Even on Unix, creating two processes for a common operation like xform seems like a lot of overhead, especially when one of those processes is a large modern shell. I agree it would be harder to maintain--memory leaks and dangling pointers are a pain in a large app--but not that much harder; large applications manage this all the time, these days. Hopefully it could be done with a minimum of changes to the existing code. I would like to try, at least, provided I can find the time.

Taking a system like Radiance, which is based on the Unix toolbox model, and
turning it into a set of library routines, is not a weekend task. Do
you know of any examples of systems that have been successfully
converted in this way? I'd be interested to hear of any.

No, that I don't--anyone else? But it seems to me that this is at least a plausible approach, and one that preserves most of the existing code and model. I very much like the Unix toolbox model. Unfortunately, the current widely used UI technologies are hostile to that model and I would like to adapt Radiance to work with those technologies.

Anyone else have thoughts on this?

Please!

Randolph

Hi Randolph,

From a performance standpoint, the vast majority of time is spent on the ray-tracing part for most of what people do using Radiance. Hence, there's not much point in optimizing the loading of scene files or connecting up the various subordinate utilities in dynamic libraries. You simply won't save much over shelling out the commands and reading and writing the files or connecting up pipes (or whatever the Windows equivalent is). If 99% of the time is consumed by rpict or rtrace or rvu, why bother optimizing the rest, especially if it's a ton of work? You don't need dynamic libraries to create a GUI -- schorsch has done quite well using Rayfront to generate the necessary inputs and parameters for Radiance commands and running them as separate processes.

You also have to think about what kind of functionality you are trying to add with your GUI. A big reason Radiance is used in so many disciplines is thanks to the toolbox model, which allows you to combine programs in all sorts of ways the authors never intended. A GUI typically defeats this benefit, unless you follow a data flow model in your interface. Give the user a menu, take away 1000 opportunities. It makes the easy things easy, but the difficult things become impossible.

Judicious use of make and the oconv -f option ameliorates most of the pain of loading hierarchical Radiance scene descriptions, as the commands are only run the first time (or when the scene changes).

Having worked on both GUI applications (mostly Photosphere) and command-line tools, I know the programming paradigm is very different. Lots of things will get you in a monolithic application that simply were not a problem with a set of tools. Although I'm generally pretty good with memory, I don't usually free things in a tool when I know I'll need the memory until the process exits. What's worse, I will call exit(1) when something goes wrong, and error handling is generally much less robust in a tool environment, since individual processes are considered expendable. Photosphere has extensive error management compared to Radiance, and it's not something that's easy to add as an afterthought. You can play some games like "#define exit(s) my_return_jump(s)" using longjmp(3), but you end up with a real mess in terms of memory leaks and the like. (I've had to do this with the JPEG library, so I know.)

-Greg

From a performance standpoint, the vast majority of time is spent on the ray-tracing part for most of what people do using Radiance. Hence, there's not much point in optimizing the loading of scene files or connecting up the various subordinate utilities in dynamic libraries. You simply won't save much over shelling out the commands and reading and writing the files or connecting up pipes (or whatever the Windows equivalent is). If 99% of the time is consumed by rpict or rtrace or rvu, why bother optimizing the rest, especially if it's a ton of work? You don't need dynamic libraries to create a GUI -- schorsch has done quite well using Rayfront to generate the necessary inputs and parameters for Radiance commands and running them as separate processes.

I haven't seen Rayfront in a few years, so I don't know where it's gone; I have seen Ecotect, and it's cranky. And you're right, of course, that improving oconv doesn't matter very much to overall rendering time, but it does matter to the user experience; people complain about oconv delays, even though they're a small fraction of the total time it takes to use Radiance.

You also have to think about what kind of functionality you are trying to add with your GUI. A big reason Radiance is used in so many disciplines is thanks to the toolbox model, which allows you to combine programs in all sorts of ways the authors never intended. A GUI typically defeats this benefit, unless you follow a data flow model in your interface. Give the user a menu, take away 1000 opportunities. It makes the easy things easy, but the difficult things become impossible.

The intention is to maintain the existing toolbox, and also have a GUI.

Judicious use of make and the oconv -f option ameliorates most of the pain of loading hierarchical Radiance scene descriptions, as the commands are only run the first time (or when the scene changes).

Mmmm...I'm aiming at a different user base.

Having worked on both GUI applications (mostly Photosphere) and command-line tools, I know the programming paradigm is very different. Lots of things will get you in a monolithic application that simply were not a problem with a set of tools. Although I'm generally pretty good with memory, I don't usually free things in a tool when I know I'll need the memory until the process exits. What's worse, I will call exit(1) when something goes wrong, and error handling is generally much less robust in a tool environment, since individual processes are considered expendable. Photosphere has extensive error management compared to Radiance, and it's not something that's easy to add as an afterthought. You can play some games like "#define exit(s) my_return_jump(s)" using longjmp(3), but you end up with a real mess in terms of memory leaks and the like. (I've had to do this with the JPEG library, so I know.)

Ow. That's non-trivial. I suppose using garbage collection, and trapping the exit() calls with atexit(), would resolve most of these issues, but it would have to be looked at carefully. Ideally it would only be included in the wrapper code, so that the existing apps would continue to work as before. Maybe the Boehm/Demers/Weiser collector? Does anyone have experience with it?

Randolph

Ref: Boehm GC, <http://www.hpl.hp.com/personal/Hans_Boehm/gc/>