I didn't cross-post this to radiance-general...
> Georg Mischler wrote:
This just reminds me of another problem that we'll have to solve
in this context. Since Windows doesn't support NFS file locking
(and neither did cygwin, last time I looked), we'll need to find
a better solution for concurrent access to ambient files. I can
think of two portable ways to do this: Either we invent a file
based locking mechanism, or we establish a separate server
process that accepts network store and retrieval requests by the
actual simulation processes. The latter would be more technically
involved, but probably a lot more robust. Any thoughts?
Ewww. Can't we just say that if you want to do parallel rendering, you need to install Unix?
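
(That said, if we ever do want a portable fallback, a crude lock-file
scheme along these lines would avoid NFS locking entirely. This is
only a sketch -- the function names and the retry policy are
invented, and the unistd.h calls would need trivial substitutes on
Windows:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Try to take an advisory lock on an ambient file by creating
     * "<ambfile>.lock" exclusively; returns 0 on success, -1 on timeout. */
    static int
    acquire_lock(const char *ambfile)
    {
        char    lockname[1024];
        int     fd, tries;

        sprintf(lockname, "%s.lock", ambfile);
        for (tries = 0; tries < 60; tries++) {
            /* O_CREAT|O_EXCL fails if the lock file already exists */
            fd = open(lockname, O_WRONLY|O_CREAT|O_EXCL, 0666);
            if (fd >= 0) {
                close(fd);
                return 0;           /* lock acquired */
            }
            sleep(1);               /* someone else holds it; wait */
        }
        return -1;                  /* give up after a minute */
    }

    static void
    release_lock(const char *ambfile)
    {
        char    lockname[1024];

        sprintf(lockname, "%s.lock", ambfile);
        unlink(lockname);
    }

The usual caveat applies: if a process dies while holding the lock,
somebody has to remove the stale lock file by hand, which is one
reason the server-process idea is the more robust of the two.
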
> Autoconf...scares me. It's one of the most difficult scripting
> languages and it actively encourages #ifdef-laden code. Personally, I
> favor the Kernighan and Pike (*The Practice of Programming*) approach
> to portability; write the base code portably, and bottle up the OS
> dependencies in separate libraries and APIs.

I don't think that those two approaches are mutually exclusive.
Some more complex dependencies certainly belong in separate
modules with a thick layer of barbed wire around them. But there
are also many other small variations and bugs among different
systems with no clear borderlines between vendors, kernel and
library versions, etc. Keeping track of those without a tool like
autoconf is a real pain for both developers and users.

Have you had a look at the makeall script lately? This is
complexity that the user has to handle when something goes wrong.
Autoconf is generally a one-time effort that only needs to be
handled by one or two of the developers. Once that is done, the
trusty mantra of "./configure; make; make install" just magically
works on pretty much any system, whether its specific quirks have
been cataloged before or not. Not every user can grant Greg ssh
access to solve compile problems, and Greg probably wouldn't have
the time to do this for every user anyway.

The Radiance sources are currently littered with hundreds of
instances of preprocessor symbols referencing more than a dozen
individual operating systems. This more or less worked yesterday,
it's already breaking today on very current systems, and it's
guaranteed to break in the future, unless someone constantly keeps
a list of all the systems out there and their specific bugs and
other nonstandard behaviour. I will choose #ifdefs of the form
"HAS_<feature>" any time over the alternative of multiple nested
OS-specific conditionals in the same place.
OK, that's a bit of an exaggeration. I did a quick grep of the source tree, and I found 8 instances of system-dependent code in 6 files. These are typically limited to a #define or a declaration or two, and I expect that we could eliminate most of these with a little effort -- much less than it would take to change over to a HAS_<feature> sort of coding strategy. I've found that by doing some homework, it is usually possible to find a solution that works on all systems without any conditional compiles. The ones that are in there now are either necessary because there aren't any general solutions, or more likely, they could be eliminated. The least-common-denominator approach to portability is still the best in my opinion. You miss out on a few features on a few systems, but you get a more consistent result in the end (with cleaner code).
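
To make the two styles concrete -- the symbols below are invented
for illustration and are not taken from the actual sources:

    /* OS-specific style: every new platform needs another branch. */
    #if defined(_WIN32)
    #include <io.h>
    #elif defined(__linux__) || defined(__FreeBSD__) || defined(sgi)
    #include <unistd.h>
    #else
    /* ...and so on, for every system anyone ever compiles on */
    #endif

    /* Feature-test style: one symbol, set once per build (by hand or
     * by a configure script), with no list of operating systems. */
    #ifdef HAS_UNISTD_H
    #include <unistd.h>
    #endif
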
> Peter Apian-Bennewitz wrote:
> > However, that interface is not changed solely by prototyping functions.
> > Doing more than that risks new bugs- well, we'll get them out. Maybe
> > there's a core structure between just-prototypes and a full rewrite ?
>
> Hmmmm...Radiance plug-ins. Most Unices support some version of
> dynamic loading these days. Windows does. I don't think Plan 9 does...

Prototypes and the elimination of global variables will make the
*internal* interfaces of Radiance a lot clearer and more obvious
than they are right now. After that, it will be much easier to
isolate those parts that need to be changed to better accommodate
any present or future extensions, and the risk of breaking all
the rest when doing so will become much smaller.
I support this suggestion whole-heartedly. The global variables currently in use in the renderers could be eliminated with a single structure and a reference to it in the ray struct. In principle, the bulk of the renderer could then be consolidated into a library, which could be multi-threaded with a bit more work....
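
Roughly what I mean -- this is only a sketch, not the real ray.h,
and the struct and field names here are hypothetical:

    /* Bundle what are now global rendering options into one struct. */
    typedef struct renderstate {
        double  ambacc;             /* ambient accuracy */
        int     ambounce;           /* ambient bounces */
        double  dstrsrc;            /* source jitter */
        /* ...the rest of the current global render parameters... */
    } RENDERSTATE;

    typedef struct ray {
        /* ...existing ray fields: origin, direction, hit point, etc... */
        RENDERSTATE     *rs;        /* parameters this ray is traced with */
    } RAY;

Each ray then carries a pointer to the parameters it was traced
with, so two threads (or two clients of a renderer library) could
use different settings without stepping on each other.
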
> Now, I'm interested in ways to standardize the GUI API. In my
> opinion, it would be useful if we could customize ximage and rview to
> native OS conventions easily, perhaps by providing an OS specific
> library. It might also be useful to embed the core rendering tools in
> a dynamic loading environment. But, again, I don't know what it would
> take.

You're not the first one to think that thought.
In the end it won't really take a lot of effort, but only after
the above steps have been taken. Despite all its shortcomings,
the Windows version of rview already points in the right
direction here, by demonstrating approximately where the
interfaces between the simulation core and a display framework
should be placed. Unfortunately, the existing implementation is a
horrible mess, due to the difficulties of integrating the current
Radiance code on one hand, and some other obstacles the original
developers were facing on the other. I realize that most of you
haven't seen those sources yet, so you'll simply have to take my
word for it...
I haven't seen this code, either, and from what you say, I'm not sure I want to... I thought I had defined an interface for rview pretty well in rt/driver.h. This is where I began when I wrote different drivers. (There was a little-used NeWS driver at one time, as well as one for Suntools -- anyone remember those systems?) As for ximage, this program was meant to be replaced in its entirety, not built upon. I assume that's what they did for Windows, but I don't know. The programming interface for image display is the Radiance picture format!
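
For those who haven't looked at it, the flavor of that interface is
a small table of entry points that each display device fills in.
Very roughly -- this is a paraphrase from memory, not the literal
header, and rt/driver.h is the authoritative source:

    /* Paraphrased shape of a display-driver table; see rt/driver.h. */
    struct driver {
        void    (*close)();         /* shut down the device */
        void    (*clear)();         /* clear the display */
        void    (*paintr)();        /* paint a rectangle of pixels */
        int     (*getcur)();        /* get a cursor position from the user */
        void    (*comout)();        /* print a command-line message */
        void    (*comin)();         /* read a command from the user */
        void    (*flush)();         /* flush output to the display */
        int     xsiz, ysiz;         /* device resolution */
    };

A native Windows front end would implement those entry points with
Win32 calls; the rendering core never needs to know the difference.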