Radiance quality assurance suggestions

Let me here put in a plea and proposal for two particular coding practices that I believe will greatly ease the work of Radiance testing and debugging:

1. Use conditional compilation only in isolated source files, and not at all in header files.

Why? The more conditional compilation and platform dependence there is in the code, the more test development and testing is required. Each combination of compilation conditionals creates a different version of the program. If conditional compilation is scattered throughout the code, there is a combinatorial explosion of program versions--4 variables means 16 versions, 5 means 32, and so on. If we wish to claim that we have tested Radiance, we must either test all combinations of conditionals or specify the particular combinations we test. If conditional compilation is used only in isolated source files, those files can be thoroughly unit-tested, and we have some assurance that the whole of Radiance does not need to be thoroughly tested for each possible combination of conditionals.
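
A minimal sketch of what I have in mind (rad_msleep() is a made-up call, not an actual Radiance function): all the platform #ifdefs live in one small source file behind a neutral header, so only that file needs per-platform unit tests.

/* plat.h -- no conditionals here */
void rad_msleep(int msec);

/* plat.c -- the only file with conditional compilation */
#ifdef _WIN32
#include <windows.h>
void rad_msleep(int msec) { Sleep(msec); }
#else
#include <unistd.h>
void rad_msleep(int msec) { usleep(msec * 1000L); }
#endif

Everything else includes plat.h and never sees a conditional.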

2. Keep most of Radiance to ISO C90.

Why? The more platform dependencies are spread through the code, the more platforms need to be tested. Moreover, platforms themselves vary internally--thus, there are multiple variants of POSIX, BSD Unix, and MS-Windows, and each is a potential testing problem. The less platform-dependent code there is, the fewer places there are for unexpected bugs to appear on different platforms.

Randolph

Randolph Fritz wrote:

Let me here put in a plea and proposal for two particular coding practices that I believe will greatly ease the work of Radiance testing and debugging:

1. Use conditional compilation only in isolated source files, and not at all in header files.

My two cents: not scattering ifdefs through the code is certainly a good thing. IMHO all architecture-dependent ifdefs may be in one file (or the files of a library, including header files), effectively providing an abstraction layer between the different underlying architectures and Radiance. At least that worked on my other projects for non-time-critical calls.

2. Keep most of Radiance to ISO C90.

Why? The more platform dependencies are spread through the code, the more platforms need to be tested. Moreover, platforms themselves vary internally--thus, there are multiple variants of POSIX, BSD Unix, and MS-Windows, and each is a potential testing problem. The less platform-dependent code there is, the fewer places there are for unexpected bugs to appear on different platforms.

I haven't felt many differences between UNIXes lately. And for the greater part of the code, Radiance is not close to the system.

An automated test suite is certainly brilliant to have. Something like 'makeall test'. Running rtrace, rpict (rview in an automated, finite-rendering-time, screen-dump mode?) and rholo on a fixed scene and comparing the results with stored references is worth setting up.

-Peter

···

--
pab-opto, Freiburg, Germany, www.pab-opto.de

[Sorry this has taken so long to answer]

Randolph Fritz wrote:

Let me here put in a plea and proposal for two particular coding practices that I believe will greatly ease the work of Radiance testing and debugging:

1. Use conditional compilation only in isolated source files, and not at all in header files.

My two cents: not scattering ifdefs through the code is certainly a good thing. IMHO all architecture-dependent ifdefs may be in one file (or the files of a library, including header files), effectively providing an abstraction layer between the different underlying architectures and Radiance. At least that worked on my other projects for non-time-critical calls.

I'm glad we mostly agree. I want to go a bit further, though; the problem of testing multiple versions pops up if one has conditional compilation in a widely-included header file, so I think it's important not to do that.

2. Keep most of Radiance to ISO C90.

Why? The more platform dependencies are spread through the code, the more platforms need to be tested. Moreover, platforms themselves vary internally--thus, there are multiple variants of POSIX, BSD Unix, and MS-Windows, and each is a potential testing problem. The less platform-dependent code there is, the fewer places there are for unexpected bugs to appear on different platforms.

I haven't felt many differences between UNIXes lately. And for the greater part of the code, Radiance is not close to the system.

There is a surprising amount of system dependency, however. (Well, it surprised me, anyway.) And there are annoying subtle differences between BSD Unix (the Mac OS X base), POSIX, and the various vendor Unices. With the G5 Macs we have a widely-available 64-bit platform. The conflicts with MS-Windows are much greater and it would be valuable to give some code a native Mac OS X interface, as well.

An automated test suite is certainly brilliant to have. Something like 'makeall test'. Running rtrace, rpict (rview in an automated, finite-rendering-time, screen-dump mode?) and rholo on a fixed scene and comparing the results with stored references is worth setting up.

One question that comes up is the seeding of the random number generators in that circumstance. I think it may be useful to use multiple seeds, and have a special-purpose file-comparison utility that accepts modest variations. The holodeck, on the other hand, requires interface scripting.
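
Such a file-comparison utility could be quite small. A rough sketch, assuming whitespace-separated numeric output and a placeholder relative tolerance of 1e-4:

#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
        FILE *fa, *fb;
        double a, b, eps = 1e-4;
        long n = 0;

        if (argc != 3 || (fa = fopen(argv[1], "r")) == NULL
                        || (fb = fopen(argv[2], "r")) == NULL) {
                fprintf(stderr, "usage: %s ref.dat new.dat\n", argv[0]);
                return 2;
        }
        while (fscanf(fa, "%lf", &a) == 1 && fscanf(fb, "%lf", &b) == 1) {
                n++;
                /* accept modest variation: relative difference test */
                if (fabs(a - b) > eps * (fabs(a) + fabs(b) + 1e-9)) {
                        printf("value %ld differs: %g vs %g\n", n, a, b);
                        return 1;
                }
        }
        return 0;
}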

Randolph

···

On Tuesday, July 22, 2003, at 01:27 AM, Peter Apian-Bennewitz wrote:

Randolph Fritz wrote:

PAB wrote:

My two cents: not scattering ifdefs through the code is certainly
a good thing. IMHO all architecture-dependent ifdefs may be in one
file (or the files of a library, including header files),
effectively providing an abstraction layer between the different
underlying architectures and Radiance. At least that worked on my
other projects for non-time-critical calls.

I'm glad we mostly agree. I want to go a bit further, though; the
problem of testing multiple versions pops up if one has conditional
compilation in a widely-included header file, so I think it's important
not to do that.

There's only a very small number of feature-based conditionals
around in Radiance, most prominently the GL stereo/nostereo stuff.
Most of the others distinguish between platforms. Since your
biggest worry with conditionals is about having to test many
separate versions, that kind of makes those a moot point. We'll
have to run all the tests on all the separate platforms anyway.

I haven't felt many differences between UNIXes lately. And for
the greater part of the code, Radiance is not close to the system.

There is a surprising amount of system dependency, however. (Well, it
surprised me, anyway.) And there are annoying subtle differences
between BSD Unix (the Mac OS X base), POSIX, and the various vendor
Unices.

Radiance still carries code (and conditionals) from a time when
the differences between unixes were much bigger than they are
today, and we're continuously reducing that historical ballast.

Peter is right: If we stick with posix, then there's hardly any
noticeable difference between current BSD and other unix systems,
except in rare and pathological cases. In fact, in some cases
it's harder to get the code to work on more than one Windows
version than on all the unix/BSD versions from a dozen different
manufacturers.

With the G5 Macs we have a widely-available 64-bit platform.

64 bit systems are a can of worms that I'd like to open as late
as possible, after all other aspects of the code base have been
cleaned up. Once that is the case, we'll *first* have to
systematically identify all the parts of the code where the CPU
type actually makes a difference, and think hard about each
individual one.

The conflicts with MS-Windows are much greater and it would be valuable
to give some code a native Mac OS X interface, as well.

Let's keep things simple. OS X supports posix and X11, so we
don't need to provide any proprietary interfaces for it.

An automated test suite is certainly brilliant to have. Something like
'makeall test'.

How to trigger the tests is a somewhat separate topic from
creating them in the first place. Anyway, "makeall test" would
require separating the building and installing stages in the Rmakefiles.
This is something that I'm missing anyway at the moment, because
it would make testing small changes in a contained environment
much simpler.

Unlike Randolph, I don't think this is a difficult change.
"Make build" will have to move all the binaries into a local
ray/bin directory, for "make install" to pick them up there.
There are two possibilities for how to organize the library files.
Either we collect them into a local ray/share directory right
away (they're scattered all around the source tree right now),
or "make build" will have to do that as well.

The basis for this should be a new strategy on where Radiance
searches for its binaries and library files. We had a first stab at
this discussion already a few weeks back. I think a reasonable
strategy would go like this:

- For binaries, first look in the directory where the current
  executable was loaded from, then look in the $PATH.
- For library files, first look in ../share/ (again based on
  the location of the current executable), then in
  ../share/radiance-<ver>/, and then on $RAYPATH.
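
In C, the library lookup could look roughly like this (a sketch
only: exedir is assumed to hold the directory of the running
executable, buf is assumed large enough, "3.6" stands in for the
real version string, and only the first $RAYPATH entry is tried,
for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

const char *find_libfile(const char *exedir, const char *name,
                char *buf)
{
        const char *rp;

        sprintf(buf, "%s/../share/%s", exedir, name);
        if (access(buf, R_OK) == 0) return buf;

        sprintf(buf, "%s/../share/radiance-3.6/%s", exedir, name);
        if (access(buf, R_OK) == 0) return buf;

        if ((rp = getenv("RAYPATH")) != NULL) {
                sprintf(buf, "%s/%s", rp, name);
                if (access(buf, R_OK) == 0) return buf;
        }
        return NULL;                    /* not found */
}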

Doing this would simplify both testing within the build tree, and
installation according to LSB standards without the need to set
up any environment variables. Valid installations could then look
like this:

/opt/radiance-3.6/bin/*
/opt/radiance-3.6/share/*.(cal|pic)
/opt/radiance-3.6/man/man.*/*
/opt/radiance-3.6/doc/*.(ps|pdf)

/usr/local/bin/* # not really recommended, but possible
/usr/local/share/radiance-3.6/*.(cal|pic)
/usr/local/man/man.*/*
/usr/local/doc/*.(ps|pdf)

C:\program files\radiance-3.6\bin\*.exe
C:\program files\radiance-3.6\share\*.(cal|pic)
C:\program files\radiance-3.6\man\man.*\*
C:\program files\radiance-3.6\doc\*.(ps|pdf)

Note that "share" is used for the library files, because those
are platform-independent. According to the LSB (and other related
standards), "lib" is reserved for platform-specific files.
To include parts of the code as (shared) libraries, we can simply
add the respective "lib" directories without any further changes.

-schorsch

···

--
Georg Mischler -- simulations developer -- schorsch at schorsch com
+schorsch.com+ -- lighting design tools -- http://www.schorsch.com/

With the G5 Macs we have a widely-available 64-bit platform.

64 bit systems are a can of worms that I'd like to open as late
as possible, after all other aspects of the code base have been
cleaned up. Once that is the case, we'll *first* have to
systematically identify all the parts of the code where the CPU
type actually makes a difference, and think hard about each
individual one.

Too late, I think; it was open the minute the G5 shipped. If we're lucky, we won't have someone pop up on this list with "I compiled Radiance on my shiny new G5 with -mpowerpc64 and ...". But likely there will be someone out there, even if we don't hear about it.

Radiance still carries code (and conditionals) from a time when
the differences between unixes were much bigger than they are
today, and we're continuously reducing that historical ballast.

I'm all for that. But let's not make the next leg of the journey in ballast, hunh? Want to carry pay cargo, I do.

The conflicts with MS-Windows are much greater and it would be valuable
to give some code a native Mac OS X interface, as well.

Let's keep things simple. OS X supports posix and X11, so we
don't need to provide any proprietary interfaces for it.

And Cygwin (or Red Hat GNUpro, if that's what it's called now) runs on Microsoft Windows, no need to provide any proprietary interfaces for that system.

If only!

Apple's comments on this:

If you want your application to look identical on all platforms then you should make the abstraction layer thin. However, this tends to result in Mac OS X users throwing your application in a dumpster because it doesn't feel like a Mac application. Mac OS X has common GUI style rules that most X11 applications don't follow.

And this also applies to MS-Windows, of course.

By the way, this is from Apple's portability guide, at <http://developer.apple.com/documentation/Porting/Conceptual/PortingUnix/portability/chapter_3_section_1.html>. I recommend that document.

Peter is right: If we stick with posix, then there's hardly any
noticeable difference between current BSD and other unix systems,
except in rare and pathological cases. In fact, in some cases
it's harder to get the code to work on more than one Windows
version than on all the unix/BSD versions from a dozen different
manufacturers.

In fact, Mac OS X does not support POSIX (I looked it up two days ago--I was hoping it did.) It *provides* some POSIX system calls and programs. Apple makes no promises to *support* POSIX at all (except POSIX threads, go figure). That means no POSIX certification testing and therefore subtle differences that will lead to bugs.

Ironically, Windows' POSIX.1 capabilities are certified. It may be that #define'ing "_POSIX_SOURCE" to 1 in the compiler will allow the non-X portions of Radiance to compile without change on Windows.
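
The macro would have to appear before any system header is included; a sketch:

#define _POSIX_SOURCE 1         /* or -D_POSIX_SOURCE on the command line */
#include <stdio.h>
#include <unistd.h>             /* POSIX.1 declarations now visible */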

I see the following likely platforms:

   POSIX (Sun, IBM, DEC Alpha/Compaq), Mac OS X, various Linux, various BSD, SGI
   Plan 9 ANSI/POSIX Emulation
   various MS-Windows versions
   Special-purpose servers (a holodeck server is a likely possibility)

Which to test? Without a user survey, I'd guess the most-used platforms will be Mac OS X, Red Hat and SUSE Linux, and Windows NT and XP. SGI, if we can find an SGI system to test on--later SGIs are 64-bit, btw. POSIX, you will notice, isn't even on the list--one doesn't usually buy Sun and IBM systems for rendering, and the Alpha (another 64-bit platform) has been allowed to go obsolete by Intel. If someone steps forward and offers to do another platform, I'd be all for signing them up.

There's only a very small number of feature-based conditionals
around in Radiance, most prominently the GL stereo/nostereo stuff.
Most of the others distinguish between platforms. Since your
biggest worry with conditionals is about having to test many
separate versions, that kind of makes those a moot point. We'll
have to run all the tests on all the separate platforms anyway.

The more of the code is platform-independent, however, the more tests will apply across platforms, providing additional testing for free. Conversely, the more platform dependence there is (and any file which includes a platform-dependent header is platform-dependent), the more possibilities there are for platform-dependent bugs.

I am not too concerned with Radiance feature variants, as long as the number is modest and the conditionals manageable; my concern is with the kind of fine-grained platform dependence that the GNU build utilities enable. There are so many variants of such software that most of the configurations it is used in are never tested by the development team at all--much of the testing work falls on the end users or integrators further along in the development process.

Unlike Randolph, I don't think this is a difficult change.
"Make build" will have to move all the binaries into a local
ray/bin directory, for "make install" to pick them up there.
There are two possibilities for how to organize the library files.
Either we collect them into a local ray/share directory right
away (they're scattered all around the source tree right now),
or "make build" will have to do that as well.

Mmmm...common practice is to simply build in the source directory, then find and move the files for install. But practice varies.

A much more serious problem, to my mind, is the problem of conditional build--as with the OpenGL support, which I have been wrestling with. One of the biggest porting problems I am aware of is the variant versions of "make" and associated utilities. BSD, POSIX, and GNU all have very different versions of make (doesn't Windows have one, too?), and the only things one can count on--still!--are the features of the 25-year-old Unix v7 make. Basic make doesn't provide any conditional building or automatic dependency tracking at all (finding the .h files referenced by .c files). BSD Unix and, usually, Linux have "mkdep", which can be hacked into providing dependency tracking--it is not part of POSIX--but it's pretty much a kludge. I have hacked out a GNU makefile for the "common" subdirectory, which allows for conditional building of the OpenGL component; I can find no way to do it in v7 make without changes to the controlling script, and I don't know about BSD or POSIX make--GNU make was quite enough trouble for me.

All of which leads me to think we need more work on the controlling scripts, a decision to use one make (probably GNU), or an improved build tool. I have a friend who thinks well of SCONS <http://www.scons.org/>, but that's a big shift.

How to trigger the tests is a somewhat separate topic from
creating them in the first place. Anyway, "makeall test" would
require separating the building and installing stages in the Rmakefiles.
This is something that I'm missing anyway at the moment, because
it would make testing small changes in a contained environment
much simpler.

Me too. I'm really not testing adequately, and that worries me muchly--field test is *not* the time to find out about all the problems I introduce.

The basis for this should be a new strategy on where Radiance
searches for its binaries and library files. We had a first stab at
this discussion already a few weeks back. I think a reasonable
strategy would go like this:

- For binaries, first look in the directory where the current
  executable was loaded from, then look in the $PATH.
- For library files, first look in ../share/ (again based on
  the location of the current executable), then in
  ../share/radiance-<ver>/, and then on $RAYPATH.

How does this work on Mac OS X and MS-Windows?

Thanks for the long & thoughtful reply.

Randolph

Randolph Fritz wrote:

64 bit systems are a can of worms that I'd like to open as late
as possible, ...

Too late, I think; it was open the minute the G5 shipped. If we're
lucky, we won't have someone pop up on this list with "I compiled
Radiance on my shiny new G5 with -mpowerpc64 and ...".

That's why we provide default compile settings in makeall.
As far as I'm concerned (until Greg convinces me otherwise),
Radiance is not 64 bit safe. Of course, that doesn't mean it
won't work on a 64 bit system when compiled in 32 bit mode. People
have been successfully using it that way for quite some time
already.

Radiance still carries code (and conditionals) from a time when
the differences between unixes were much bigger than they are
today, and we're continuously reducing that historical ballast.

I'm all for that. But let's not make the next leg of the journey in
ballast, hunh? Want to carry pay cargo, I do.

My goal for this leg of the journey is to make the code (minus
the GUI stuff for the moment) first compile and then run
correctly on Windows. I think that's cargo enough. All other
changes I make are just serving that one goal, even if some of
the details may be determined by other long-term considerations.

Let's keep things simple. OS X supports posix and X11, so we
don't need to provide any proprietary interfaces for it.

And Cygwin (or Red Hat GNUpro, if that's what it's called now) runs on
Microsoft Windows, no need to provide any proprietary interfaces for
that system.

If only!

I prefer a less polemic and more practical approach. The fact is
that all the systems we're currently trying to *directly* support
have those functions from posix that we need, and those can be
mapped directly to the semantics of the underlying system. And
the few exceptions (e.g. process control on Windows) are already
being addressed.

Apple's comments on this:

...

And this also applies to MS-Windows, of course.

Now you're switching from discussing the fundamental OS APIs (as
used by roughly 100 executables in Radiance) to discussing the
look-and-feel options of the GUI (as used by 3 executables in
Radiance).

No sane person will refuse to use Radiance on OS X just because
rview doesn't have shiny transparent buttons there. But of
course, if you want to write portable replacements that use the
native widget set on each platform, you're very welcome! ;-)

If I had the time (or money, sponsoring gladly accepted) for
stuff like that, then I'd turn the rendering engine into a
library, wrap that library as a Python extension, and write the
GUI stuff in Python/WxPython. WxPython (or rather the underlying
WxWindows) will use gtk on unix, MFC on Windows, and one of the
available native widget sets on OS X. A much less powerful (but
considerably more mature) alternative to WxPython would be Tk.

In fact, Mac OS X does not support POSIX (I looked it up two days
ago--I was hoping it did.) It *provides* some POSIX system calls and
programs.

And that's exactly everything we ask for. We don't need certified
support on paper. We need working functionality in practice.

I see the following likely platforms:

   POSIX (Sun, IBM, DEC Alpha/Compaq), Mac OS X, various Linux, various
BSD, SGI

People are already working with Radiance on all those systems,
and then some.

   Plan 9 ANSI/POSIX Emulation

Doubtful, and I currently see no reason to care about that one
(though I won't complain if it happens to work anyway).

   various MS-Windows versions

See above (my second point). I'll restrict explicit support to
NT-based systems, though, because Win95/98/ME are technically
inadequate for our purposes anyway.

   Special-purpose servers (a holodeck server is a likely possibility)

Have fun... ;-)

Which to test? Without a user survey, I'd guess the most-used
platforms will be...

The daily HEAD dump is available for everyone who wants to do
beta testing on the platform of their choice. As long as they can
provide meaningful test results and bug reports, there's no need
for Greg or me to personally run the full test suite on every
platform imaginable.

  POSIX, you will notice, isn't even on the list

Of course not. Posix is not targeted at end users. It's simply a
useful abstraction for developers, which tells them that certain
functions can be expected to be present on many systems.

We'll
have to run all the tests on all the separate platforms anyway.

The more of the code is platform-independent, however, the more tests
will apply across platforms, providing additional testing for free.

What happened to your usual paranoia here? ;-)
No test applies across platforms. Necessarily, our tests will
also have to verify that the underlying OS functionality is
working as expected. Unfortunately, even certified posix (or
whatever) implementations have bugs.

"Make build" will have to move all the binaries into a local
ray/bin directory, for "make install" to pick them up there.
There are two possibilities for how to organize the library files.
Either we collect them into a local ray/share directory right
away (they're scattered all around the source tree right now),
or "make build" will have to do that as well.

Mmmm...common practice is to simply build in the source directory, then
find and move the files for install. But practice varies.

This is indeed common practice in those cases where all the
resulting executables are built in one directory anyway. In cases
like Radiance, where executables are built in many different
subdirectories, it is common practice for the build process to
collect them in one place for local testing. In extreme cases,
the object files and executables may even be placed in a separate
subdirectory for each supported platform, which makes it possible
to build and test several versions from the same tree over NFS
(even concurrently) without having to "make clean" in between.

A much more serious problem to my mind is the problem of conditional
build--as with the OpenGL support, which I have been wrestling with.

You're trying to do things with make that it wasn't designed to
do. Just leave the "make ogl" in place as it currently is and
delegate the decisions to the makeall script. That's the main
reason we even have this script, after all.

I have a friend who thinks well of SCONS
<http://www.scons.org/>, but that's a big shift.

I have it installed here, but haven't found the time to play
around with it yet. It looks like the most brilliant approach to
the software build process I have seen so far. Unfortunately, it
has one big disadvantage, at least while we haven't convinced
Greg otherwise: It requires that a working and relatively recent
copy of Python is already installed on the build system.

I actually thought of establishing SCons as a secondary build
method in parallel to make, for those who have Python or want to
compile on exotic platforms that aren't included in makeall yet.

- For binaries, first look in the directory where the current
  executable was loaded from, then look in the $PATH.
- For library files, first look in ../share/ (again based on
  the location of the current executable), then in
  ../share/radiance-<ver>/, and then on $RAYPATH.

How does this work on Mac OS X and MS-Windows?

As a strategy, this will obviously work on any OS that uses
hierarchical file system semantics.

-schorsch

···

--
Georg Mischler -- simulations developer -- schorsch at schorsch com
+schorsch.com+ -- lighting design tools -- http://www.schorsch.com/

Georg Mischler wrote:

Randolph Fritz wrote:

...

  POSIX (Sun, IBM, DEC Alpha/Compaq), Mac OS X, various Linux, various
BSD, SGI
   
People are already working with Radiance on all those systems,
and then some.

...

Randolph, I don't want to discourage your thrust and enthusiasm too much--however, I agree with most of Schorsch's points.
A pragmatic approach wants to see Radiance compiling and running on MS Windows XP/2000 and UNIXes (MacOS and Linux being top, with the rest of SunOS, IRIX, HPUX, AIX a minority). MS is being cared for by Schorsch. People have packaged Debian et al. distributions for Linux end-users; MacOS is being cared for by Greg himself. Someone with a SUN has either compile experience him/herself or an admin close by; ditto IRIX/HPUX/AIX. The makeall script is not the premium solution, but it works.
-Peter

···

--
pab-opto, Freiburg, Germany, www.pab-opto.de

MS is being cared for by Schorsch. People have packaged Debian et al. distributions for Linux end-users; MacOS is being cared for by Greg himself. Someone with a SUN has either compile experience him/herself or an admin close by; ditto IRIX/HPUX/AIX.

I agree with you on these. In fact, I agreed with you in the original message. Hunh?

The makeall script is not the premium solution, but it works.

The problems I see are:

1. There is no way to write portable Rmakefiles which separate build and install and work with the current makeall script.

2. There is no platform-independent way to automatically include header-file dependencies in Rmakefiles.

Do you see any way around these? Or regard them as minor?

Randolph

···

On Friday, August 1, 2003, at 08:52 AM, Peter Apian-Bennewitz wrote:

Randolph Fritz wrote:

Peter Apian-Bennewitz wrote:

> The makeall script is not the premium solution, but it works.

The problems I see are:

1. There is no way to write portable Rmakefiles which separate build
and install and work with the current makeall script.

Since the makeall script has to trigger the two separate
procedures individually, it obviously needs to be adapted to know
about them. Incidentally, Greg already created a separate
"installib" module some time ago. It probably wouldn't hurt to
continue in that direction and modularize makeall even further.
Creating separate files with the default settings for each known
platform might be a good start, as it would make it much easier
to add new platforms in the future.

2. There is no platform-independent way to automatically include
header-file dependencies in Rmakefiles.

Since we include the same set of headers on all platforms
that use the makeall/Rmakefile combination, this shouldn't be a
problem. The dependencies only need to be generated when they
actually change, which means on Greg's system or mine. There
should be no need for anyone else to bother with that.

That implies that we'll only include dependencies on our own
headers. A platform-independent procedure would only be necessary
if we were to generate dependencies for system headers as well,
which I consider bad practice. The development environment should
take care of those without any intervention from our side.

Actually, I'm not sure if the dependency generation really needs
to be automatic, although it would certainly be nice.

-schorsch

···

--
Georg Mischler -- simulations developer -- schorsch at schorsch com
+schorsch.com+ -- lighting design tools -- http://www.schorsch.com/

Randolph Fritz wrote:

Peter Apian-Bennewitz wrote:

The makeall script is not the premium solution, but it works.

The problems I see are:

1. There is no way to write portable Rmakefiles which separate build
and install and work with the current makeall script.

Since the makeall script has to trigger the two separate
procedures individually, it obviously needs to be adapted to know
about them. [...]

That strikes me as an awkward way to reimplement something that's already been done. And of course, as the scripts get more and more complex, they will require more and more maintenance.

2. There is no platform-independent way to automatically include
header-file dependencies in Rmakefiles.

Since we include the same set of headers on all platforms
that use the makeall/Rmakefile combination, this shouldn't be a
problem. The dependencies only need to be generated when they
actually change, which means on Greg's system or mine. There
should be no need for anyone else to bother with that.

Wouldn't all developers potentially need them? And we don't currently have all of those dependencies explicit at all, except where we've kludged them in. (By the way, there is no support for dynamic inclusion of files in basic make.)

That implies that we'll only include dependencies on our own
headers. A platform-independent procedure would only be necessary
if we were to generate dependencies for system headers as well,
which I consider bad practice.

I agree.

Actually, I'm not sure if the dependency generation really needs
to be automatic, although it would certainly be nice.

"make" itself is not strictly necessary, but I am long enough in computing to remember what builds were like without it. If they aren't too much trouble, I'd rather use the best simple automatic tools I can find. The build scripts are going to get more and more complicated and grow to include dependency-generation scripts. They will take maintenance time themselves.

In terms of Scons (which I haven't actually tried yet), well, is requiring Python 1.5.2 for build (not execution) such a terrible thing? 1.5.2 is pretty old now, and it runs on every platform Radiance runs on, as far as I know. It is even preinstalled on Mac OS X and most Linuxen. Greg?

Randolph

···

On Saturday, August 2, 2003, at 04:48 AM, Georg Mischler wrote:

Randolph Fritz wrote:
...

Actually, I'm not sure if the dependency generation really needs
to be automatic, although it would certainly be nice.

"make" itself is not strictly necessary, but I am long enough in computing to remember what builds were like without it. If they aren't too much trouble, I'd rather use the best simple automatic tools I can find. The build scripts are going to get more and more complicated and grow to include dependency-generation scripts. They will take maintenance time themselves.

In terms of Scons (which I haven't actually tried yet), well, is requiring Python 1.5.2 for build (not execution) such a terrible thing? 1.5.2 is pretty old now, and it runs on every platform Radiance runs on, as far as I know. It is even preinstalled on Mac OS X and most Linuxen. Greg?

Wouldn't it be good if you gained some solid experience with something new first, /before/ going into lengthy theoretical arguments?
P.

···

--
pab-opto, Freiburg, Germany, www.pab-opto.de

We could write some scripts of our own that automatically generate the
makefiles; we could call them automake or autoconf or so ;-)

OK, just kidding, but I think people should do what they are best at; the
Radiance people are best at writing code that does
rendering/raytracing/etc. Let other people make the GUI toolkits like
Gtk/Qt/Motif, and yet others the configuration tools like autoconf etc.

It always seems that writing your own keeps things simpler in the
beginning, until your project grows and then you hit the same problems
other people have already had and solved.

Another advantage of using well-known tools is that people have written
books about them, there are loads of people who know those tools, and
they are kept up to date.

Just my 2 cents, which I know are going to be ignored because making
your own things is cool.

- Erwin

···

On Sat, 2003-08-02 at 21:11, Randolph Fritz wrote:


From: Georg Mischler <[email protected]>
Date: Mon Jul 28, 2003 6:18:22 AM US/Pacific

The basis for this should be a new strategy on where Radiance
searches for its binaries and library files. We had a first stab at
this discussion already a few weeks back. I think a reasonable
strategy would go like this:

- For binaries, first look in the directory where the current
  executable was loaded from, then look in the $PATH.
- For library files, first look in ../share/ (again based on
  the location of the current executable), then in
  ../share/radiance-<ver>/, and then on $RAYPATH.

Is there some way within a Unix program to determine where the executable was found, other than researching the $PATH environment variable to find where argv[0] lives?

1. Calcomp.h and otypes.h have a pointer to an unprototype-able
function in their definitions. I haven't studied these enough to have
formed ideas on how to approach the problem, though I've discarded
several early approaches. Would it be all right to bring this up on
the radiance-dev list?

I brought those two up last time, since I assumed they are
somehow related. I see that the functions involved are actually
all X(void), but I'd like more input about what actually happens
there before changing the prototypes. Greg?

The function pointer declared in otypes.h cannot be easily prototyped, as I'm using this pointer with different parameters depending on whether it's being called by oconv or the renderers. I'd rather leave it this way.

The eoper dispatch array in calcomp.h was easy to prototype, on the other hand, and I have done so. I could do the same for the library function pointer in the LIBR struct, except that it is assigned in so many places that it would be a bit of a hassle to track them all down. This pointer takes a single char * argument, which is usually neither defined nor examined by callers. This makes it a big hassle to include as a prototype, with little potential benefit to us as developers. I'd leave this as is, also.
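
For reference, the shape of what's involved--an illustrative sketch, not the exact header:

typedef struct {
        char *fname;            /* function name */
        short nargs;            /* # of required arguments */
        double (*f)(char *);    /* handler, passed the name it was
                                   called under (usually ignored) */
} LIBR;

static double l_pi(char *nm) { return 3.14159265358979; }  /* ignores nm */

static LIBR testlib[] = { {"PI", 0, l_pi} };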

From: Randolph Fritz <[email protected]>
Date: Fri Aug 1, 2003 11:20:50 PM US/Pacific
Subject: Re: [Radiance-dev] Radiance quality assurance suggestions

MS is being cared for by Schorsch. People have packaged Debian et al. distributions for Linux end-users; MacOS is being cared for by Greg himself. Someone with a SUN has either compile experience him/herself or an admin close by; ditto IRIX/HPUX/AIX.

I agree with you on these. In fact, I agreed with you in the original message. Hunh?

The makeall script is not the premium solution, but it works.

The problems I see are:

1. There is no way to write portable Rmakefiles which separate build and install and work with the current makeall script.

2. There is no platform-independent way to automatically include header-file dependencies in Rmakefiles.

Do you see any way around these? Or regard them as minor?

Randolph

I really consider these to be minor problems, though it would be nice to build without installing. The current rmake sans arguments builds in the working directory without copying the executables anywhere. We could do as Schorsch suggested and copy these to one location for a "build" option and from there to the standard location(s) for the "install" option. What I usually do is hand-tweak a version of rmake (which I rename "dmake") that sets LIBDIR and INSTDIR to some other location, as well as replacing the -O option with -g for debugging. This works, also.

As for header files, it's not difficult to keep the Rmakefiles up to date so long as you pay attention to what you're doing as you're doing it. It would be nice to have an automatic way to create the dependencies, but mkdep is not fool-proof either, as it's easy to forget to include all the relevant files that are in other directories (mostly ../common) when you run it.

-Greg

···

On Friday, August 1, 2003, at 08:52 AM, Peter Apian-Bennewitz wrote:

Is there some way within a Unix program to determine where the
executable was found, other than researching the $PATH environment
variable to find where argv[0] lives?

I do not believe so, except in the Mac OS X Cocoa environment. Even
argv[0] is not certain, since it is only convention that it contain
the program name that was actually used. System utilities usually
resolve this with a file in /etc, but I don't like that solution for
Radiance.

>
>>1. Calcomp.h and otypes.h have a pointer to an unprototype-able
>>function in their definitions. I haven't studied these enough to have
>>formed ideas on how to approach the problem, though I've discarded
>>several early approaches. Would it be all right to bring this up on
>>the radiance-dev list?
>
>I brought those two up last time, since I assumed they are
>somehow related. I see that the functions involved actually all
>are X(void), but I'd like more input about what actually happens
>there before changing the prototypes. Greg?

The function pointer declared in otypes.h cannot be easily prototyped,
as I'm using this pointer with different parameters depending on
whether it's being called by oconv or the renderers. I'd rather leave
it this way.

I would, too. But it's not mainstream C any more, and it won't work
at all in C++, where that construct is equivalent to (*f)(void). I do
wish ISO C allowed (*f)(...), but it does not. As I said, I have some
thoughts on this that I want to explore further.
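
To make the point concrete, a sketch with made-up names--legal C90,
but a C++ compiler rejects both the definitions and the mismatched
calls:

static int vox(o, c) char *o, *c; { return o != 0 && c != 0; }  /* K&R-style */
static int ray(o, r) char *o, *r; { return o != 0 && r != 0; }

static int (*fun)();    /* unspecified parameters in C; (void) in C++ */

int demo(char *o, char *c, char *r)
{
        int hit;
        fun = vox;  hit  = fun(o, c);   /* oconv-style call */
        fun = ray;  hit += fun(o, r);   /* renderer-style call */
        return hit;
}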

The eoper dispatch array in calcomp.h was easy to prototype, on
the other hand, and I have done so.

Great!

I could do the same for the library function pointer in the LIBR
struct, except that it is assigned in so many places that it would
be a bit of a hassle to track them all down. This pointer takes a
single char * argument, which is usually neither defined nor
examined by callers. This makes it a big hassle to include as a
prototype, with little potential benefit to us as developers. I'd
leave this as is, also.

Perhaps just change it to void *, eventually? That seems to me in
line with your intentions. The compiler can locate the references.

I really consider these to be minor problems, though it would be nice
to build without installing. The current rmake sans arguments builds
in the working directory without copying the executables anywhere.

I've seen that some things don't get built without "install"--the
OpenGL files in common are one example, but I think I remember others.
Trying to make it so that everything got built without install led
me to decide that the problem is non-trivial--I can do it with csh
and make, but I can't do it simply and portably.

As for header files, it's not difficult to keep the Rmakefiles up to
date so long as you pay attention to what you're doing as you're doing
it. It would be nice to have an automatic way to create the
dependencies, but mkdep is not fool-proof either, as it's easy to
forget to include all the relevant files that are in other directories
(mostly ../common) when you run it.

scons, which I've been playing with a bit, does all this. And it
seems to be clear. But I'm still learning how to use it. More on
this when I know more.

Randolph

···

On Mon, Aug 04, 2003 at 12:19:58PM -0700, Greg Ward wrote:

From: Randolph Fritz <[email protected]>
Date: Mon Aug 4, 2003 1:26:57 PM US/Pacific

The function pointer declared in otypes.h cannot be easily prototyped,
as I'm using this pointer with different parameters depending on
whether it's being called by oconv or the renderers. I'd rather leave
it this way.

I would, too. But it's not mainstream C any more, and it won't work
at all in C++, where that construct is equivalent to (*f)(void). I do
wish ISO C allowed (*f)(...), but it does not. As I said, I have some
thoughts on this that I want to explore further.

This pointer carries out different functions depending on the application and primitive type. In the case of oconv and surfaces, it determines voxel intersection. In the case of rendering and surfaces, it computes ray intersection. In the case of rendering and materials, it computes shading. In the case of rendering and patterns or textures, it modifies the pattern/texture for a ray. The way it was designed, one signature works for oconv and another for the renderers. We could simply have two function pointers where there is now one, like so:

typedef struct {
        char *funame;                           /* function name */
        int flags;                              /* type flags */
        int (*funv)(OBJREC *o, CUBE *c);        /* voxel intersection */
        int (*funr)(OBJREC *o, RAY *r);         /* ray evaluation */
} FUN;

In oconv, the funr pointers will all be assigned NULL, and likewise for the funv pointers in rpict/rview/rtrace. It wastes a pointer, but that's hardly a concern these days. The other option is to stick with a single signature and use a cast. This is less objectionable than usual because it is like "client data" in most respects:

typedef struct {
        char *funame;                           /* function name */
        int flags;                              /* type flags */
        int (*funp)(OBJREC *o, void *p);        /* pointer to function */
} FUN;
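
To make the first option concrete (a sketch; v_sphere/r_sphere and the table names are stand-ins, not necessarily the real identifiers):

extern int v_sphere(OBJREC *o, CUBE *c);        /* oconv: voxel test */
extern int r_sphere(OBJREC *o, RAY *r);         /* renderers: ray hit */

/* oconv's table leaves funr NULL; the renderers leave funv NULL */
FUN ofun_oconv[]  = { { "sphere", 0, v_sphere, NULL } };
FUN ofun_render[] = { { "sphere", 0, NULL, r_sphere } };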

The eoper dispatch array in calcomp.h was easy to prototype, on
the other hand, and I have done so.

Great!

I could do the same for the library function pointer in the LIBR
struct, except that it is assigned in so many places that it would
be a bit of a hassle to track them all down. This pointer takes a
single char * argument, which is usually neither defined nor
examined by callers. This makes it a big hassle to include as a
prototype, with little potential benefit to us as developers. I'd
leave this as is, also.

Perhaps just change it to void *, eventually? That seems to me in
line with your intentions. The compiler can locate the references.

Actually, it really is a char * in this case, as I'm passing the name of the called function so that it may discover the user's intent in the case of a single function with variants. As I said, it's not really a problem so much as a pain, but probably on the order of 5 minutes of pain, which is less than it took me to write this e-mail.

-Greg

Greg Ward wrote:

> - For binaries, first look in the directory where the current
> executable was loaded from, then look in the $PATH.
> - For library files, first look in ../share/ (again based on
> the location of the current executable), then in
> ../share/radiance-<ver>/, and then on $RAYPATH.

Is there some way within a Unix program to determine where the
executable was found, other than researching the $PATH environment
variable to find where argv[0] lives?

Scanning the PATH is the only truly portable solution I am aware
of. Doing this appears to be quite common practice.
On some systems, though, you may find that information somewhere
under /proc. On Linux, /proc/<pid>/exe is always a symlink to the
program file.
Newer Windows versions have a system call to the same effect.
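
A rough sketch of combining the two--try /proc/self/exe first,
then fall back to a $PATH scan (buf assumed large enough, error
handling abbreviated):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int exe_path(const char *argv0, char *buf, int len)
{
        char *path, *p;
        int n = readlink("/proc/self/exe", buf, len - 1);
        if (n > 0) { buf[n] = '\0'; return 0; }

        if (strchr(argv0, '/') != NULL) {       /* invoked via a path */
                strncpy(buf, argv0, len - 1);
                buf[len-1] = '\0';
                return 0;
        }
        if ((path = getenv("PATH")) == NULL) return -1;
        path = strdup(path);                    /* strtok modifies it */
        for (p = strtok(path, ":"); p != NULL; p = strtok(NULL, ":")) {
                sprintf(buf, "%s/%s", p, argv0);
                if (access(buf, X_OK) == 0) { free(path); return 0; }
        }
        free(path);
        return -1;
}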

  The current rmake sans arguments builds
in the working directory without copying the executables anywhere.

Right now, I think it doesn't actually build everything, because
at least some of the libraries don't get copied to ../lib, which
causes other stuff to fail later.

-schorsch

···

--
Georg Mischler -- simulations developer -- schorsch at schorsch com
+schorsch.com+ -- lighting design tools -- http://www.schorsch.com/

Greg Ward wrote:

  The way it was
designed, one signature works for oconv and another for the renderers.
...

  The other option is to stick with
a single signature and use a cast. This is less objectionable than
usual because it is like "client data" in most respects:

typedef struct {
        char *funame;                           /* function name */
        int flags;                              /* type flags */
        int (*funp)(OBJREC *o, void *p);        /* pointer to function */
} FUN;

If each program is guaranteed to always use the same signature,
then that looks acceptable to me as an "opaque" data structure.
I'm not even sure if it will require any casts.
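
If it does, the cast would go where the handler is assigned,
something like this (names made up):

extern int r_sphere(OBJREC *o, RAY *r);

void set_sphere(FUN *f)
{
        /* explicit cast between function pointer types */
        f->funp = (int (*)(OBJREC *, void *))r_sphere;
}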

-schorsch

···

--
Georg Mischler -- simulations developer -- schorsch at schorsch com
+schorsch.com+ -- lighting design tools -- http://www.schorsch.com/

On Mac OS X, however, it's probably best to eventually (not immediately) adopt the practice of packaging Radiance as a Mac OS X application, and use ~/Library/Radiance and /Library/Radiance for add-on libraries. Mac OS X isn't--quite--Unix.

Randolph

···

On Monday, August 4, 2003, at 12:19 PM, Greg Ward wrote:

- For binaries, first look in the directory where the current
  executable was loaded from, then look in the $PATH.
- For library files, first look in ../share/ (again based on
  the location of the current executable), then in
  ../share/radiance-<ver>/, and then on $RAYPATH.

Is there some way within a Unix program to determine where the executable was found, other than researching the $PATH environment variable to find where argv[0] lives?

Scanning the PATH is the only truly portable solution I am aware
of. Doing this appears to be quite common practice.
On some systems, though, you may find that information somewhere
under /proc. On Linux, /proc/<pid>/exe is always a symlink to the
program file.

[These have been sitting in my "Drafts" mailbox for a while; I've decided to send them out. I'm still too busy to *do* anything at this point, however.--R.]

We'll
have to run all the tests on all the separate platforms anyway.

The more of the code is platform-independent, however, the more tests
will apply across platforms, providing additional testing for free.

What happened to your usual paranoia here? ;-)
No test applies across platforms.

No, no. If one tests a file without platform dependencies on two platforms, the same code is tested twice, in different environments. Testing a file with platform dependencies on two platforms tests different code, once, on each platform. One gets better assurance from testing the same code twice, you see?

I prefer a less polemic and more practical approach. The fact is
that all the systems we're currently trying to *directly* support
have those functions from posix that we need, and those can be
mapped directly to the semantics of the underlying system. And
the few exceptions (e.g. process control on Windows) are already
being addressed.

The question at the heart of QA is: how do we know this fact? *Are* the functions really all the same? We have not studied and tested each system call in Radiance. There is not even, yet, a test suite, and no test suite will ever cover all cases. That none of the systems we are most interested in are POSIX-certified is a strong indication that we don't, in fact, know this for a fact at all. We only hope. If we did structural engineering like this, in Weinberg's phrase, "the first woodpecker that came along would destroy civilization."

Randolph Fritz wrote:

64 bit systems are a can of worms that I'd like to open as late
as possible, ...

Too late, I think; it was open the minute the G5 shipped. If we're
lucky, we won't have someone pop up on this list with "I compiled
Radiance on my shiny new G5 with -mpowerpc64 and ...".

That's why we provide default compile settings in makeall.
As far as I'm concerned (until Greg convinces me otherwise),
Radiance is not 64 bit safe. Of course, that doesn't mean it
won't work on a 64 bit system when compiled in 32 bit mode. People
have been successfully using it that way for quite some time
already.

Fair enough.

Radiance still carries code (and conditionals) from a time when
the differences between unixes were much bigger than they are
today, and we're continuously reducing that historical ballast.

I'm all for that. But let's not make the next leg of the journey in
ballast, hunh? Want to carry pay cargo, I do.

My goal for this leg of the journey is to make the code (minus
the GUI stuff for the moment) first compile and then run
correctly on Windows. I think that's cargo enough. All other
changes I make are just serving that one goal, even if some of
the details may be determined by other long-term considerations.

I'm sorry--I didn't understand. Yes, that makes sense. Platform-independent stuff first.

Apple's comments on this:

...

And this also applies to MS-Windows, of course.

Now you're switching from discussing the fundamental OS APIs (as
used by roughly 100 executables in Radiance) to discussing the
look-and-feel options of the GUI (as used by 3 executables in
Radiance).

rview, rholo, and ximage... and the plotting stuff... and perhaps others.

No sane person will refuse to use Radiance on OS X just because
rview doesn't have shiny transparent buttons there. But of
course, if you want to write portable replacements that use the
native widget set on each platform, you're very welcome! ;-)

In fact, people refuse to use apps because they don't look native, on both MS-Windows and Macintosh systems, every day; that quote from Apple is based on experience. Lots of not-sane people, I guess. On the other hand, Radiance is designed to accommodate multiple GUIs, so perhaps it isn't so hard, provided we can get the code base to a point where it will interface cleanly with C++ and Objective-C.

[Lots more stuff I cd. say & no time to say it.--R]

Randolph

···

On Friday, August 1, 2003, at 04:27 AM, Georg Mischler wrote: