I want to respond to this thread while it's still warm, because Carsten made a number of excellent points that deserve special attention. I wish I weren't so preoccupied these days with other projects, but I wouldn't have anything to eat otherwise, so I can't complain....
Out of laziness, I'm going to address these points inline.
From: [email protected] (Carsten Bauer)
Date: December 5, 2003 5:45:11 AM PST
The recent photon-map thread left me - once more - rather puzzled. OK, I don't have any active Radiance projects going on right now, so I look at the matter from a little distance. But from a distance you often see things better. Furthermore, I have developed a (small) Radiance add-on myself, and I'm one of the (seemingly still few) people who have already used the photon map, so I feel the need to add my statements to this discussion.
Concerning the photon map itself, I cannot say that much, as I only used it once (reason: see above) for a visualization case (a rather challenging one). I patched the code in, made some tests to get acquainted with the option settings, and then let it loose on the image. And it simply worked! (How boring....) It produced what I wanted to have and what was to be expected by physical reasoning. Hey, isn't that what a tool should do??? There are some drawbacks; one, for example, is the awkward handling: having to produce an extra pmap doesn't sound like much, but it is nevertheless inconvenient. It should happen automatically somehow, by setting some controlling options (-Px xxx -Pf filename, etc.) in a .rif file for rad.
The other is not really a drawback: of course one has to learn a bit of new stuff (what does one million photons mean for the current scene? shall I mix 50 or 100 photons in the gathering process? etc.), but these things are more intuitive than some other classic options like -ar and -aa. From my experience in developing the direct cache, I know that it's not easy to translate the limitations of the machine into parameters that are easy to understand, cover all cases, remain few in number, etc.
So much for this point. Now to the discussion. In a few days' time we'll have 2004, not 1984. I've dug deep into the Radiance code and learned a lot from it, but no matter how sophisticated it once was/is, the outside world has moved on. Now we have new tools and concepts (pmap is only one example) that basically make use of the fact that today's machines have far more memory (and higher speed, of course), but - many apologies for the harsh word - instead of integrating them, a lot of bean-counting is going on in the discussion.
Funny enough, the first lines of Radiance were entered in the fall of 1985, so I guess it's turning 20 in a year's time.... One of the big reasons Radiance is still relevant while the rest of the world has "moved on" is that, as computers have gotten faster with more memory, it still has the basic efficiency of a system that was designed to run reasonably well on a slow processor with a few megabytes. This means that when it's set loose on today's multiprocessor machines with a gig (or two) of RAM, there's almost no limit to what you can do with it -- and all I had to do for this was allow for really large arrays! (The block sizes and so forth have grown incrementally as the years have gone by.) Since physical simulation will always be able to suck the life out of any computer -- the basic problem being technically unsolvable -- every year this community welcomes the newer, faster machines with open arms. This is not to say that Radiance is the be-all and end-all of lighting simulation tools. Clearly, its algorithms could benefit from additional improvements, and the photon map is one example.
I can imagine that, if machines were as capable back then as they are now, something like the pmap would have been written long ago. Greg himself called for a forward module in the 'Rendering with Radiance' book. This holds especially since pmap is no exotic stuff; it's not some hacking performed by some freak sitting in his cellar behind his machines (OK, in this case it IS programmed by a freak, but that's a different story... :-)). There's a research institute behind it, and there are validation measurements/calculations going on, apparently successful in a wide range of cases. So, if pmap right now doesn't support some of the strange transdatafuncwhatever primitives, which maybe one in a thousand users uses once in his lifetime, is that a reason to exclude this tool from the distribution? Was Radiance as complete and almost bug-free as it is now right from the beginning? Work in progress, that's what software is about!
In a research institution, this is true, and for many years, Radiance was a research tool and nothing more. Now, there are a number of engineering/design firms that rely on it, and subjecting them to new features that are not field-tested is more of a problem than it once was. However, you make a good point. How can the software improve if it isn't allowed to change? Here's where we can look at the process behind high-quality commercial software development. As I understand it, real software development goes something like this:
A) Maintain and support current version of software with patches, bug fixes, etc., while simultaneously:
B) Developing and testing the new version "in house" and with select alpha users who are happy to ride the crest of the wave, knowing full well the risk of wiping out.
C) When the alpha testers are satisfied that the new additions are working, release to a larger number of beta testers, who employ the software in the field, until:
D) The new version is considered ready for release. At that point, the old version is phased out, and users are encouraged (if not forced) to upgrade to the new release if they want support.
The obvious problem with the above is that it assumes:
1) There is a large user base from which to pick alpha and beta testers and get money, and
2) There is a large group of closely cooperating program developers with at least one product manager
Neither is the case for Radiance, and we have to face this reality. Our user base is small, there is no money for development, and the number of programmers available to fix problems can be counted on one hand by a tree sloth. Obviously, we need to modify this approach to have a working development pipeline. Here is what we currently have:
A) A few programmers (well, two at the moment) maintain and support a "live" release of the code, which includes bug fixes as well as some new features that may be under construction, but which do not affect the operation/reliability of the main tools.
B) Everyone who wants the latest bug fixes volunteers for both alpha and beta testing by default.
C) A new "official" release is made when the code seems to have more-or-less stabilized.
D) Go directly to A. Do not pass Go; do not collect $200.
The upside of this process is that the next release should be relatively bug-free, because folks all along have been compiling it and using it on various systems. (This was NOT true of the 3.5 official release, which didn't benefit from the accessible CVS HEAD we have now.) The downside is that the developers can't take big risks during development that might impair or corrupt the HEAD code tree. All our check-ins have to be good ones -- a difficult goal when you're attempting any major changes, to be sure. If I weren't using Radiance all the time in my own work these days, I would never attempt to modify it -- only fix bugs. Because I do use it, I have some assurance that my HEAD changes are working, so long as my renders proceed nicely, since I don't tend to change code I'm not using at the moment. This is the main benefit of a mature, stable system.
Given that people rely on this software as I do, it would be irresponsible for me to make changes and additions that undermined its basic functionality. When I add some major new facility to the system, I try to make it a separable piece, so that users can choose to apply it or not. If they don't, it shouldn't affect their work. If they do choose to apply a new feature, it should interact with the rest of the system in a predictable way. If there is some feature that is not supported, such as a material type that is just too difficult to deal with, a fall-back solution should be applied and a warning message should appear. Crashing behavior and fatal errors caused by perfectly valid input should be avoided at all costs, especially in the core tools. If the photon map code doesn't support BRTDfuncs (or whatever), it can grab the diffuse parameters and simulate the material that way, posting a warning so the user is aware of it. If it is too difficult to post a warning message, we must at minimum document the shortcoming, but we should not prevent the use of a particular model just because it uses an unsupported material. This is my opinion and my practice, not some kind of law. If people disagree, let's hear it.
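To make the fall-back idea concrete, here's a rough sketch of the sort of thing I mean -- this is NOT Roland's actual code; pmap_supported() and diffuse_approx() are names I just made up -- using the usual Radiance warning mechanism:

	/* hypothetical graceful fall-back for an unsupported material */
	if (!pmap_supported(m)) {
		sprintf(errmsg,
		"photon map: unsupported material \"%s\" -- using diffuse approximation",
				m->oname);
		error(WARNING, errmsg);		/* report it, don't abort */
		m = diffuse_approx(m);		/* diffuse-only stand-in */
	}

The essential point is that the render keeps going, and the user is told exactly why the result may differ from expectations.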
Many years ago, while I was still at LBL, I made an aborted start on a forward-tracing tool. I abandoned it when I realized that it was a sizeable effort, and there was no funding forthcoming. It wasn't because computers weren't fast enough or didn't have enough memory, as Carsten supposes -- the problem was and always seems to be insufficient time and/or money. My concept was to create a tool completely separate from the renderer that would supplant mkillum and work in much the same way, producing illum sources that could then be applied by rpict/rview/rtrace. This would ensure that the existing system continued to work as it always had, without any new options or odd behavior. If Roland had implemented the system in the same way, there would be no hesitation from any of us to include it in the next release, because there would be no risk to existing programs. Users could apply it or not, and they wouldn't have to decide at compile time whether it was useful or not.
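For illustration, the usage I envisioned would have looked something like this (the tool name "mkphotons" is invented here; mkillum's interface was the model):

	oconv room.rad > room.oct
	mkphotons [options] room.oct < windows.rad > windows_illum.rad
	oconv room.rad windows_illum.rad > final.oct
	rpict [view options] final.oct > room.hdr

The forward pass writes out ordinary illum sources, and everything downstream of it is the stock Radiance pipeline, untouched.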
A lot of argumentation has already been laid out in the preceding thread, so I don't need to repeat it. I don't see any technical reason to exclude the tool, especially as it only interferes if switched on deliberately.
But apart from the technical facts, there are other things in life; e.g., there's politics. So far, Radiance is connected to the name of Greg Ward (and LBNL), and I understand that there is a high concern that the program maintain its high quality standard and be kept free of wacky gimmicks. But looking at the complexity of the matter, I doubt that one person alone, no matter how fit he is, will be able to administer and further develop the software. So the real question is: is there a will to continue the development as a team effort, inevitably including code written by someone else? What's the alternative? Don't move on, become out-of-date, and finally be forgotten? Not a real alternative, I think...
I couldn't agree more. I don't have time to advance Radiance as I would like to, and no one is paying me to do so, either. I have a family to support and bills to pay, and less leisure time than I would like. LBNL has done its best to support the system and continue development through lean times, but they released it as OpenSource with the express hope that others would take responsibility for its further development. Volunteers, welcome! Trust, however, is still an issue. I have difficulty trusting code I haven't used myself, and the same difficulty recommending it to others when asked. Likewise, LBL has asked me to make sure that all official development efforts maintain the basic program validity, so that they feel safe continuing to have their name and reputation tied to it. Given the circumstances surrounding Roland's photon mapping addition, I have every reason to believe that it performs well and is a useful and valuable addition. However, that isn't the same as saying that I've tried it and it's as solid as the rest of the system.
I am not a dictator, even if I sometimes sound like one. I'm more like a custodian. Radiance is a useful tool despite its age, and I rely on it myself for the majority of my projects. For selfish reasons, I wish to keep it stable, and I'm loath to trust additions I haven't made myself or whose sanity I haven't assured myself of. In the process of keeping it solid for my own purposes, I end up keeping it solid for other users as well. Years of reading Schorsch on this list, hearing his suggestions, and seeing his code modifications have brought me to a point of trust, where I can well believe that the changes he makes are not going to screw me up, or screw anyone else up. I trust him to watch over the code while I'm away, so to speak.
I don't think I'm alone with the trust issue. I know a number of groups who are using Radiance in their work but still have not upgraded to 3.5 out of concern for its reliability. They wait until they have some compelling reason to upgrade, and given that new Radiance releases generally offer only modest improvements over previous ones, I can't say I blame them for hanging back. On the other hand, maybe a cool new addition is just what's needed to encourage people to upgrade. Who knows?
Why don't we take a survey? Anyone who has read this far has to be interested in these issues, so let's hear from you.
Q1: Have you tried the photon map addition? Why or why not?
Q2: Would you try the photon map if it were added to the HEAD release?
Q3: If the photon map were added to the next official release, would it:
A) Encourage you to upgrade?
B) Discourage you from upgrading?
C) Make you want to wait and see what others thought of it, first?
D) Make no difference to your decision?
Q4: What features would you really like to see added to Radiance?
Q5: What is your main complaint about Radiance from a user's standpoint?
Q6: What do you think of the current development arrangements?
Q6a: Would you like to see more people involved in Radiance development?
Q6b: Would you like to participate in development yourself?
If others agree, I'm happy to accept Roland's code into the HEAD release as a conditional compile. This seems reasonable enough to me; that way, people who want to try it can just flip a switch and give it a go, and those who have no use for it can leave it out. We should learn quickly enough what the problems are that way. I'd like to make a goal of switching on the flag by default in the next official release if we go this route, assuming there are no unresolved problems.
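For those unfamiliar with the term, a conditional compile is just the usual preprocessor guard. The macro and names below are invented for illustration, and where the hooks would actually go is Roland's business:

	#ifdef PMAP				/* compile with -DPMAP to enable */
		if (do_photons)			/* hypothetical run-time switch */
			return photon_value(r);	/* hypothetical pmap entry point */
	#endif

Anyone who doesn't add -DPMAP to their compiler flags gets exactly the renderer they have today.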
As I said, I would have preferred this as a completely separate tool rather than something built into the renderer. The renderer is complicated enough as it is, and it's quite a challenge to add to it without bending or breaking some other piece -- this is the crux of my hesitation. As an experiment, however, I see no reason not to proceed. We could add it as a CVS branch initially if we want to be paranoid about it, but I'm happy to defer to others on that decision.
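If we do go the branch route, it's cheap enough in CVS (commands from memory; the branch name is invented and the module name is a guess):

	cvs rtag -b pmap-branch ray
	cvs checkout -r pmap-branch ray

That would let Roland and any early adopters hammer on it without any possibility of disturbing HEAD, at the cost of the usual merge headache later.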