Hi Peter,
Your reply includes some critical remarks on the subject.
I will try to answer them from my point of view, within the context of
animation (please remember that my project would have taken about 2250
hours (14 weeks) on four 1000 MHz CPUs when rendered as an animation, and
would still have had significant artifacts).
Will it work generally? With specular/reflecting surfaces and shadows?
Transparent surfaces (like glass) seem to be yet another problem for me...
What I will try is using a Radiance picture rendered without radiosity and
then changing the value of each pixel a little, depending on the orientation
of the surface normal at that pixel. When the normal points upward, I make
the pixel just a little brighter; when it faces downward, the pixel becomes
a little darker, and so on.
This is needed for my application, since a non-radiosity interior image
looks very flat (a cube in such a case is hardly recognisable as a cube).
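The idea above can be sketched in a few lines (a minimal Python/NumPy
sketch, not part of Radiance; it assumes a per-pixel normal map is
available, and the function name and strength parameter are my own
illustration):

```python
import numpy as np

def shade_by_normal(image, normals, strength=0.15):
    """Brighten pixels whose normals point up, darken those pointing down.

    image    -- float array (H, W, 3), values in [0, 1]
    normals  -- float array (H, W, 3), unit surface normals per pixel
    strength -- maximum relative brightness change (assumed parameter)
    """
    up = normals[..., 2]                 # cosine with the "up" axis (here +Z)
    factor = 1.0 + strength * up         # up -> brighter, down -> darker
    return np.clip(image * factor[..., None], 0.0, 1.0)
```

A flat-lit cube processed this way gets a brighter top face and darker
bottom face, which is exactly the depth cue the non-radiosity image lacks.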
My approach is a little like "vloeken in de kerk" (Dutch: swearing in
church, i.e. doing something sacrilegious): I am aware of that. Believe me,
I hardly dared to bring the topic into this discussion (...).
We want to use animation in combination with "real" Radiance images in the
editing of a project (so we don't use bad words all the time :).
The animation is meant to provide some insight into the spatial behaviour
of a building.
When we use other renderers besides Radiance, however, their images don't
look very good (they are _different_). To mention: luminaires, sky
definition, the appearance of 'white' surfaces, the overall feeling of
light (even with ab=0 I feel light in the scene; I never felt light in
Lightscape, but OK, that was about 8 years ago).
Will 2D processing be validated to have the same trust as Radiance now
has?
If I program this 2D processing: certainly not!
That would be far beyond my skills.
I will give it a try; when it does what I imagine, I am satisfied.
If anybody can do it better: please!!!
Maybe you're on the way towards Wavefront, RenderMan or whatever
there is for professional animations, without actually getting their
speed or quality, but losing a lot of Radiance.
I don't want to use different rendering tools within one project. As I said:
we will use Radiance anyway.
Other renderers require other ways of getting the data right. This means:
much extra work within the same deadlines.
There is something else. I have worked with a lot of renderers (except
RenderMan), and what I like about Radiance is that we are able to write
scripts for complex situations. For example, we did a project with 4
variations on 4 building types on 2 different location types.
We rendered two views of every possible combination and put these images
in a simple user interface. The scripting took 2 hours, the renderings ran
overnight. Is there any other renderer that allows me to do this? I really
rely on Radiance in such cases.
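For illustration, that kind of combinatorial scripting is only a few lines
(a sketch only: the scene and view file names are hypothetical, and the
rpict options would of course depend on the actual project):

```python
# Enumerate all variation/building/site/view combinations, as in the
# 4 x 4 x 2 project with two views described above. Names are illustrative.
from itertools import product

variations = [f"var{i}" for i in range(1, 5)]
buildings = [f"bldg{i}" for i in range(1, 5)]
sites = ["site_a", "site_b"]
views = ["vw1", "vw2"]

def render_commands():
    """Return one rpict command line per image (64 in total)."""
    cmds = []
    for var, bldg, site, view in product(variations, buildings, sites, views):
        scene = f"{var}_{bldg}_{site}.oct"        # hypothetical octree name
        out = f"{var}_{bldg}_{site}_{view}.hdr"   # hypothetical output name
        cmds.append(f"rpict -vf {view}.vf {scene} > {out}")
    return cmds
```

The 64 generated command lines can then be run overnight, sequentially or
spread over several machines.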
IMHO, speedups for animations in Radiance will be a side effect when/if
the core rendering is enhanced (photon map, direct caching maybe) and
validated. Meanwhile, Greg's recommendation of using brute force and
some more CPUs sounds right to me.
To me also. Maybe I am wrong and the whole idea will not work.
I will be happy to show you some images after I have finished my homework;
I am very curious what the results are. My opinion is very humble in this...
There is yet another application I am thinking about: when I know which
object/modifier is represented by a pixel, I can easily put each object's
pixels in a different layer (as in Photoshop) and enhance them separately.
This is very useful in video, for example to dim oversaturated colours a
bit.
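Assuming a per-pixel object-ID map can be produced (for example by tracing
the same view with rtrace, which can report the modifier hit by each ray),
the layer separation itself is straightforward; a sketch with illustrative
names:

```python
import numpy as np

def split_into_layers(image, object_ids):
    """Split an image into one RGBA layer per object, Photoshop-style.

    image      -- float array (H, W, 3)
    object_ids -- integer array (H, W), one object/modifier ID per pixel
    Returns {id: (H, W, 4) array} with alpha 1 where that object is visible.
    """
    layers = {}
    for oid in np.unique(object_ids):
        mask = object_ids == oid
        layer = np.zeros(image.shape[:2] + (4,))
        layer[mask, :3] = image[mask]   # copy the object's pixels
        layer[mask, 3] = 1.0            # opaque where the object is
        layers[int(oid)] = layer
    return layers
```

Each layer can then be adjusted independently (for example desaturated) and
composited back together using the alpha channels.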
Or you may try to render texture maps through Radiance, which are then
glued onto surfaces and final frame rendering is done by other rendering
engines
Why not Radiance in this case?
(which may have more inter-frame coherence too). At least the
interface would be well defined.
Last but not least: is there a way to do this? I read some discussions
about this topic, and it seems to me that it does not really work (?). Or
am I wrong here?
Just to mention something similar within this context: we work on 360°
panoramic views (2048x2048 pixels, rendering times about 10 hours for one
oversampled image, but that doesn't hurt for a single image), which we map
onto a slowly rotating cylinder in OpenGL, and then we render these frames
to file. The results are quite good (and fast and reliable/trustworthy).
The only bad thing is that I am not a trained programmer, and the
application crashes as soon as I write an image to a file... So I've seen
the results only in the application, not on video yet.
Time will fix this sooner or later.
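For what it is worth, writing a frame to disk need not be complicated; this
sketch (assuming the raw pixels arrive as 8-bit RGB bytes, bottom row
first, as glReadPixels returns them) writes them as a binary PPM:

```python
def write_ppm(path, width, height, pixels, flip=True):
    """Write raw 8-bit RGB bytes (e.g. from glReadPixels) as a binary PPM.

    pixels -- bytes of length width*height*3, bottom row first
              (OpenGL convention); flip=True reorders rows top-first.
    """
    rows = [pixels[y * width * 3:(y + 1) * width * 3] for y in range(height)]
    if flip:
        rows.reverse()  # PPM stores the top row first
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        for row in rows:
            f.write(row)
```

PPM frames can then be converted or assembled into video with standard
tools, so the crash-prone image-writing code in the application itself
could be replaced by something this small.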
Your mileage may vary.
I can't translate this sentence; "mileage" is not in my dictionary (?)
Regards (nice discussion),
Iebele