animation

Hi all,

I am rendering an animation of an interior scene. At the moment one
frame takes about half an hour to render (*), with fine results per
frame. When I view the frames as an animation, however, the artifacts are
"unstable". It looks like Radiance renders shadows,
reflections (specularity) and indirect illumination in a random way.
Sometimes objects also get different color intensities in the rendered
image, as if they were lit in a (radically) different way.
I have rendered a lot of still images in Radiance and never noticed that
two renderings with exactly the same geometry, render options, viewport
etc. result in slightly different images (!).
How can I solve this? Does anybody know how to render without these
"moving artifacts"? Is there a random thing in Radiance that we can turn
off?

Regards,

Iebele

(*) Below are the settings I use for the rendering. I know that I can
increase the quality using different settings, but that is not the point
here. My point is that the overall solution has too many unpredictable
differences between rendered images.

The settings are (as they appear in my ran (ranimate) file):
DIRECTORY = /disk11/ani01
OCTREE = glow.oct
VIEWFILE = ani01.vp
START = 1
END = 448
BASENAME = /disk11/ani01/ani01%03d
DISKSPACE = 250
OVERSAMPLE = 4
INTERPOLATE = 0
RESOLUTION = 720 576
render = -dp 1024 -ab 1 -ad 1024 -ar 2048 -as 512 -aa 0.2 -lr 1 -st 0.01
-ps 8 -av 20 20 20
pfilt = -r .7

atelier iebele abel wrote:

> How can I solve this? Does anybody know how to render without these
> "moving artifacts"? Is there a random thing in Radiance that we can
> turn off?

Stochastic calculations are used for MC integration in the specular and
ambient calculations, and for efficiency reasons in the direct and
secondary light source calculations. Your vastly different images sound
like a too-optimistic approximation in the direct calcs (-dt, -dc
options; how many light sources are there?), but most animations suffer
from varying ambient calculations. If it's only a walk-through animation
with a fixed scene, it helps to pre-run along the path, doing every n-th
frame (n=2..10) at a small resolution, to fill an ambient file.
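
A rough, untested sketch of what such a pre-run could look like, based on
the settings quoted in the first message (the ambient file name ani01.amb,
the step of 5 frames, the low resolution and the reduced option set are
all assumptions for illustration):

    # pre-fill a shared ambient cache by rendering every 5th frame at low
    # resolution and discarding the pictures; ani01.vp is assumed to hold
    # one view per line (one per frame), as ranimate expects
    for f in `seq 1 5 448`; do
        sed -n "${f}p" ani01.vp > /tmp/pre.vp
        rpict -vf /tmp/pre.vp -x 180 -y 144 \
              -ab 1 -ad 1024 -ar 2048 -as 512 -aa 0.2 -av 20 20 20 \
              -af ani01.amb glow.oct > /dev/null
    done

The subsequent full-resolution frames then reuse the cached values through
the same -af file.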

Very probably rendering times will go up for the artifacts to go down.
You may want to try the pinterp program to interpolate between frames,
although it uses more disk space and the controlling script/program gets
more complex (especially when using multiple machines to render on).
Interpolation also breaks down with reflecting/refracting surfaces, but
otherwise it speeds up animation quite a bit.
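
For what it's worth, a minimal sketch of a manual pinterp call; the file
names, the view file and the assumption that frames 10 and 14 were rendered
with rpict -z to produce z-buffers are all mine (within ranimate, the
INTERPOLATE setting drives pinterp for you):

    pinterp -vf frame12.vf -x 720 -y 576 \
            frame10.pic frame10.zbf frame14.pic frame14.zbf > frame12.pic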

-Peter


--
pab-opto, Freiburg, Germany, www.pab-opto.de

Hi Iebele,

I concur with Georg that you should use the -af option of rpict to save your ambient values, both to accelerate your rendering and to reduce differences between adjacent frames. The -ps 8 option you specify is also going to aggravate sampling errors in your image. I also think you should use a larger value for -lr -- 1 is awfully limiting.

If you want to minimize sampling differences, you can explicitly turn off some of the randomness in Radiance with:
  -dj 0 -sj 0
You will lose penumbras and soft specular reflections in the process, unfortunately.

I agree with Peter that pinterp will speed things up considerably, along with the -af option. (INTERPOLATE=4 is a good starting value).
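
Purely as an illustration, the control file from the first message might
then contain something along these lines; the ambient file name and the
exact values here are guesses on my part, not recommendations:

    INTERPOLATE = 4
    render = -dp 1024 -ab 1 -ad 1024 -ar 2048 -as 512 -aa 0.2 -lr 6 -st 0.01 -ps 1 -av 20 20 20 -af ani01.amb -dj 0 -sj 0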

-Greg


Thanks Greg and Peter for your replies,

I tried your suggestions, though some had already been taken into account. The
results are better, but not good enough to solve my problem (to put it in
other words: it does not fit my application yet). Rendering times are also
very long now.
That is why I would like to ask you something that has been on my mind for over a year.

The general purpose I use Radiance for is the presentation and design of
architecture and public exterior lighting. There is no other rendering tool
that offers me the flexibility and kind of images that Radiance produces. What
I also need from Radiance is the photometric correctness of the images, but I
always use the images in a sequence, comparing one image with another, to
discuss design issues. That means that the numerical representation of
lighting is less important to me than the 'look'. This allows me to work on the
images with image-processing tools; that actually destroys some of the
Radiance data, but makes the design easier to communicate (for my purposes).

Back to the animation issue. It takes too much time to render animations for
our application, because we use very large geometry, we use a lot of light
sources, we have to render several minutes of footage, we only (?) have 4 CPUs
available on Linux, and we have to produce very good-looking images for our
customers (they expect images like they have seen on TV/DVD: sharp, sharper,
sharpest).
To survive the competition using Radiance (we use Radiance daily, so we
really don't want to work with something else) we need another approach, and
we are willing to spend our time to get it to work.

Taking the above into account, my partner and I are thinking about an image
format like the Radiance .pic that is rendered without ambient bounces (-ab
0). In this (or an additional) image format we would also like to have the
following data per pixel (OK, these images take lots of disk space...):

   * name of the modifier for the object this pixel represents (as a string)
   * normal orientation of this object (float float float)
   * position of the pixel in XYZ space (float float float)

With this information per pixel (maybe I will need more info in the future) we
think we can produce an automated process for enhancing Radiance pictures that
are rendered without radiosity. This process is 2D image processing, using
some 3D data that is 'stored' with each pixel.
This kind of image data also allows us to separate objects in one image and
process them individually in 2D.

We think we can speed up animation times using a combination of 3D and 2D
(rendering my scene with -ab 0 takes about 9 minutes at 4 times PAL resolution,
against 3 hours with a nice radiosity as you suggested, and probably far more
to achieve the results we really want; the 2D/3D image processing will take
about 5 minutes or less per frame).

I already looked at rtrace, which seems to produce the kind of information I
need. A few problems arose when I looked at rtrace:

1. I don't know where to start.
2. I think I prefer changing code in rpict (to use the -S option and -vf
option).
3. I think I prefer changing code in rpict.c (to have an all-in-one executable,
instead of using scripts).

As you will understand, some hints are very much appreciated.
I am thinking of these kinds of hints:
- Where in rpict.c are pixels written to a file (or stdout)?
- Is it possible at that point to write the mentioned data to stdout as well?
- What are the variables/pointers that hold these data?

The 2D processing of the resulting files will be my work (although I don't
know what kind of project I am starting here), and I would like to share the
results with the 'community' when it finally works as we want.

I hope my questions are not too difficult or time-consuming to answer. I would
be really _happy_ to experiment with this. I think it will result in a really
nice tool for video production environments (again, we will work on these
tools, I only need a good starting point for them).

OK, this is getting to be a long email; I hope you agree with me that the above
is worth trying, although it might have a very experimental character in the
beginning.

Iebele

Hi Iebele !

I dare to join the discussion, as the general topic is of some
interest for me, too. I myself thought/still think (and did some experiments
as well) about "tricks" of whatever kind to produce good-looking images
in less time than is currently needed. (Despite the benefits of physical
accuracy, the look of an image is very important for me, too.)

By the way, if you use lots of light sources, you may want to check out that
direct cache stuff (the missing patch is now included in the
distribution as well, so installation should work too; sorry for
forgetting it the first time). As long as you need some ambient
calculation, the time saving is quite significant, maybe not as much as you
want, but making things 4 times faster is better than nothing.

Now back to your ideas: I thought of 2D post-processing, too, for example
some sort of "shadow softener" for images traced without ambient light;
with the help of your idea of 3D-data integration, this might become a
realistic thing. There are some ways of tricking in 3D as well. POV-Ray,
for example, allows you to assign ambient illumination to different objects
directly, and it would be no problem to integrate this into RADIANCE, too. If
you're interested, I can dig out the stuff and send it to you. (Don't
tell Greg about all this ... :-) ). Another way is to simulate ambient
light effects with some weak, additional, invisible direct sources
suitably placed, as it often suffices to lighten the dark shadows which
appear in standard raytracing to mimic a realistic look. This method
has the further advantage that it is accessible to "feeling" rather than
the dry topic of parameters.

Last but not least, some of the hints you've asked for:

Getting the mentioned data out of the machine is certainly the easiest
part of the path you're setting out to tread:

Color output is done in the 'render()' routine within rpict.c, with the help
of the 'pixvalue()' routine (also in rpict.c) giving that color; the rest is not
of interest for this case. Rather than writing data to stdout, it would
be easier to write them directly to a file, as rpict uses stdout for
messages. The data you want to extract are contained in the ray
structure (check ray.h), with the exception of the object name (a number is
used instead, so a separate mapping to the name will be needed), so
extracting them should be fairly straightforward.

Maybe this suffices as a starting point ... off you go, and good luck, of
course. And please tell me if it works.

-- Carsten

From: atelier iebele abel <[email protected]>:

> Back to the animation issue. It takes too much time to render animations for
> our application, because we use very large geometry, we use a lot of light
> sources, we have to render several minutes of footage, we only (?) have 4 CPUs
> available on Linux, and we have to produce very good-looking images for our
> customers ...
> To survive the competition using Radiance ... we need another approach, and
> we are willing to spend our time to get it to work.

I don't know about your country, but in the U.S., computers are a whole lot cheaper
than people, and it might be worthwhile to invest in a stack of Linux boxes to run
your animations -- Charles Ehrlich while he was still at LBNL put a tower together
last year with 9 CPUs and a disk array for less than $15K US.

> Taking the above into account, my partner and I are thinking about an image
> format like the Radiance .pic that is rendered without ambient bounces (-ab 0),
> with the modifier name, surface normal and XYZ position stored per pixel ...
>
> We think we can speed up animation times using a combination of 3D and 2D ...

This reminds me a lot of Ken Perlin's classic 1985 SIGGRAPH paper, where
he uses his noise function with similar pixel information to synthesize images.

> I already looked at rtrace, which seems to produce the kind of information I
> need. ...
> I think I prefer changing code in rpict ...

I strongly suggest you use rtrace, as it is already designed and optimized
for this type of output. When you use the -fff and -omNp options, you will
avoid all recursive ray calls and get a very fast calculation. The rays
for any desired view can be generated with the vwrays command (new
with 3.4). To generate the 10th view in the file "anim.vf", you could use:

sed -n 10p anim.vf > frame10.vf
vwrays -ff -vf frame10.vf -pa 0 -x 3072 -y 2304 \
    | rtrace -x 3072 -y 2304 -fff -omNp anim.oct > frame10.data

See the vwrays man page for more details and examples.
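
As a rough sketch, the same pipeline can also be run small and in ASCII to
eyeball the per-pixel values (vwrays and rtrace default to ASCII when the
-ff/-fff options are omitted, and -h suppresses rtrace's information
header; the reduced resolution is arbitrary):

    vwrays -vf frame10.vf -pa 0 -x 64 -y 48 \
        | rtrace -h -omNp anim.oct | head -20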

-Greg

atelier iebele abel wrote:


> ...
> With this information per pixel (maybe I will need more info in the future)
> we think we can produce an automated process for enhancing Radiance
> pictures that are rendered without radiosity. This process is 2D image
> processing, using some 3D data that is 'stored' with each pixel.
> This kind of image data also allows us to separate objects in one image
> and process them individually in 2D.
> ...

Will it work generally? With specular/reflecting surfaces and shadows?
Will the 2D processing be validated to have the same trust as Radiance now
has? Maybe you're on the way towards Wavefront, RenderMan or whatever else
there is for professional animations, without actually getting their
speed or quality, but losing a lot of Radiance.
IMHO, speedups for animations in Radiance will be a side effect when/if
the core rendering is enhanced (photon map, direct caching maybe) and
validated. Meanwhile, Greg's recommendation of using brute force and
some more CPUs sounds right to me.

Or you may try to render texture maps through Radiance, which are then
glued onto surfaces, and the final frame rendering is done by other rendering
engines (which may have more inter-frame coherence, too). At least the
interface would be well defined. Your mileage may vary.

-Peter


--
pab-opto, Freiburg, Germany, www.pab-opto.de

Hi all !

I think two aspects have to be kept separately in mind concerning the
whole matter, to avoid useless work or disappointment.

The Radiance ambient treatment is somewhat awkward to handle, and I've
noticed that people without a sound scientific background have
difficulties with the overwhelming features and parameters and all the
thinking behind them, which is a pity, as it happens rather often that
designers or architects are not computer specialists or scientists
(which shouldn't be misinterpreted now; I do not mean this as an
offence, it is simply an observation).

On the other hand, Radiance, especially the ambient calculation, simply is a
fantastic thing which allows one to create images of superior quality
which outclass everything else. And I readily agree with the opinion
that 2D post-processing or other tricks and workarounds can never
replace this adequately, and that a more promising way is to optimize the
method within itself. Besides, lighting visualisation is a complex
matter, and it is common sense that there is no easy way of
accomplishing difficult tasks.

Of course, Radiance is aimed very much at realistic image generation,
and it would be nice to have more "artistic freedom" in using it. This
is a totally different matter. Within the first stages of a creative
process, scientific exactness does not count that much, and so all kinds
of tricks are welcome as long as they help to produce an impression,
something which can convey ideas. One may compare this to the deliberate
abstraction and individual styles employed in "old-fashioned" techniques
like oil painting, for example. And opening Radiance up for these kinds of,
maybe "freaky", applications certainly is interesting, as then all
stages of a design process could be covered in a better way than is
possible right now. Of course it is then important to keep track of
what is art, what is trick, what is scientifically exact, etc., so in the
end everything will be even more complex. But complexity makes life
interesting; who wants to be bored?

-- Carsten

As I wrote in my previous message : "I would be really _happy_ to
experiment with this". Now I am _HAPPY_!

Thanks Greg and Carsten for your replies.
This morning the first thing I did was to try Greg's command-line suggestions.
I changed the output to ASCII to explore the results, and this is exactly
what I meant!
I will work on this as much as time permits, and I will share the
results. This might take a while, so let me thank you both very, very much
right now.
Carsten, your suggestions on where to look in rpict (ray.h) are also very
helpful. For the moment I will start with Greg's suggestion. The syntax of
the command line is far less complicated than I expected, so this will work for
now.
I changed Greg's line a little, so the output per pixel now is
modifier-name value normal position:

    vwrays -ff -vf view01.vf -pa 0 -x 1000 -y 1000 | rtrace -x 1000 -y 1000 \
        -fff -omvNp model.oct > test.data

This is where I can start from, I let you know when I have some results (or
questions), but for now there is a lot to experiment with.

Thanks a lot!

Iebele


Hi Peter,

Your reply includes some critical remarks on the subject.
I will try to answer them from my point of view, within the context of
animation (please remember that my project would have taken about 2250
hours (14 weeks) on four 1000 MHz CPUs when rendered as an animation, while
still having significant artifacts).

> Will it work generally? With specular/reflecting surfaces and shadows?

Transparent surfaces (like glass) seem to be yet another problem for me...

What I will try is taking a Radiance picture rendered without radiosity and then
changing the value of each pixel a little, depending on the normal orientation
at that pixel. When the normal points upward, I make the pixel just a little
brighter; facing downward, the pixel becomes a little darker, etc.
This is needed for my application, since a non-radiosity interior image
looks very flat (a cube in such a case is hardly recognisable as a cube).
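
Just to make this concrete, a rough sketch of that adjustment at the data
level, building on the rtrace output discussed earlier (the view file, the
resolution and the strength k=0.15 are arbitrary; turning the scaled values
back into a picture, e.g. with pvalue -r, is left out here):

    vwrays -ff -vf view01.vf -pa 0 -x 720 -y 576 \
        | rtrace -x 720 -y 576 -ffa -ovN model.oct \
        | rcalc -e 'k=0.15' -e 'f=1+k*$6' -e '$1=$1*f;$2=$2*f;$3=$3*f'

Here -ovN gives the -ab 0 radiance in fields 1-3 and the surface normal in
fields 4-6, so $6 is the normal's world Z component.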

My approach is a little like "vloeken in de kerk" (Dutch: swearing in
church): I am aware of that. Believe me, I hardly dared to bring
the topic into this discussion (...).
We want to use the animation in combination with "real" Radiance images in the
editing of a project (so we don't use bad words all the time :-).
The animation is meant to provide a way of getting some insight into the
spatial behaviour of a building.
When we use other renderers besides Radiance, however, the images don't
look very good (they are _different_). To mention a few things: luminaires, sky
definition, the appearance of 'white' surfaces etc., the overall feeling of
light (even with -ab 0 I feel light in the scene; I never felt light in
Lightscape, but OK, that was about 8 years ago).

> Will the 2D processing be validated to have the same trust as Radiance now
> has?

When I program this 2D processing: certainly not!
That would be far beyond my skills.
I will give it a try; when it does what I imagine, I am satisfied.
If anybody can do it better: please!!!

> Maybe you're on the way towards Wavefront, RenderMan or whatever else
> there is for professional animations, without actually getting their
> speed or quality, but losing a lot of Radiance.

I don't want to use different rendering tools within one project. As I said,
we will use Radiance anyway.
Other renderers require other ways of getting the data right. This means
much extra work within the same deadlines.
There is something else. I have worked with a lot of renderers (except RenderMan),
and what I like about Radiance is that we are able to write scripts for
complex situations. For example, we did a project in which there were 4
variations on 4 building types on 2 different location types.
We rendered two views of every possible combination and put these images
in a simple user interface. The scripting took 2 hours, the renderings ran
overnight. Is there any other renderer that allows me to do this? I really rely
on Radiance in such cases.

> IMHO, speedups for animations in Radiance will be a side effect when/if
> the core rendering is enhanced (photon map, direct caching maybe) and
> validated. Meanwhile, Greg's recommendation of using brute force and
> some more CPUs sounds right to me.

To me also. Maybe I am wrong and the whole idea will not work.
I will be happy to show you some images after I have finished my homework :-)
I am very curious what the results are. My opinion is very humble in this...

There is yet another application I am thinking about: when I know
which object/modifier is represented by a pixel, I can easily put each
object's pixels in a different layer (as in Photoshop) and enhance them
separately.
This is very useful in video, for example to dim oversaturated colors a
bit.
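
As a small, made-up illustration of that separation step, assuming ASCII
rtrace output with the modifier name in the first field and RGB in the next
three (the modifier name "chair_mat" is hypothetical):

    vwrays -vf view01.vf -pa 0 -x 720 -y 576 \
        | rtrace -h -omv model.oct \
        | awk '$1 == "chair_mat" { print $2, $3, $4; next } { print 0, 0, 0 }'

Everything not belonging to that modifier goes to black, giving a per-object
'layer' that can be recombined later.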

> Or you may try to render texture maps through Radiance, which are then
> glued onto surfaces, and the final frame rendering is done by other rendering
> engines

Why not Radiance in this case?

> (which may have more inter-frame coherence, too). At least the
> interface would be well defined.

Last but not least: is there a way to do this? I have read some discussions
about this topic, and it seems to me that it does not really work (?). Or am
I wrong here?

Something similar to the above, just to mention it in this context: we work
on 360-degree panoramic views (2048x2048 pixels, rendering times about 10 hours
for one oversampled image, but that doesn't hurt for a single image), which we
map onto a slowly rotating cylinder in OpenGL, and then we render these frames
to file. The results are quite good (and fast and reliable). The
only bad thing is that I am not a trained programmer, and the application
crashes as soon as I write an image to file... So I've seen the results
only in the application, not on video yet.
Time will fix this sooner or later.

> Your mileage may vary.

I can't translate this sentence; 'mileage' is not in my dictionary (?)

Regards (nice discussion),

Iebele

Carsten Bauer wrote:
...

> The Radiance ambient treatment is somewhat awkward to handle, and I've
> noticed that people without a sound scientific background have
> difficulties with the overwhelming features and parameters and all the
> thinking behind them, ...

It's true. All the options are awful. However, no single solution,
both efficient and general, is known that solves the underlying,
fundamental "rendering equation" of light without hand-tweaking the
algorithms by specifying parameters. That's where the technology stands now.
Tweaking the simulation to produce non-realistic images doesn't really help
any designer find a solution that is actually realistic.
...

> Of course, Radiance is aimed very much at realistic image generation,
> and it would be nice to have more "artistic freedom" in using it. This
> is a totally different matter. ...

I'm not quite certain that will become a mainstream direction of Radiance
development, as the physical quality is the prime motivation to use it.

My experiences with Rayshade were excellent: much easier, many more
features (motion blur built in, a more powerful input format, easier
texture mapping). POV-Ray is probably even more powerful and easier to
customize. Both are in the public domain. There are many converters, or
converters could be built, to go from one rendering system to the other.
There's nothing wrong in using another package. There's nothing wrong in
playing with scenes and "artistic freedom" lighting in spaces. There
just aren't convincing reasons why Radiance should leave the path of
strictly physically correct simulation.

-Peter

Carsten Bauer wrote:

> Of course, Radiance is aimed very much at realistic image generation,
> and it would be nice to have more "artistic freedom" in using it. This
> is a totally different matter. ...

Isn't that what they call "non-photorealism" nowadays? :^)
Do we really want to *downgrade* RADIANCE with this?

--Roland


--
"Life is too short for core dumps"

Hi all !

I think the discussion is starting to leave the range this forum was set up
for, but as I don't want to be looked at as the heretic, I simply have
to state something in self-defence :-).
I really didn't mean that Radiance should go for being a renderer of Disney-style
colourful pictures; there are lots of those already available. But on the other
hand, disregarding physical accuracy here and there is no black magic
either. In fact, it is already offered by Radiance, e.g. with the "glow"
primitive and its maximum radius for shadow testing, which helps very
much in certain cases. Almost every task has small details where the
means to create an effect within a picture / a simulation are not 1:1
copies of the ones applied in the real installation within a building,
and this doesn't have to mean a loss of realistic thinking; it is simply
a question of efficiency. And having more freedom to do so would be no
"downgrading" either.
It might of course be confusing if too many different approaches to an
issue appear within the same tool. In this respect I understand the hint
of relegating "artistic" tasks to other programs. But this is often
horribly inefficient; more time is spent converting or setting up a
scene a second time in another format, so I personally prefer to
do as much as possible with one tool (cf. Iebele's comment).

And then there is of course the aesthetic dimension. For the reception
of a picture, the question of accuracy is not the only criterion. And
the demands of the public are paradoxical. On one hand, they want sound,
efficient and realisable solutions, but at the same time they want to be
"entertained", want to see something fantastic, impressive. Maybe trying
to bring these together makes up the main part of the art of
simulation. It is a difficult task, so why restrain oneself by locking
away some sections of the toolbox?

In the end, everything is of course a question of individual preference, so
now I will be quiet. Really. :-)

-Carsten

Carsten Bauer wrote:

> so now I will be quiet. Really. :-)

I think I will be quiet for a short while too. We are now working on the subject,
and I really hope we get some usable results within a reasonable time. I am
very pleased with the hints and help, and the implicit discussion is of
great value too (!), although it indeed might 'leave the range this forum was
set up for'.

See you later!

Iebele