physically-based landscapes

I have to create some renderings of a space, which offers panoramic views of a distant mountain range. We're doing some glare studies, so the luminance of the mountain is important to me.

I could model the mountains, but because they are so far from my model I'll get ambient leaks, right? Is there some way to exclude the distant geometry from adding values to the ambient cache? I'm interested in doing some daylight animations, so if I could do it with representative 3D geometry (vs. using a texture map) I'd be ever so happy.

Failing that, is there a way to apply a picture of the mountains to a local object? My concern there is that if I have a large object close to the window it will reflect more light into the space than what would actually happen, with the mountain a mile away.

Any pointers appreciated.

P.S. If I had a lightmap of this site, could I use it to do an image-based rendering? All the image-based renderings I've seen are of objects being directly illuminated by the lightmap. But if I put a local ground plane and my building into the center of a lightmap, would the interreflection be computed, and the interior of the space be rendered properly? Would it be "accurate"?

···

----

      Rob Guglielmetti

e. rpg@rumblestrip.org
w. www.rumblestrip.org

Hi Rob,

is there snow on the mountain top? Can you go skiing over there?

Excluding objects from the ambient calculation is done by setting the -ae
parameter to the object's material name (-ae mat_name or -aE
file_with_matnames). So excluding the mountains is no problem; the drawback
is that Radiance determines the resolution of its ambient cache (-ar
setting) based on the scene bounding cube. So a HUUUGE bounding cube
results in values spaced so far apart that probably only one falls into your
room. (Unless you set -ar extremely high; if everything outside is
excluded, this might even work ...)
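
For example, such a command line might look roughly like this (the material name, file names and rendering parameters here are only placeholders):

rpict -vf room.vf -ab 2 -av .1 .1 .1 -ae mountain_mat scene.oct > room.pic

or, with a file listing several modifiers to exclude:

rpict -vf room.vf -ab 2 -av .1 .1 .1 -aE outside_mats.txt scene.oct > room.pic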

But why not tread an intermediate path: set up a 3D landscape model and render
some pictures with it from your viewing point. They contain the exact
radiance values, so in a second step you can map them with colorpict onto a
plane made of glow with its fourth param set to zero, and put this plane 100 or
200 m away from your scene. This keeps the enlargement of the scene
reasonable and still delivers correct results. Do some checking that the
settings are right, i.e. compare the landscape rendering with your fake
landscape rendering; one never knows ...
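
A rough sketch of what this could look like, with made-up names and dimensions (the landscape picture is rendered from the window's position, with the view angles chosen so it covers the area the backdrop subtends; the little backdrop.cal below is hypothetical and simply maps world position on that plane to picture coordinates):

{ backdrop.cal - hypothetical helper for a vertical backdrop at x = 150,
  spanning y = -100..100 and z = 0..100 }
bu = (Py + 100) / 200;
bv = Pz / 100;

void colorpict land_pat
7 red green blue landscape.pic backdrop.cal bu bv
0
0

land_pat glow land_glow
0
0
4 1 1 1 0

land_glow polygon backdrop
0
0
12
    150 -100   0
    150  100   0
    150  100 100
    150 -100 100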

-cb

Hi Rob,
just one further remark: self-evidently, you have to be sure that the part of
the hemisphere visible from the window of your scene is completely covered by
the image/light maps. So rendering a fisheye view of the outside landscape
and mapping this onto a sphere is the preferable way. Look through the lib
directory; I think there (or in one of the contributions available on the
radsite) is a .cal file included which does this spherical mapping.

-cb

Hi Rob,

You can certainly exclude any geometry you like from the ambient calculation using the -ae option. Each -ae value adds a material to exclude from ambient calculations. Named materials will get the -av value rather than incurring any new interreflection calculations. This means that the mountains will have a rather flat appearance, and the shadows will be too dark if you don't choose a reasonable outdoor value for -av, which in turn could be too bright for your interior. (It's a problem.)

Another option is to capture (using HDR photography) or render the scenery using a fisheye lens in a separate step, then apply the results to the window as a luminance distribution using a fisheye perspective mapping. The Radiance ray/lib/fisheye.cal does a lookup on a 180 degree angular fisheye image (-vta -vh 180 -vv 180). I recommend using two pictures -- a high-resolution for the view out the window and a low-resolution for the light distribution. You can compute the low from the high using pfilt with the -1 option:

pfilt -1 -x 32 -y 32 -r 1 window_hires.pic > window_lores.pic

The scene specification for a west-facing window might look like so:

void colorpict window_pict_hires
9 red green blue window_hires.pic fisheye.cal fish_u fish_v -rz 90
0
0

window_pict_hires glow window_glow
0
0
4 1 1 1 0

void colorpict window_pict_lores
9 red green blue window_lores.pic fisheye.cal fish_u fish_v -rz 90
0
0

window_pict_lores illum window_illum
1 window_glow
0
3 1 1 1

window_illum polygon window
0
0
12
  window_with_inward_normal

···

------
I haven't tried this, so I can't say for sure if it will work. I started this e-mail before I saw Carsten's, but didn't get a chance to finish it until today.

-Greg


Hi Rob,

is there snow on the mountain top? Can you go skiing over there?

Yes and no. There is snow on the peak, but I cannot go skiing over there, because if you strapped a pair of skis to my feet and pushed me down a snow covered hill, I'd turn into a dangerous projectile for about five seconds before crashing. It's definitely not what I'd call skiing. =8-)

Excluding objects from the ambient calculation is done by setting the -ae
parameter to the object's material name (-ae mat_name or -aE
file_with_matnames). So excluding the mountains is no problem; the drawback
is that Radiance determines the resolution of its ambient cache (-ar
setting) based on the scene bounding cube. So a HUUUGE bounding cube
results in values spaced so far apart that probably only one falls into your
room. (Unless you set -ar extremely high; if everything outside is
excluded, this might even work ...)

See my response to Greg about this one...

But why not tread an intermediate path: set up a 3D landscape model and render
some pictures with it from your viewing point. They contain the exact
radiance values, so in a second step you can map them with colorpict onto a
plane made of glow with its fourth param set to zero, and put this plane 100 or
200 m away from your scene. This keeps the enlargement of the scene
reasonable and still delivers correct results. Do some checking that the
settings are right, i.e. compare the landscape rendering with your fake
landscape rendering; one never knows ...

OK, so this is basically using Radiance to create a "lightmap", instead of using HDR, yes? I assume I could further increase the accuracy by using your method described here, but then also using mkillum to create illum sources out of the window panes?

Am I making any sense at all??!!
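
(For reference, the basic mkillum workflow might look roughly like this; the file names and ambient parameters are placeholders, and the illum output then replaces the original window panes in the final octree:)

oconv site.rad building.rad windows.rad > outside.oct
mkillum -ab 1 -ad 1024 -as 256 outside.oct < windows.rad > window_illums.rad
oconv site.rad building.rad window_illums.rad room.rad > final.oct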

Rob Guglielmetti
rpg@rumblestrip.org
www.rumblestrip.org

···

On Saturday, May 31, 2003, at 05:45 AM, Carsten Bauer wrote:

Hi Rob,

You can certainly exclude any geometry you like from the ambient calculation using the -ae option. Each -ae value adds a material to exclude from ambient calculations. Named materials will get the -av value rather than incurring any new interreflection calculations. This means that the mountains will have a rather flat appearance, and the shadows will be too dark if you don't choose a reasonable outdoor value for -av, which in turn could be too bright for your interior. (It's a problem.)

I knew about the -ae trick. I just thought that since the exterior would still have values, that they could get used inside. Now I realize that the direct calculation is one thing, *computed* ambient values are another, and the *approximated* ambient values (by way of the -av parameter) are another, and they all are separate. Only computed ambient values live in the ambient cache, and only computed ambient values can be re-used elsewhere. Yet another concept that seems obvious now, but didn't an hour ago. OK, so the easy cheat is to exclude all the exterior objects from the ambient calculation, and live with dark shadows on the mountain. But Carsten says that the -ar is based on the scene bounding cube, so even if I exclude the exterior values I need to crank it up, yes?

Another option is to capture (using HDR photography) or render the scenery using a fisheye lens in a separate step, then apply the results to the window as a luminance distribution using a fisheye perspective mapping. The Radiance ray/lib/fisheye.cal does a lookup on a 180 degree angular fisheye image (-vta -vh 180 -vv 180). I recommend using two pictures -- a high-resolution for the view out the window and a low-resolution for the light distribution. You can compute the low from the high using pfilt with the -1 option:

I just wanna make sure I understand this. This is the correct way to achieve what I asked at the end of my email, yes? A method for using HDR lightmaps to illuminate the scene and have a pleasant (and photometrically accurate) view out the window? This would be, in a word, cool.

As I have never used colorpict and have limited experience with illum, I just wanna make sure I get this:
1. I take a hemispherical HDR image of the site.
2. Colorpict and the fisheye.cal file take the high-res picture, apply it to a plane, and rotate it into the proper orientation. Can you explain the fish_u fish_v parameters?
3. An illum source is created from the low res version of the pic, mapped to the same polygon? Why do you use a low res image for the illum?

The illum's luminous distribution function is the result of applying the lightmap to the window pane, just the same as if I were to use gensky? The colorpict is purely for the view out, it does not contribute to the illuminance of the interior space?

Seems like a lot of work, 'specially for this brain, but it could be worth it. In the short term, I think I need to try one of the other tacks.

Rob Guglielmetti
rpg@rumblestrip.org
www.rumblestrip.org

···

On Sunday, June 1, 2003, at 02:59 PM, Greg Ward wrote:

Dear Radiance users

Continuing the discussion on the use of global illumination, I'd like to
contribute three points to the discussion.

1) Orientation of the HDRI background image on the global environment cube.

2) Lighting ratios for correct exposure when using other light sources in
conjunction with a global illumination background.

3) Excluding materials and geometry with -ae argument in relation to -ar
argument and determination of scene bounding cube values.

···

-----------------------------------------------------------------------

1) When I was first experimenting with a global illumination background,
the background was rendered rotated through 90 degrees. This issue was
mentioned by another user on this thread. I corrected this problem by
simply transposing two values in the CAL file angmap.cal. Different CAD
software sometimes re-orients the CAD model through 90 degrees, so
experimentation may be necessary. Here is the first
version by Paul E. Debevec:

####################################

{
angmap.cal

Convert from directions in the world to coordinates on the angular sphere
image

-z is forward (outer edge of sphere)
+z is backward (center of sphere)
+y is up (toward top of sphere)
}

sb_u = 0.5 + DDx * r;
sb_v = 0.5 + DDy * r;

r = 0.159154943*acos(DDz)/sqrt(DDx*DDx + DDy*DDy);

DDy = Py*norm;
DDx = Px*norm;
DDz = Pz*norm;

norm = 1/sqrt(Py*Py + Px*Px + Pz*Pz);

####################################

Here is the corrected file for use with a cube with inward facing normals
created in CAD software:

####################################

{
angmap2.cal

Convert from directions in the world to coordinates on the angular sphere
image

-z is forward (outer edge of sphere)
+z is backward (center of sphere)
+y is up (toward top of sphere)

DDy and DDz were transposed for correct alignment.
}

sb_u = 0.5 + DDx * r;
sb_v = 0.5 + DDy * r;

r = 0.159154943*acos(DDz)/sqrt(DDx*DDx + DDy*DDy);

DDz = Py*norm;
DDx = Px*norm;
DDy = Pz*norm;

norm = 1/sqrt(Py*Py + Px*Px + Pz*Pz);

####################################

-----------------------------------------------------------------------

2) When using other light sources in conjunction with a global illumination
background, it is necessary to consider the lighting ratios between
different light sources and the global illumination background. Radiance is
so similar to real world lighting that a Radiance user should try and think
like a photographer balancing lighting ratios on a set, for example.

When I first experimented with global illumination backgrounds, I found
that the HDRI image on the global environment cube was
emitting light several thousand times more intense than spotlights in the
scene, so when PFILT exposed for the background, illumination from the
spotlights disappeared. Either the spotlight intensity could be increased
to an appropriate ratio, or in my case, I re-exposed the HDRI background
with PFILT, reducing its intensity by several thousand times before using
it as a global illumination background. Experimentation is often necessary.

-----------------------------------------------------------------------

3) Please can the issue be clarified concerning excluding materials and
geometry with -ae argument in relation to -ar argument, ambient resolution
and determination of scene bounding cube values.

For example, if a small object is surrounded by a huge cube being used as
the global illumination background, it would be desirable for the ambient
resolution to be concerned only with the small object, otherwise a huge -ar
value would be required, adversely influencing rendering time. If the -ae
argument is used to exclude a material and thus the associated geometry
(preferably of the global illumination cube), does the ambient resolution
determined by -ar now only consider the small object and divide the ambient
resolution across this small object, or does it still look at the bounding
cube for the whole scene, including the global illumination cube?

-----------------------------------------------------------------------

Users interested in viewing a high quality render using global illumination
may be interested in viewing a current project (under construction) at the
following Internet address:

www.daiservices.btinternet.co.uk/Virtual_Sculpture/Virtual_Sculpture.htm

John Graham
DAI Services

From: Rob Guglielmetti <rpg@rumblestrip.org>

I knew about the -ae trick. I just thought that since the exterior would still have values, that they could get used inside. Now I realize that the direct calculation is one thing, *computed* ambient values are another, and the *approximated* ambient values (by way of the -av parameter) are another, and they all are separate. Only computed ambient values live in the ambient cache, and only computed ambient values can be re-used elsewhere. Yet another concept that seems obvious now, but didn't an hour ago. OK, so the easy cheat is to exclude all the exterior objects from the ambient calculation, and live with dark shadows on the mountain. But Carsten says that the -ar is based on the scene bounding cube, so even if I exclude the exterior values I need to crank it up, yes?

Yes, though you could set -ar 0 and you might get around this problem. The disadvantage is that the calculation can go a bit nuts in the little corners, but it's only a problem on high-resolution renderings. Rtrace shouldn't be much affected.

I just wanna make sure I understand this. This is the correct way to achieve what I asked at the end of my email, yes? A method for using HDR lightmaps to illuminate the scene and have a pleasant (and photometrically accurate) view out the window? This would be, in a word, cool.

As I have never used colorpict and have limited experience with illum, I just wanna make sure I get this:
1. I take a hemispherical HDR image of the site.
2. Colorpict and the fisheye.cal file take the high-res picture, apply it to a plane, and rotate it into the proper orientation. Can you explain the fish_u fish_v parameters?

The fish_u and fish_v parameters are computed lookups into the image that convert angles looking out the window into pixel positions. See Chapter 4 in RwR for details.
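
Not the actual ray/lib/fisheye.cal, but a sketch of the kind of mapping it performs may help: for a 180 degree angular fisheye, the distance of a pixel from the image centre grows linearly with the angle off the lens axis, reaching the image edge at 90 degrees. In .cal terms, with the lens axis along +z of the pattern's local frame (the -rz 90 in the earlier example handles the orientation for the west-facing window), a direction-based lookup could read something like:

{ illustrative only - not the real fisheye.cal; the exact on-axis
  direction is a degenerate case, as in angmap.cal }
dnorm = 1 / sqrt(Dx*Dx + Dy*Dy + Dz*Dz);
ddx = Dx * dnorm;
ddy = Dy * dnorm;
ddz = Dz * dnorm;
drad = acos(ddz) / (PI * sqrt(ddx*ddx + ddy*ddy));
sk_u = 0.5 + ddx * drad;
sk_v = 0.5 + ddy * drad;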

3. An illum source is created from the low res version of the pic, mapped to the same polygon? Why do you use a low res image for the illum?

You could use mkillum to compute the distribution, but the effect would be to reduce the resolution of the original image, which can be done much faster with pfilt.

The illum's luminous distribution function is the result of applying the lightmap to the window pane, just the same as if I were to use gensky? The colorpict is purely for the view out, it does not contribute to the illuminance of the interior space?

No, it's the same as if you used mkillum. The only difference is that rpict -vta -vh 180 -vv 180 computes the window's light distribution from a single viewpoint, where mkillum would average it over the entire window. If your window is small relative to the closest geometry, the difference is vanishingly small.

Seems like a lot of work, 'specially for this brain, but it could be worth it. In the short term, I think I need to try one of the other tacks.

It's actually not that difficult -- try it.

-Greg

Dear all,

I am very interested in this topic, but I'm not sure if I understood
correctly; I must admit I'm still starting to understand Radiance.

Basically, as I understand, Carsten's and Greg's methods differ in that the
first maps the image onto a plane situated outside the window and the second
(through an 'ad hoc' function) directly onto the window plane. My question is
this:

- If one maps the image onto a plane outside the window there will be some
parallax error (though minimal) which can be reduced with distance
- And if the plane is on the window this should be noticeable
- Or does it work like mkillum? Would sun patches appear inside the room?
- Or else: is there any way to map an HDR image onto a source, something in the
way skies are generated (maybe replacing the skyfunc?) and then use it as
any other sky and/or ground? Somehow, for me this seems to be the most
natural solution.

I hope all this makes any sense. Thank you for your help. Best,

Santiago.

Santiago Torres wrote:

...

Basically, as I understand, Carsten's and Greg's methods differ in that the
first maps the image onto a plane situated outside the window and the second
(through an 'ad hoc' function) directly onto the window plane. My question is
this:

- If one maps the image onto a plane outside the window there will be some
parallax error (though minimal) which can be reduced with distance
- And if the plane is on the window this should be noticeable
- Or does it work like mkillum? Would sun patches appear inside the room?
- Or else: is there any way to map an HDR image onto a source, something in the
way skies are generated (maybe replacing the skyfunc?) and then use it as
any other sky and/or ground? Somehow, for me this seems to be the most
natural solution.

Mapping to a source works like mapping to a polygon, except that the
image u,v coordinates in the cal file depend on the view direction
vector Dx,Dy,Dz rather than on position coordinates Px,Py,Pz. The
formula depends on the type of lens used to take the image, either
fisheye (similar to a -vta image) or perspective wide-angle.
With mapping to a source, parallax error increases with nearby objects.
Especially when moving inside a room, the outside looks a bit 'frozen' if
modelled as a source-mapped image only. See
http://www.pab-opto.de/pers/animation/film2.html for an example of
source mapping and animation made in 1994.
Theoretical extra note: imagine an image mapped to the window polygon
with u,v coordinates depending on position *and* view direction. The
result would be indistinguishable from a 'real' view towards the
outside. Et voila - that would be a 'lightfield' as used as the core idea in
Greg's rholo.
In practice, for architectural simulation of one building, map the far
surroundings (sky-line, etc.) to the source, model the nearby buildings
as 'bricks' and map photos onto their faces. There's a 1999 example of
this at http://www.ise.fhg.de/alt-aber-aktiv/radiance/animation/ ,
though we had not pushed the idea as far as I would have liked, then.
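
To make the source-mapping variant concrete, a minimal sketch along the lines of the familiar light-probe setups could look like this (the picture name is a placeholder; angmap.cal is the file posted earlier in this thread, and since the intersection 'point' on a source lies effectively at infinity along the ray, its normalized Px,Py,Pz lookup behaves like a direction lookup; a single source with a 360 degree opening covers the full sphere of directions):

void colorpict env_pat
7 red green blue landscape_probe.pic angmap.cal sb_u sb_v
0
0

env_pat glow env_glow
0
0
4 1 1 1 0

env_glow source env
0
0
4 0 0 1 360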

I hope all this makes any sense.

it does. :-)

-Peter

···

--
pab-opto, Freiburg, Germany, www.pab-opto.de

Hi all,

some more thoughts from my side ... I suspect this is one of
those problems where one can easily miss something in the argumentation.

1) where to put the HDR image:

At the moment I'm a bit puzzled, too. If one maps an HDR image representing a
view of an outside landscape onto a window pane - isn't this only exact for
this special point of view? Other rays from somewhere else in the room
hitting the illum made out of the image, won't they get assigned a wrong
value? (Just read Peter's mail, which in fact gives an answer to this...)

At least this is the thinking behind the idea of mapping the landscape view to
- for example - a 100 m radius sphere like a dome over a 10x10x10 m scene, thus
making parallax errors reasonably small, while keeping the blowing up of the
scene reasonable, too. (Perhaps not as small as I think, I have to look at
Peter's examples...) The drawback of this method is that such a large source
cannot be treated with the direct calc. anymore (illum/light) but must be
used as glow with the 4th param zero and thus considered within the ambient
calculation, requiring in turn sufficiently high ambient parameters for a
trustworthy rendering. If you use the source type for the dome as Santiago
pointed out, you can add a separate sun like it is done with the usual gensky
procedure.
(Never did this so far, I have to admit ...)

2) -ae / -ar

As far as I know, -ae and -ar are independent parameters. Ambient cache
resolution specification with -ar is always relative to the whole scene
bounding cube, no matter what sort of objects are actually present. But there
might be another way out: imagine a small indoor scene with an adequate -ar
setting resulting in values spaced, say, half a meter apart. If you add a
several km wide outdoor bounding cube to it, you have to raise -ar
accordingly. Normally this would result in an exploding memory requirement
for the ambient cache, but if everything in the outdoor part of the scene is
excluded with -ae, only values in the indoor part will be calculated. By this
you can have the same high absolute resolution within the indoor part as
before, and the high -ar setting shouldn't affect calculation time/memory
requirement at all.
(Never tried this, either, but it sounds as if it could work...)
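
If the finest ambient value spacing scales roughly as (bounding cube size)/(-ar) (the exact factor also involves -aa), the bookkeeping for this example would look something like the following; the numbers are only illustrative:

    10 m room,           -ar 20     ->  ~0.5 m minimum spacing
    5 km bounding cube,  -ar 20     ->  ~250 m spacing (useless indoors)
    5 km bounding cube,  -ar 10000  ->  ~0.5 m spacing again; with the outdoor
                                        materials excluded via -ae/-aE, only
                                        indoor values are ever stored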

-Carsten

If you photograph or render a fisheye view, you don't have to bother with the spheremap nonsense. A fisheye view of the sky can replace gensky. Likewise for a fisheye view out the window. If you want both the sky and the ground for every possible viewpoint (rather than just out the window), you'll need one fisheye up and one fisheye down (from a bird's eye view). As I recommended for the window, you should use a lower resolution image for the light distribution than you use for the view. You'll learn a lot by trying the example I suggested in my earlier e-mail.
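
(For instance, the two hemispheres could be rendered with something like the following; the viewpoint, octree and file names are placeholders:)

rpict -vta -vp 0 0 10 -vd 0 0 1 -vu 0 1 0 -vh 180 -vv 180 site.oct > upper_hemi.pic
rpict -vta -vp 0 0 10 -vd 0 0 -1 -vu 0 1 0 -vh 180 -vv 180 site.oct > lower_hemi.pic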

-Greg

···

From: Rob Guglielmetti <rpg@rumblestrip.org>

The illum's luminous distribution function is the result of applying the lightmap to the window pane, just the same as if I were to use gensky? The colorpict is purely for the view out, it does not contribute to the illuminance of the interior space?

No, it's the same as if you used mkillum. The only difference is that rpict -vta -vh 180 -vv 180 computes the window's light distribution from a single viewpoint, where mkillum would average it over the entire window. If your window is small relative to the closest geometry, the difference is vanishingly small.

Well, the window mullions are pretty deep, but other than that, no. Now, your method utilizes a hemispherical fisheye view. Since typical HDR lightmaps are a re-mapping of two HDR images of a mirrored ball, one 90 degrees off-axis to the other, I'm confused as to how my hemispherical image is to be rendered (or photographed). If I do a hemispherical rendering of 0 0 0 0 0 1, I get the sky, but no ground. Vice versa gives me the ground plane but no sky. Do I need a hemispherical view for each cardinal heading, or something like that?

This brings me back to my original inquiry about this. Maybe Santiago asked the question better than I did, in his recent follow-up (Hi Santiago!):

"...is there any way to map a HDR image in a source, something in the
way skies are generated (maybe replacing the skyfunc?) and then use it as any other sky and/or ground? Somehow, for me this seems to be the most natural solution."

I'm imagining either an HDR photograph lightmap of the entire model-encompassing sphere, or a Radiance image of the same type of thing, mapped to a large sphere, in essence. Photometric accuracy of the distribution of the light in the space, as well as the view out the window wall, are both important, so I thought this method might combine the two. Sorry if I'm not explaining this well.

OK, even I got it now ... :-)

the whole problem is quite a nice example in terms of understanding. In fact,
the spheremap nonsense is simply the projection of the fisheye view onto an
object. (The latter is certainly unnecessary in this case.)

When it comes to picture mapping, one quickly thinks of putting a texture or
something onto an object; in the current case this would be like hanging a
photo of the view outside in front of the window (or on a windowless wall to
cheat the people...). This of course works only from one point of view.

The procedure outlined in Greg's example differs from this in that the
colorpict is used to modify - or in fact to create - the *angular*
distribution of a light source which is put in place of the window pane
(comparable to the luminous intensity distribution data delivered by
luminaire manufacturers). This angular distribution is then of course
independent of the point of view, as it should be.

OK, this is all self-evident and not worth mentioning at all for the
experienced scientists, but - - -

Carsten

From: cbauer-@t-online.de (Carsten Bauer)

OK, this is all self-evident and not worth mentioning at all for the
experienced scientists, but - - -

This is anything but intuitive, which is why I keep insisting that Rob try it to see what happens. An angular view becomes the same as a holographic (complete) view as the exterior geometry recedes to infinity.

-Greg

Greg Ward wrote:

This is anything but intuitive, which is why I keep insisting that Rob try it to see what happens.

Yep, I intend to. Life has intervened in this exploration for the moment, so I haven't had a chance to try this yet. Thanks Greg, for the detailed example. I *will* do some experimentation with it, just as soon as I can.

(he withdraws, and exits stage left; lights fade to black.)

-RPG

Hello again,

Thank you Peter for your reply. The animation is really impressive. And I
understand your point about parallax error. I guess there's always a price
with either method since there's only one image map to start with. If I
understood Greg's post well, there are different ways to do this, each with
pluses and minuses, depending on window size or distance to outside objects,
etc. So, I've been trying what I could through the weekend with mixed
results.

First, I tried to map a rendered fisheye image of the sky to a source,
replacing gensky, but there's something strange with my transformations (the
sky appears diamond shaped...:/), so I need to review the mapping, but it
seems to be working. Like Rob said, the possibilities are really cool [Hi
Rob!]

I was thinking of taking HDRs of the inside and outside of a building (at the
same time) and then comparing the results with inside renders using a mapped
sky for the outside. Has anybody tried it?

OTOH, I defined a simple room with a sky outside (using gensky) and then
took fisheye images of the sky and the view out of the window. I followed
Greg's instructions with fisheye.cal, and it worked perfectly (of course!)
But when taking rtrace measurements, there was a slight difference between the
model with the original sky and the mapped model (about 4.5%). Could this be
because of the relative size of the window (1/9 of the wall in this case)?
Or maybe I'm doing something wrong? I assume the rtrace parameters have
nothing to do with it since they're the same in both cases, but maybe I'm wrong
(again?)

I understand all this was meant to help Rob with a particular practical
problem, but I was interested in the more general case. I am now in the
process of measuring daylight in a real building, and I thought this could
be a good way to validate results with renderings (I will need to use
renderings later). So, sorry if I diverge from the original post. And
thanks again for your help.

Santiago.

First, I tried to map a rendered fisheye image of the sky to a source, replacing gensky, but there's something strange with my transformations (the sky appears diamond shaped...:/),

This is probably because the 180 degree "ring" of your fisheye image (i.e., the horizon if the fisheye is pointing at the zenith) needs to be inscribed in the square defined by the dimensions of the image. Everything in the "corners" of the image will be looking down towards the ground and can be ignored or blacked out.

-Chas

Hello again,

I've been trying more, and the mapping now works fine, so the sky finally
looks like the original picture. However, I still have problems with the
illuminance values. I am defining it like this:

void colorpict skyfunc
11 noop noop noop sky_orig.pic fisheye.cal fish_u fish_v -rx 90 -rz 180
0
1 1

And then using skyfunc normally, like from gensky. The "mapped" sky gives me
an illuminance 5 to 7 times lower than the original sky (although the
distribution is the same). I could define the sky source seven times
stronger to compensate, but I'm sure there must be an explanation for this
and a correct solution. Please, what am I doing wrong?

Another problem is when I apply the sky to a scene with a room and a window.
If I follow Greg's method to define the window, it works perfectly; however, if
I use the mapping to define a sky and then define a window as I would do
with a "gensky'd" sky, there seems to be no direct sunlight in the room.
Again, what am I doing wrong?

I assumed that the mapping to source could replace gensky more or less
directly, but it seems I am missing something. Sorry for bothering again and
thanks in advance for your help,

Santiago.

Hi Santiago,

From: "Santiago Torres" <tiago@tkh.att.ne.jp>

I've been trying more, and the mapping now works fine, so the sky finally
looks like the original picture. However, I still have problems with the
illuminance values. I am defining it like this:

void colorpict skyfunc
11 noop noop noop sky_orig.pic fisheye.cal fish_u fish_v -rx 90 -rz 180
0
1 1

What is the last real argument (A1=1) for? I think fisheye.cal just ignores it.

And then using skyfunc normally, like from gensky. The "mapped" sky gives me
an illuminance 5 to 7 times lower than the original sky (although the
distribution is the same). I could define the sky source seven times
stronger to compensate, but I'm sure there must be an explanation for this
and a correct solution. Please, what am I doing wrong?

What is in sky_orig.pic? Is it possible that you exposed it with pfilt or something -- you need to have the original values in there. The direct output of rpict should work, but passing it through pfilt would not.

Another problem is when I apply the sky to a scene with a room and a window.
If I follow Greg's method to define the window, it works perfectly; however, if
I use the mapping to define a sky and then define a window as I would do
with a "gensky'd" sky, there seems to be no direct sunlight in the room.
Again, what am I doing wrong?

Gensky creates a separate source for the sun, whereas it gets included as a small spot in sky_orig.pic, which the indirect calculation is hard-pressed to find. To get it to work as gensky, you would have to take the solar source from gensky and reapply it in your model.
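
(Concretely, that would mean something like the following: run gensky for the date and time in question and copy just the solar light and sun source primitives it prints into the scene alongside the mapped sky. The numeric values below are placeholders for what gensky actually outputs:)

gensky 6 21 12 +s > sun_and_sky.rad

# keep only these two primitives from sun_and_sky.rad:
#
# void light solar
# 0
# 0
# 3 <r> <g> <b>
#
# solar source sun
# 0
# 0
# 4 <dx> <dy> <dz> 0.533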

-Greg

Hello. Thank you very much for your reply.

From: Greg Ward <gward@lmi.net>

>
> void colorpict skyfunc
> 11 noop noop noop sky_orig.pic fisheye.cal fish_u fish_v -rx 90 -rz 180
> 0
> 1 1

What is the last real argument (A1=1) for? I think fisheye.cal just
ignores it.

Yes, that's right. It's my mistake; I was trying something different first
and it just stayed there, I didn't realize.

What is in sky_orig.pic? Is it possible that you exposed it with pfilt
or something -- you need to have the original values in there. The
direct output of rpict should work, but passing it through pfilt would
not.

I made it like this:

rpict -vta -vd 0 0 1 -vu 0 1 0 -vv 180 -vh 180 -ab 1 -x 2048 -y 2048
sky_orig.oct > sky_orig.pic

And used that file. sky_orig.rad is a copy of the sky definition in the
Radiance tutorial. Then tested illuminances with:

echo "0 0 0 0 0 1" | rtrace -h -I+ -w -ab 1 sky_alter.oct

Gensky creates a separate source for the sun, whereas it gets included
as a small spot in sky_orig.pic, which the indirect calculation is
hard-pressed to find. To get it to work as gensky, you would have to
take the solar source from gensky and reapply it in your model.

Thank you, I will try it. But I still don't understand why the mapping to
the window works fine, if it is also coming from a pic file. Isn't it the
same if it is coming from the window or from the sky source?

Thanks again for your help and best regards,

Santiago.