increase resolution: BAM! fall off the end of the universe

I have never been able to render final images better than the default
view. If I try to set it to what seems a sensible next quantum leap up
it dies the death of a gigabyte memory model on a megabyte machine.

Is there some weird cube law here? Is it actually not possible to do
better than 512x512 images? 1024x1024 doesn't work for me.

(the sizes are notional. It's been a while.)

Also, some of the *wonderful* textures such as the finer woodgrains
have very odd effects on time to compute. Is there a FAQ-like known
set of textures to avoid for a fast render?

Also Also wik: I did a kitchen with a mix of off-white and brushed-steel
aluminium surfaces. I found that the amount of colour picked up by
'gloss' surfaces was incredibly high, but if I didn't select off-whites
for detailed surfaces like tongue-and-groove wood, I got huge bright spots
which wiped out the image unless I wound back the lightbulbs to 5-watt
railway specials. I know that the chrome tap is reflecting part of a
perfectly rendered image of the lightbulb onto every shiny surface within
a 40-foot radius, but now I'm over 40 I can't see those little images
unless I stand real close. Does Radiance have to render them? Isn't
there some middle ground where it does high definition for some things
but not others?

(I am not a professional. I do not depend on this software. If you do,
and need it to remain pure, I don't disagree. I love this package and think
it's one of the neatest bits of s/w I have ever seen; this is more critique
than a real comment/feature request.)

cheers
  -George

George Michaelson wrote:

I did a kitchen with a mix of off-white and brushed-steel
aluminium surfaces. I found that the amount of colour picked up by
'gloss' surfaces was incredibly high, but if I didn't select off-whites
for detailed surfaces like tongue-and-groove wood, I got huge bright spots
which wiped out the image unless I wound back the lightbulbs to 5-watt
railway specials.

Did you notice that you can change the "exposure" at which the
final image is displayed? It has a very similar effect to the
combination of shutter speed and iris opening in a photo camera.
The pcond program mentioned by Peter is just another step beyond
that, which also takes into account some physiological properties
of the human eye that may depend on the absolute brightness levels.
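To see why an exposure change alone can't rescue a scene with true hotspots, here is a tiny sketch (plain Python, not anything from Radiance itself; all names and numbers are made up). An exposure adjustment is just a linear scaling of the rendered radiance values before display, with a doubling of the factor corresponding to one photographic stop:

```python
# Illustrative sketch: "exposure" as a linear scale on radiance values,
# clipped to the displayable range. Doubling the factor = one stop.

def expose(pixels, stops):
    """Scale raw radiance values by 2**stops (positive = brighter)."""
    factor = 2.0 ** stops
    return [min(p * factor, 1.0) for p in pixels]  # clip to displayable range

scanline = [0.02, 0.10, 0.45, 3.0]   # made-up values; 3.0 is a specular hotspot
print(expose(scanline, +1))          # one stop brighter: the hotspot still clips
print(expose(scanline, -2))          # two stops darker: the shadows go murky
```

With a hotspot 60x brighter than the mid-tones, no single scale factor shows both well, which is exactly the problem pcond's tone mapping is meant to address.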

It could also be that you made your surfaces too glossy. Most people
greatly overestimate the amount of gloss in their visual environment.
In reality, gloss levels for non-metallic surfaces are typically
way below 5% (rather around 1 - 2%).
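The split can be made concrete with a small sketch (illustrative Python, not Radiance's actual material model; the fractions are assumptions in the spirit of the 1-2% figure above):

```python
# Rough sketch of how reflected light divides for a non-metallic surface.
# The numbers are illustrative; Radiance's "plastic" material uses a
# similar idea (a diffuse colour plus a small colourless specular term).

def split_reflection(total_reflectance, specular_fraction):
    """Divide total reflectance into diffuse and specular components."""
    specular = total_reflectance * specular_fraction
    diffuse = total_reflectance * (1.0 - specular_fraction)
    return diffuse, specular

# A glossy white paint: 80% total reflectance, only 2% of it specular.
diffuse, specular = split_reflection(0.80, 0.02)
print(f"diffuse {diffuse:.3f}, specular {specular:.3f}")
```

Even that tiny specular share can produce a visible mirror image, because it is concentrated into one direction while the diffuse part is spread over the whole hemisphere.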

I know that the chrome tap is reflecting part of a
perfectly rendered image of the lightbulb onto every shiny surface within
a 40-foot radius, but now I'm over 40 I can't see those little images
unless I stand real close. Does Radiance have to render them? Isn't
there some middle ground where it does high definition for some things
but not others?

It's called pixel resolution.
If you render a picture where the tap is three pixels wide and two
pixels high, then I'm pretty sure you won't recognize the bulb that
is theoretically reflected in its surface. If the tap fills half
of your image, then you probably *want* to recognize the reflected
bulb, and given the right distance, you certainly will. But even
when the final image doesn't show a recognizable mirror image of the
bulb, you still want to notice the sparkle of the reflection, so
there is really no other useful "middle ground" here.

Note btw., that the surrounding surfaces will *not* see a perfect
mirror image, unless you use the "mirror" material for the tap,
which you shouldn't. If you use just a shiny metal, then the diffuse
calculation from other nearby surfaces will hit the tap at most once
or twice, which isn't enough for an exact reflection. Only surfaces
that are very close will again see more, simply because the tap fills
a significant part of their surrounding space.

Btw: In real life, just because your eyes don't want to resolve
that mirror image anymore doesn't mean that they don't receive
the information about it. If you want to simulate that effect with
Radiance, you can simply apply a soft blur to the final image... ;-)
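The suggested soft blur can be sketched in a few lines (a toy Python illustration; in practice you would use an image tool or Radiance's own filtering utilities rather than roll your own):

```python
# Minimal sketch of a "soft blur": a 3-tap box filter over one scanline
# of pixel values, averaging each pixel with its immediate neighbours.

def box_blur(row):
    """Average each pixel with its neighbours (edges clamped)."""
    n = len(row)
    out = []
    for i in range(n):
        window = row[max(0, i - 1):min(n, i + 2)]
        out.append(sum(window) / len(window))
    return out

sharp = [0.0, 0.0, 1.0, 0.0, 0.0]   # a one-pixel specular highlight
print(box_blur(sharp))               # the highlight spreads into its neighbours
```

The total light is preserved; it is merely smeared out, which is roughly what an eye that can no longer resolve the detail does anyway.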

(I am not a professional. I do not depend on this software. If you do,
and need it to remain pure, I don't disagree.

Yes, we do indeed! (at least for a reasonable definition of "pure")

-schorsch


--
Georg Mischler -- simulations developer -- schorsch at schorsch.com
+schorsch.com+ -- lighting design tools -- http://www.schorsch.com/

> Is there some weird cube law here? Is it actually not possible to do
> better than 512x512 images? 1024x1024 doesn't work for me.
>
> (the sizes are notional. It's been a while.)

What OS and hardware were you using? rpict renders and writes the image
line by line, so memory consumption doesn't scale quadratically with
image size (only linearly with image width). Images > 12000x12000 pixels
worked well on machines with <256MB main memory.
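The arithmetic behind that claim can be sketched quickly (illustrative Python; the per-pixel byte counts are my assumptions, not rpict's actual internals):

```python
# Back-of-the-envelope sketch of why scanline rendering stays cheap:
# rpict writes the image line by line, so it only needs to hold a few
# scanlines at a time rather than the whole frame. Assumed storage:
# 3 floats of 4 bytes per pixel.

BYTES_PER_PIXEL = 3 * 4  # assumed RGB float storage

def full_frame_bytes(width, height):
    """Memory for holding the entire image at once."""
    return width * height * BYTES_PER_PIXEL

def scanline_bytes(width, lines_held=2):
    """Memory for a scanline-at-a-time renderer."""
    return width * lines_held * BYTES_PER_PIXEL

for size in (512, 1024, 12000):
    print(f"{size}x{size}: whole frame {full_frame_bytes(size, size) / 2**20:8.1f} MB,"
          f" scanline buffer {scanline_bytes(size) / 2**10:6.1f} KB")
```

At 12000x12000 the whole frame would need well over a gigabyte, while a couple of scanlines fit in a few hundred kilobytes, which is why image height costs rpict essentially nothing in memory.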

Not for me. DEC Alphas, 586- and PIII-class boxes with up to 512MB, and it
died. Could it be compiler problems? I was using gcc.

> Also, some of the *wonderful* textures such as the finer woodgrains
> have very odd effects on time to compute. Is there a FAQ like known
> set of textures to avoid for fast render?

What usually slows down rendering (besides ambient bounces) is cal files:
their computing time is roughly twice that of compiled code (if I remember
that number right, Greg?). So the more elaborate the functions, the longer
it takes.

I'm a naive user. I construct 'furniture' as discrete boxes of surface in
a texture I like and then micro-adjust their position against room walls
until it looks 'right'. It may well be this introduces some overhead that
better-constructed objects would avoid.

I still have real problems thinking in 3-space inside the model. I feel
like a 5-year-old trying to use a treasure-island map on the kitchen
floor: three forward, three left, two up, rotate twice, six back...

Labelled anchors to get base-relative locations would be nice.

And I think I asked for a simple floating compass a while back; is there
a hack to get one into initial models to confirm various positions, or
a way to float key object points as text over the model?
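One possible stopgap (a sketch, not a tested Radiance workflow): generate small brightly coloured marker spheres at named anchor points and append them to the scene while positioning furniture. The snippet below emits Radiance sphere primitives; "marker_mat" and the anchor coordinates are made-up names you would define yourself.

```python
# Sketch: emit Radiance sphere primitives as position markers.
# Sphere syntax: "modifier sphere name", then 0 / 0 / 4 x y z radius.

ANCHORS = {
    "origin":      (0.0, 0.0, 0.0),
    "sink_corner": (2.4, 0.6, 0.9),
    "window_sill": (0.0, 3.2, 1.1),
}

def marker_spheres(anchors, radius=0.05, material="marker_mat"):
    """Return scene-description text with one marker sphere per anchor."""
    parts = []
    for name, (x, y, z) in anchors.items():
        parts.append(f"{material} sphere {name}\n0\n0\n4 {x} {y} {z} {radius}\n")
    return "\n".join(parts)

print(marker_spheres(ANCHORS))
```

Appending the output to a working copy of the scene gives you visible, named reference points to check positions against, then you delete them for the final render.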

cheers
  -George

Did you notice that you can change the "exposure" at which the
final image is displayed? It has a very similar effect to the
combination of shutter speed and iris opening in a photo camera.

I have tried that, but I think with stark white surfaces and too much
gloss it's very much like my Pentax MX: I can 'see' the surface fine,
but when I dial down the exposure it is either too dark, or only locally
good. Somehow there is a median range of view the eye sees, but I can't
model it (my fault, I suspect: I don't have the theoretical grounding or
the maths).

The pcond program mentioned by Peter is just another step beyond
that, which also takes into account some physiological properties
of the human eye that may depend on the absolute brightness levels.

I'll play with that. Many thanks.

It could also be that you made your surfaces too glossy. Most people
greatly overestimate the amount of gloss in their visual environment.
In reality, gloss levels for non-metallic surfaces are typically
way below 5% (rather around 1 - 2%).

That's confusing. Face on, at short distance, you can see very high-quality
reflected images in tupak-finished MDF, for instance. OK, six months later
the grease stains and dust make that more diffuse, but it 'seems' shiny
to me. Again, does the eye do a better job of ignoring that?

I suspect I just need a better mental table of recommended glossiness for stuff.

Note btw., that the surrounding surfaces will *not* see a perfect
mirror image, unless you use the "mirror" material for the tap,
which you shouldn't. If you use just a shiny metal, then the diffuse
calculation from other nearby surfaces will hit the tap at most once
or twice, which isn't enough for an exact reflection. Only surfaces
that are very close will again see more, simply because the tap fills
a significant part of their surrounding space.

Perhaps I did something wrong in my design. I found setting a minor colour
change on the metal of fixtures had a huge effect on the tones of the
surfaces that I presumed were reflecting it.

Btw: In real life, just because your eyes don't want to resolve
that mirror image anymore doesn't mean that they don't receive
the information about it. If you want to simulate that effect with
Radiance, you can simply apply a soft blur to the final image... ;-)

Oh my. Radiance lens-grease?

-George

George Michaelson wrote:

> It could also be that you made your surfaces too glossy. Most people
> greatly overestimate the amount of gloss in their visual environment.
> In reality, gloss levels for non-metallic surfaces are typically
> way below 5% (rather around 1 - 2%).

That's confusing. Face on, at short distance, you can see very high-quality
reflected images in tupak-finished MDF, for instance. OK, six months later
the grease stains and dust make that more diffuse, but it 'seems' shiny
to me. Again, does the eye do a better job of ignoring that?

The relative amount of light reflected in a specular way is
independent of the "quality" of that specular reflection. Even if
it accounts for less than 1% of the total reflection, it may still
give you a clear mirror image, as the eye is capable of filtering
out a surprising amount of the background noise created by the
diffusely reflected light.

Remember that the human eye is a highly dynamic system, and not a
static camera. It doesn't snap still images, but scans the scene
by moving around, while usually also varying its own position to
a certain degree. If you're driving through the rain in your car,
then it's a good idea to keep your head moving a little from one
side to the other. This will cause you to see that road sign
shifting behind different raindrops on the windscreen, and helps
the image processing unit in your head to separate the "real"
information from the visual noise caused by the wet glass. The
same thing happens if you try to discern a mirror image in an
imperfectly reflecting surface.

> Note btw., that the surrounding surfaces will *not* see a perfect
> mirror image, unless you use the "mirror" material for the tap,
> which you shouldn't. If you use just a shiny metal, then the diffuse
> calculation from other nearby surfaces will hit the tap at most once
> or twice, which isn't enough for an exact reflection. Only surfaces
> that are very close will again see more, simply because the tap fills
> a significant part of their surrounding space.

Perhaps I did something wrong in my design. I found setting a minor colour
change on the metal of fixtures had a huge effect on the tones of the
surfaces that I presumed were reflecting it.

Global illumination in action!

You might want to compare your results to a photograph instead of
what you perceive when looking at the scene directly. The human
eye will adapt to the prevailing light color in the scene, and
filter out any global tints that it doesn't consider important.
Your visual apparatus performs a permanent and automatic white
balance correction, which doesn't happen when you take a picture
of the same scene.

Isn't it interesting how we only realize the extremely refined
inner workings of our perception when we try to simulate it by
computer?

-schorsch

