Slow rendering with transparent textures

Hi all,

I have a model with about 40,000 polygons, each textured with a mostly transparent image (only about 1% of the texture is visible). Seen from the rendering viewpoint, the polygons in the model overlap quite a lot, so one ray might have to "look" through about 50 "transparent" polygons or even more before actually "seeing" something. I think this is why this particular model renders amazingly slowly, even with -ab set to 0, -st 0 and -lr 1. Is there another rendering parameter that might speed things up in such a case?

-Iebele

void plastic material_17999
0 0 5
0.757813 0.773438 0.753906
0.01 0.01

material_17999 colorpict colormap_17999
7 noneg noneg noneg ./D2P_0595.pic tmesh.cal u v
0
7
2
   -0.01410693 0.02148561 -19.61004639
   -0.00179046 -0.00117557 3.75267568

void mixpict transmap_17999
7 colormap_17999 void green ./a_D2P_0595.pic tmesh.cal u v
0
7
2
   -0.01410693 0.02148561 -19.61004639
   -0.00179046 -0.00117557 3.75267568

transmap_17999 polygon polygon_17999
0 0 9
634.194214 1375.645264 90.465210
655.547729 1343.122681 90.465210
1045.819336 1599.365479 90.465210
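
For reference, the options mentioned in the question would appear in an rpict call along these lines (view file, octree and output names are hypothetical):

  rpict -vf view.vf -x 1024 -y 1024 -ab 0 -st 0 -lr 1 scene.oct > render.pic

With interreflections off (-ab 0) and reflections limited (-lr 1), most of the remaining cost is the per-polygon transparency evaluation each ray has to perform while traversing the stack of overlapping surfaces.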

Hi Iebele,

One of the slowest things in Radiance is the .cal file evaluations, so minimizing expression evaluations is the key to faster renderings in this case. Having many surfaces for each ray, all having function evaluations, is bound to be slow. Combining or eliminating such calls would help a lot. I don't know enough about your situation to recommend "how."

-Greg

···

From: "iebele" <[email protected]>
Date: July 27, 2008 11:27:29 PM BDT

Hi Greg, OK, thanks. Good to know that this model indeed gets close to the limits. That makes it a lot easier to have a few more days' patience with it.
Iebele

_______________________________________________
Radiance-general mailing list
[email protected]
http://www.radiance-online.org/mailman/listinfo/radiance-general


Greg hi,

I got curious when I heard about eliminating such calculations. As far as I know, I always have to at least rotate anything mapped that I want to appear on a non-horizontal surface. I can avoid scaling, mirroring etc. by applying these transforms to the images themselves, using image processing tools such as those coming with Radiance. But I do not see a way to avoid the rotation (besides rotating the whole model with light sources and views instead of the images, e.g. if I want to map mostly onto vertical planes). So did you mean something like that when saying "eliminating such calls", or were you thinking about structuring them more efficiently?

CU Lars.
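
For the transforms that can be pushed into the image itself, as mentioned above, the picture tools shipped with Radiance cover the right-angle cases (file names hypothetical):

  pflip -h wall.pic > wall_mirrored.pic        # mirror horizontally
  protate wall.pic > wall_rotated.pic          # rotate by 90 degrees
  pfilt -x /2 -y /2 wall.pic > wall_half.pic   # scale down by a factor of 2

Arbitrary rotations, though, still have to happen in the mapping (or by rotating the model), as described above.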

Hi Lars,

Although it helps up to a point to reduce the complexity of the .cal expressions, it's more important to minimize the total number of such modifiers in your rendering, since it's the overhead of the call itself that's most significant. For example, adding a functional pattern on top of another functional pattern that uses a functional texture, surface normal interpolation, and a mapped picture with its own (admittedly unavoidable) coordinate transformation is going to be a lot slower than combining all the patterns into a single precomputed image and having just one functional texture.
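
As a sketch of that combination step (file names hypothetical), two stacked picture patterns can be premultiplied once, offline, with pcomb, so that the scene needs only a single mapped picture:

  pcomb -e 'ro=ri(1)*ri(2);go=gi(1)*gi(2);bo=bi(1)*bi(2)' \
        base_pattern.pic detail_pattern.pic > combined_pattern.pic

The per-ray cost then drops from two .cal-mapped lookups to one.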

Since Iebele had many mixfunc calls, most of which rendered transparent, this cost quite a bit in overhead. In many cases it's unavoidable, but bear in mind that any calculations using function files, while wonderfully programmable, cost quite a bit in overhead (i.e., a few times the cost of a ray intersection), and take 2x or more longer than the equivalent compiled math calls.

Does this make sense?

-Greg

···

From: "Lars O. Grobe" <[email protected]>
Date: July 28, 2008 9:32:33 AM BDT

Does this make sense?

Yes, it does ;-) Thank you for the explanation, Greg! You know that I have a model making rather intensive use of textures here (the Hagia Sophia model), and I was wondering if there were some techniques to apply for a speed-up. But as we do not combine cal files at all, and just map one image per object, I do not see such a possibility any more. It might make sense to have simplified colorpict modifiers with the most frequently used transformations hard-coded, without calling the cal-file interpreter - e.g. colorpict_xy, colorpict_xz and colorpict_yz modifiers, which would allow all mapping in orthogonal models to be done without cal files. But it would break e.g. xform for such modifiers, and create a mess in the scene language, while few models really have a problem with too many mappings. So we have to live with slow mappings ;-)

BTW, as it is related to textures and came to my mind some days ago - it should be quite doable to modify the source of mkillum so that it outputs polygons with a prerendered ambient calculation mapped onto them, right? It would be one step towards a nice VRML/X3D export from Radiance scenes, as the X3D/VRML viewer could render the direct light, while the view-point independent ambient calculation would be left to Radiance. The only problem would be the need for correct surface normals for the whole model.

CU Lars.

Hi Lars,

I don't know if I would call this 'easy' -- you would have to mesh the surfaces in some way, which is not done by mkillum at all. Others have worked on similar converters, but I don't know how far they got. Richard Gillibrand's (formerly of Bristol Univ.) 2003 workshop presentation springs to mind. Here is a related earlier post, referring also to the presentation by Bernhard Spanlang in 2002:

  http://www.radiance-online.org/pipermail/radiance-dev/2005-January/000551.html

Hope this helps.
-Greg

I don't know if I would call this 'easy' -- you would have to mesh the surfaces in some way, which is not done by mkillum at all.

I thought that calling rtrace for each supported surface type at a fixed resolution, and writing the surface with a colorpict modifier linking to the generated pixel image, would be enough? Of course this approach would be incredibly slow for large models.
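
A rough, untested sketch of such a baking pass for a single square horizontal patch (grid size, geometry and file names all made up) could chain the standard tools:

  cnt 32 32 \
      | rcalc -e '$1=($1+0.5)/32;$2=($2+0.5)/32;$3=0;$4=0;$5=0;$6=1' \
      | rtrace -h -I+ -ab 2 scene.oct \
      | pvalue -r -h -H -d -y 32 +x 32 > patch.pic

Each line fed to rtrace is a sample point plus the surface normal; -I+ asks for irradiance, and pvalue -r turns the resulting values back into a picture that a colorpict modifier could reference.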

RenderPark supported this kind of export. But as it imported only MGF, not native Radiance, all texture information was lost.

Well, maybe I imagined this being easier than it actually is... CU Lars.

Hi Lars,

It might make sense to have simplified colorpict modifiers with the most frequently used transformations hard-coded, without calling the cal-file interpreter - e.g. colorpict_xy, colorpict_xz and colorpict_yz modifiers, which would allow all mapping in orthogonal models to be done without cal files. But it would break e.g. xform for such modifiers...

Do you mean hard-coded in Radiance here? Such a feature makes sense to me also.
In my case the mixfunc for transparency is consuming lots of rendering time. Although .cal files make the whole system very flexible, some features are so commonly used that it would of course be great if these were optimized a bit. On the other hand, I imagine that these kinds of changes are not that easily done, if possible at all, in the first place.

Iebele

The problem is that while adding such modifiers should be rather easy, they would break consistency in a couple of places. E.g. imagine a scene using a colorpict_xz modifier. Everything would be fine - until you run xform to rotate the scene. xform would then have to replace the optimized modifier. And having two ways to do the same thing is always a mess to support. So I guess the rather few cases where this is needed do not justify the trouble one would introduce.

CU Lars.
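
To illustrate the consistency problem: for today's function-based modifiers, xform stays consistent by simply appending its transform specification to the primitive's argument list. Running, say,

  xform -rz 90 scene.rad

on the colorpict from earlier in this thread would (roughly) turn

  material_17999 colorpict colormap_17999
  7 noneg noneg noneg ./D2P_0595.pic tmesh.cal u v

into

  material_17999 colorpict colormap_17999
  9 noneg noneg noneg ./D2P_0595.pic tmesh.cal u v -rz 90

A hard-coded colorpict_xz would have no such argument slot to absorb the rotation.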