I am thinking about adding a "mesh" primitive to Radiance, which would be the first new geometric primitive since the system's inception. The purpose is to facilitate arbitrary shapes that are inherently complex, minimize the associated memory costs, and improve rendering quality and efficiency. Currently, meshes are represented in memory as individual polygons, each of which incurs an overhead of 76 bytes, plus the space required by N double-precision 3-points, which is 76+3*3*8, or 148 bytes in the typical case of a triangle.

In a well-constructed t-mesh, each non-boundary vertex has a valence of 6, which means that we can save a lot of memory by sharing vertices rather than repeating them in individual triangles. Furthermore, mesh vertices can be constrained to fit within a bounding box, permitting them to be represented by 32-bit integers rather than 64-bit doubles, which saves another factor of two. We have to add back in some overhead for vertex indexing, but I think I can reduce this to one byte per reference using face grouping -- I'll have to think about it some more. The biggest memory savings, though, comes when we specify vertex normals, which currently requires a separate texfunc primitive with 13 real arguments for each face (132 bytes with overhead).
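To make the quantization idea concrete, here is a rough sketch in C of how a vertex coordinate might be packed into a 32-bit integer within the mesh bounding box. The function names and the fixed 2^32 scale are my own illustration, not anything in Radiance:

```c
#include <stdint.h>
#include <math.h>

/* Sketch: quantize a coordinate into the mesh bounding box so it fits
 * a 32-bit unsigned integer instead of a 64-bit double.  The names
 * vtoq/qtov and the scale are illustrative assumptions. */

#define QSCALE 4294967296.0   /* 2^32 quantization steps across the box */

/* map a world coordinate x in [lo, hi) to a 32-bit integer */
static uint32_t vtoq(double x, double lo, double hi)
{
    double t = (x - lo) / (hi - lo);      /* normalize to [0,1) */
    if (t < 0.0) t = 0.0;                 /* clamp into range */
    if (t >= 1.0) t = 1.0 - 1.0/QSCALE;
    return (uint32_t)(t * QSCALE);
}

/* map back to world coordinates (center of the quantization cell) */
static double qtov(uint32_t q, double lo, double hi)
{
    return lo + ((double)q + 0.5) / QSCALE * (hi - lo);
}
```

The round-trip error is at most half a quantization cell, i.e., (hi-lo)/2^33, which is far below the precision any practical mesh needs.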

Adding it all together, a smoothed t-mesh with 10,000 vertices (small by today's standards) occupies about 5 Mbytes of memory. Moving to a mesh primitive, I should be able to fit the same mesh into about 150K, including acceleration data structures. This means we should be able to render objects over 30 times as complex as before.

One of the main reasons I never implemented meshes in Radiance is that doing so makes it nearly impossible to leverage the existing ray intersection machinery. With the current arrangement, all the mesh triangles end up in the scene octree (or a local octree if the mesh is instanced), so rays find mesh polygons the same way they find other scene geometry, by traversing an octree. Introducing meshes means that instead of encountering a polygon in the octree, we encounter a mesh comprised of many, many polygons, and we have no idea which of its triangles to test for intersection. Testing them all would be a really bad idea from an efficiency standpoint.

I've given this a little thought, and I think I've come up with an efficient acceleration structure that I can compute quickly on object load, and that will enable both mesh/octree and mesh/ray intersection testing. All I need to store is 3 orthogonal images on the mesh bounding box, where each image pixel contains the set of triangles that project onto that position (without hidden surface removal). We traverse the mesh bounding box with a ray using a 3DDA (3-dimensional digital differential analyzer), computing the intersection of the three orthogonal pixel sets at each 3-D voxel. If a triangle is in all three sets, that means we are within its local bounding box, and should test it for intersection with the ray.
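A toy sketch of the voxel test, assuming the per-pixel triangle sets are stored as bitmasks (which limits this toy to 32 triangles; a real implementation would need variable-length sets). None of these names exist in Radiance:

```c
#include <stdint.h>

/* Toy sketch: each of the three axis-aligned projection images stores,
 * per pixel, the set of triangles covering that pixel (here a 32-bit
 * mask, so at most 32 triangles).  A triangle is a candidate only if it
 * appears in all three sets -- i.e., the voxel lies inside its local
 * bounding box.  Names and resolution are illustrative assumptions. */

#define GRID 8  /* resolution of each projection image (assumed) */

typedef struct {
    uint32_t xy[GRID][GRID];   /* projection along Z: indexed [x][y] */
    uint32_t xz[GRID][GRID];   /* projection along Y: indexed [x][z] */
    uint32_t yz[GRID][GRID];   /* projection along X: indexed [y][z] */
} meshgrid;

/* triangles worth testing for intersection at voxel (x,y,z) */
static uint32_t candidates(const meshgrid *g, int x, int y, int z)
{
    return g->xy[x][y] & g->xz[x][z] & g->yz[y][z];
}

/* register triangle bit `t` over its voxel-space bounding box */
static void add_tri(meshgrid *g, int t,
                    int x0, int x1, int y0, int y1, int z0, int z1)
{
    int x, y, z;
    for (x = x0; x <= x1; x++)
        for (y = y0; y <= y1; y++)
            g->xy[x][y] |= 1u << t;
    for (x = x0; x <= x1; x++)
        for (z = z0; z <= z1; z++)
            g->xz[x][z] |= 1u << t;
    for (y = y0; y <= y1; y++)
        for (z = z0; z <= z1; z++)
            g->yz[y][z] |= 1u << t;
}
```

The 3DDA would step the ray from voxel to voxel and call candidates() at each one, only running real triangle intersection tests when the mask is nonzero.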

Another bonus we'll get with this implementation is something Radiance has never had -- local (u,v) coordinates! These can be stored with our vertices and made available for patterns and textures through the function language as new variables, Lu and Lv. Their values will be set in the mesh input file, for which I plan to use Wavefront .OBJ, since it already contains pretty much everything we need to specify a mesh without a lot of fluff. Here's the primitive specification I have in mind:

    mod mesh id
    1+ mesh_file.obj [xf ..]
    0
    0+ [smoothing_angle]
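For reference, a minimal .OBJ fragment of the sort such a mesh_file.obj might contain (the coordinates are purely illustrative):

```
v  0.0 0.0 0.0
v  1.0 0.0 0.0
v  0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
vn 0.0 0.0 1.0
f  1/1/1 2/2/2 3/3/3
```

The v lines give vertex positions, the vt lines give the local (u,v) coordinates that would become Lu and Lv, the vn lines give optional normals, and each f line references them as vertex/texture/normal index triples.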

The same mesh file may be used by multiple primitives, and all data will be shared, as it is with the instance primitive, which the mesh primitive closely resembles. The optional smoothing_angle parameter sets the angle below which faces with unspecified normals will be automatically smoothed. The default value of 0 means that faces will not be smoothed. A value of 5 would smooth faces whose initial surface normals are less than 5 degrees apart. Values of 90 or greater would smooth over even sharp corners, which probably isn't a good idea.

So, why am I writing all this? Well, mostly because I wanted to get some feedback from people before I went to all this trouble. Do we need meshes or not? Is what we have perfectly adequate, or has it been a nuisance all along? Am I going about it all wrong -- e.g., should I be using subdivision surfaces instead of t-meshes? Smoothed meshes are notorious for creating reflection and refraction problems due to inconsistent normals, which was my other excuse for avoiding them all these years.

Please share your thoughts. I almost posted this to the general mailing list, but thought better of it. If you think it would benefit from a larger forum, I'll reconsider.

-Greg