Various texture mappings at rendertime

Good day all,
I've got a few questions about the texture mappings that are so prevalent in other, less validated rendering engines. I understand these mappings aren't usually part of the Radiance vocabulary, since simple materials, or even greyscale materials focused purely on luminance, are typically used instead. But hey :slight_smile:
Displacement maps:

Some other rendering engines (which may not be scientifically validated, such as Pixar's RenderMan) support a form of displacement mapping that is applied at render time. That is, a lower-resolution mesh is fed into the rendering engine along with a bitmap representing a height map, which is then used to displace the mesh.
Please note that this is not the same as a normal map / bump map, which simply perturbs the surface normals but does not actually shift the geometry itself.
Is this possible in Radiance?
Normal maps:
As a side question, I encountered the original discussion about bump mapping / normal mapping between Simon and Greg back in 1992, about using texdata along with a custom script to create the data files. I've recreated this method successfully, but am curious whether there is now something more built-in :slight_smile: (yes, I saw the texpict patch, but it is now outdated). Is there?
Occlusion maps:

From my understanding, using occlusion maps to express ambient occlusion is not a thing in Radiance. Although it is perhaps technically possible to achieve, it only serves to create artificial peaks in the RGB reflectance values that are ignorant of the actual lighting in the scene. Instead, the occlusion should be calculated by Radiance itself, by virtue of fewer bounces reaching corners. Is this understanding correct? If so, I am curious why occlusion maps are so popular in other "photorealistic" rendering engines.

Diffuse maps:

This is basically colorpict in action. From my understanding, there is no problem using this to add further realism and RGB-relevant analysis to a Radiance render, as long as the picture's pixels aren't merely artistic but actually represent the RGB reflectance values of the material. Where can I learn more about how to create, and find repositories of, these types of pictures? I am cautious that if I simply use any old picture from online, it may incorrectly skew the luminance analysis.

UV mapping with coordinates:

I played a bit with maps and got familiar with the various transforms and the pic_u/v and tile_u/v options to place the maps to scale. At first glance I did not notice anything in the refman about using more sophisticated UV coordinates (for example, mapping UV coordinates directly to coordinates in a polygon). Is this possible?

Sorry for the barrage of questions.

Kind regards, Dion

Hi Dion,

I've put a few responses inline, below...

From: Dion Moult <[email protected]>
Date: July 30, 2017 4:15:08 PM PDT

Good day all,

I've got a few questions about the texture mappings that are so prevalent in other, less validated rendering engines. I understand these mappings aren't usually part of the Radiance vocabulary, since simple materials, or even greyscale materials focused purely on luminance, are typically used instead. But hey :slight_smile:

Displacement maps:

Some other rendering engines (which may not be scientifically validated, such as Pixar's RenderMan) support a form of displacement mapping that is applied at render time. That is, a lower-resolution mesh is fed into the rendering engine along with a bitmap representing a height map, which is then used to displace the mesh.

Please note that this is not the same as a normal map / bump map, which simply perturbs the surface normals but does not actually shift the geometry itself.

Is this possible in Radiance?

Not as such. The ray intersection routines in Radiance don't offer surface subdivision capabilities, as are necessary when implementing displacement maps. The complexity of supporting this feature is considerable.
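
For completeness, the usual workaround with renderers that lack render-time displacement is to bake the displacement offline: subdivide and displace the mesh before rendering, then feed the already-displaced geometry to Radiance as ordinary polygons. A rough stdlib-only sketch of the idea (the analytic height() is a made-up stand-in for sampling a real height-map image, and the material name is hypothetical):

```python
# Offline displacement sketch (not a Radiance feature): displace a regular
# grid with a height function, then emit Radiance "polygon" primitives.
# height() is a stand-in for sampling a real height-map image.
import math

def height(u, v):
    # Hypothetical height map: gentle bumps, 0..0.05 scene units
    return 0.05 * (0.5 + 0.5 * math.sin(6.28 * u) * math.sin(6.28 * v))

def displaced_grid(n, size=1.0):
    """Return an (n+1) x (n+1) grid of vertices displaced along +z."""
    return [[(i * size / n, j * size / n, height(i / n, j / n))
             for j in range(n + 1)] for i in range(n + 1)]

def to_radiance(verts, mat="panel_mat"):
    """Emit one Radiance polygon per grid quad, using a given modifier."""
    out, n = [], len(verts) - 1
    for i in range(n):
        for j in range(n):
            quad = (verts[i][j], verts[i + 1][j],
                    verts[i + 1][j + 1], verts[i][j + 1])
            pts = " ".join("%g %g %g" % p for p in quad)
            out.append("%s polygon quad_%d_%d\n0\n0\n12 %s\n" % (mat, i, j, pts))
    return "".join(out)

scene = to_radiance(displaced_grid(8))  # ready to write into an .rad file
```

The usual pre-tessellation trade-off applies: the grid must be fine enough that silhouettes actually look displaced, which is exactly the cost render-time displacement avoids.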

Normal maps:

As a side question, I encountered the original discussion about bump mapping / normal mapping between Simon and Greg back in 1992, about using texdata along with a custom script to create the data files. I've recreated this method successfully, but am curious whether there is now something more built-in :slight_smile: (yes, I saw the texpict patch, but it is now outdated). Is there?

I don't really see the problem converting images to data files if you want that kind of texture/bump map. Is something broken with your method, or you just want to do it the way other renderers do it?
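
For anyone following along, the conversion amounts to resampling the normal map into Radiance data files, one per axis, for texdata to read. A rough stdlib-only sketch (the synthetic 4x4 normal_at() stands in for real image pixels, and the file names, dimension ordering, and choice of storing full normal components are assumptions to check against the texdata entry in the refman):

```python
# Sketch: write one Radiance data file per axis of a (tangent-space) normal
# map, in the data-file layout Radiance reads: dimension count, then
# "start end count" per dimension, then the values themselves.
import math

NY, NX = 4, 4  # rows and columns of the (synthetic) normal map

def normal_at(i, j):
    # Stand-in for an image lookup: a unit normal tilted by small sine bumps
    x = 0.2 * math.sin(6.28 * j / NX)
    y = 0.2 * math.cos(6.28 * i / NY)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

def write_datafile(path, axis):
    """Write one axis of the map as a 2-D Radiance data file."""
    with open(path, "w") as f:
        f.write("2\n")            # two-dimensional data
        f.write("0 1 %d\n" % NY)  # first dimension: 0..1 over NY samples
        f.write("0 1 %d\n" % NX)  # second dimension: 0..1 over NX samples
        for i in range(NY):
            f.write(" ".join("%g" % normal_at(i, j)[axis]
                             for j in range(NX)) + "\n")

for axis, name in enumerate(("x", "y", "z")):
    write_datafile("normal_%s.dat" % name, axis)
```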

Occlusion maps:

From my understanding, using occlusion maps to express ambient occlusion is not a thing in Radiance. Although it is perhaps technically possible to achieve, it only serves to create artificial peaks in the RGB reflectance values that are ignorant of the actual lighting in the scene. Instead, the occlusion should be calculated by Radiance itself, by virtue of fewer bounces reaching corners. Is this understanding correct? If so, I am curious why occlusion maps are so popular in other "photorealistic" rendering engines.

The interreflection calculation gives you something close to the right answer, where occlusion maps offer no guarantees and in fact no strict relation to real lighting. They are merely a convenient shortcut that "looks OK" because your eye is a lousy photometer. They are much easier to compute, because you don't need to consider other objects in your scene. That's also why they're wrong.

Diffuse maps:

This is basically colorpict in action. From my understanding, there is no problem using this to add further realism and RGB-relevant analysis to a Radiance render, as long as the picture's pixels aren't merely artistic but actually represent the RGB reflectance values of the material. Where can I learn more about how to create, and find repositories of, these types of pictures? I am cautious that if I simply use any old picture from online, it may incorrectly skew the luminance analysis.

True enough. So long as your image values multiplied against the diffuse RGB of the material are not greater than 1.0, you stay within what is physically plausible. Since most images convert to Radiance pictures in the 0-1 range, this is fairly easy to ensure. You should still use an RGB value that, when multiplied against the average given by "pvalue -h -b -d texture.hdr | total -m", gives you the desired average diffuse surface reflectance.
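
That calibration boils down to simple arithmetic. A small sketch with illustrative numbers (not from any real texture): if the pvalue | total pipeline reports an average picture brightness of avg_pic, setting the material's RGB channels to target_reflectance / avg_pic yields the desired average reflectance.

```python
# Diffuse-map calibration arithmetic (illustrative numbers): choose the
# material RGB so that (picture value) x (material RGB) averages to the
# desired diffuse reflectance.
def calibrate(avg_pic, target_reflectance):
    """Return the scalar to use for the material's R, G and B channels."""
    if not 0.0 < avg_pic <= 1.0:
        raise ValueError("expected an average picture value in (0, 1]")
    scale = target_reflectance / avg_pic
    if scale > 1.0:
        # A picture pixel at the 1.0 maximum would then map above 1.0
        # reflectance, which is no longer physically plausible.
        raise ValueError("target reflectance too high for this picture")
    return scale

# e.g. the map averages 0.62 and we want 40% average reflectance:
rgb = calibrate(0.62, 0.40)  # roughly 0.645 for each of R, G and B
```

The returned value is what would go in the RGB slots of the plastic (or similar) material sitting under the colorpict modifier.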

UV mapping with coordinates:

I played a bit with maps and got familiar with the various transforms and the pic_u/v and tile_u/v options to place the maps to scale. At first glance I did not notice anything in the refman about using more sophisticated UV coordinates (for example, mapping UV coordinates directly to coordinates in a polygon). Is this possible?

If you have (u,v) coordinates in a Wavefront .OBJ file, you can use obj2mesh to preserve them and utilize the "Lu" and "Lv" built-in variables to access them in the .cal file associated with patterns or textures. This is currently the only way to import (u,v) coordinates in Radiance, unfortunately.
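
A sketch of how those pieces might fit together (every file and primitive name here is hypothetical, so check the details against the obj2mesh and colorpict pages of the refman):

```
# 1) Compile the .obj, keeping its (u,v) coordinates:
#      obj2mesh -a materials.rad model.obj model.rtm
#
# 2) uv.cal passes the mesh coordinates straight through:
#      pic_u = Lu;
#      pic_v = Lv;
#
# 3) Index the picture with them, and instance the mesh:
void colorpict uv_pat
7 red green blue texture.hdr uv.cal pic_u pic_v
0
0

uv_pat mesh uv_mesh
1 model.rtm
0
0
```

Here uv.cal simply substitutes the stored (u,v) for the projection a stock mapping file such as picture.cal would otherwise compute from surface position.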

···

Sorry for the barrage of questions.

Kind regards,
Dion

Hello,

I am currently on vacation and will be back on August 7, 2017.

In case of emergency, you can always call Estia's general number, +41 (0) 21/510.59.59, or send an email to [email protected].

For any questions relating to DIAL+, please use the email address [email protected].

Best regards

Julien Boutillier
Estia SA

Thanks Greg, and sorry for the late reply.
It's a pity for displacement maps, but completely understandable :slight_smile:
For normal maps there's nothing wrong with doing the data conversion, just wondering if there was something built in now to skip this step :slight_smile:
Good to hear that occlusion maps really shouldn't be a thing. I'm still curious why they are so popular - yes, they are cheaper to compute, but is that it? Maybe other rendering engines aren't that good at occlusion?
Thanks for the diffuse tip. I assume I should also try to match the average RGB reflectance values if they are known? Or why did you focus just on brightness?
Haven't tried the UV mapping yet, will let you know how it goes :slight_smile:

···

-------- Original message --------
From: Greg Ward <[email protected]>
Date: 31/07/2017 14:11 (GMT+10:00)
To: Radiance general discussion <[email protected]>
Subject: Re: [Radiance-general] Various texture mappings at rendertime

Hi Dion,

As I said, occlusion maps 'look OK' as well as being cheaper to compute. You can also store them with the object as opposed to recalculating them for each new scene, which makes them fit conveniently into the same category as texture, bump, and displacement maps. I don't know if anyone has worked on this, but there might be a way to fit some kind of "near field" interreflection information with occlusion maps to make them a little more realistic. You will still have the existing problems with contact shadows and the like, however.

I mentioned brightness assuming that luminance is what you can conveniently measure. If you can measure RGB somehow, then by all means calibrate to that.

Cheers,
-Greg

···

From: Dion Moult <[email protected]>
Date: August 6, 2017 3:59:25 PM PDT


Hi Dion.
The VI-Suite will turn images and tangential normal maps into Radiance patterns and textures. It still uses texdata, but it is at least semi-automated for you.
Link to the patterns video tutorial: https://youtu.be/0RXTu-brBZI
Link to the textures video tutorial: https://youtu.be/K_nloppSRlg

Regards
Ryan


Thanks, Ryan!

Very cool -- I had not heard about the VI-Suite before:

  http://arts.brighton.ac.uk/projects/vi-suite

Seems to be free and open-source, built atop Blender 3D.

Cheers,
-Greg

···

From: Ryan Southall <[email protected]>
Date: August 7, 2017 1:34:39 AM PDT
