Numerical output of illuminance on 3D models

As the title implies, I’m looking for a way to generate numerical output (i.e. illuminance values at calculation points) for (parts of) 3D models.

I’ve researched this topic on and off over the past year, every time reaching the conclusion that it does not appear to be possible. But as the topic keeps coming back to me, I decided to just ask this great list to see if there is something I missed.

In its simplest form you could consider e.g. just a cube in a lit space, where I would like to know the minimum, maximum and average illuminance per surface of the cube (with the calculation points oriented perpendicular to the surface). In a more advanced case it could be a more complex 3D surface, such as a mesh imported via obj2mesh.

In the past I’ve been doing the simple form by just defining the calculation grid by hand (e.g. the cnt/rcalc/rtrace method), which is quite doable. However, for more complex geometries this would become quite an ordeal. I think I did see some of this functionality in one of the VI-Suite tutorials (Blender-based), but I’d prefer a pure Radiance solution, as that means I do not have to rely on other packages (and their speed of updating).
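For reference, the manual setup I mean is something like the following sketch for a single upward-facing 1 m x 1 m face sampled on a 10 x 10 grid (the grid origin, spacing, the octree name scene.oct and the ambient parameters are placeholders to adapt):

cnt 10 10 \
  | rcalc -e 'x0=0; y0=0; z0=2; s=0.1' \
          -e '$1=x0+($1+0.5)*s; $2=y0+($2+0.5)*s; $3=z0; $4=0; $5=0; $6=1' \
  | rtrace -h -I+ -ab 4 -ad 2048 scene.oct \
  | rcalc -e '$1=179*(0.265*$1+0.670*$2+0.065*$3)' \
  | total -m

Here total -m gives the average illuminance over those points; total -u and total -l give the (approximate) maximum and minimum over the same points.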

Any thoughts on this? For reference, I typically use Python to automate certain elements of my workflow, so maybe there is a solution in that direction… Looking forward to hearing your thoughts!

Knowing the maximum and minimum illuminances over an area requires sampling an infinite number of points. You must be looking for an approximate value, in which case you could set up a grid of sample points to do this with rtrace. Radiance doesn’t use grids per se, so you are sort of on your own for this part. Others have done it, but there’s no simple solution for general surfaces.

Another approach is to render the surface in a parallel view from different sides and use the -i option to rpict (or rtrace). You can then use the pixels as sample points, avoiding the need for your own grid. Again, this is difficult to generalize.
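As a rough sketch (the view position, size, resolution and ambient settings here are placeholders for the top face of a unit cube), that could look like:

rpict -vtl -vp 0.5 0.5 5 -vd 0 0 -1 -vu 0 1 0 -vh 1 -vv 1 -x 32 -y 32 -i -ab 4 scene.oct > top.hdr
pvalue -o -h -H -d top.hdr | rcalc -e '$1=179*(0.265*$1+0.670*$2+0.065*$3)' | total -m

Each pixel then serves as one sample point on that face. Pixels that miss the geometry still return values for whatever lies behind it and have to be filtered out somehow, which is part of why this is hard to generalize.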

@Adrie_dV we more or less want to achieve the same thing I am describing in my thread here. After discussing with Greg and others, what I’ve understood is that Radiance can calculate output values for different metrics (illuminance included) on a pixel-by-pixel basis, i.e. in 2D image space. How you then map these back to the vertices/faces of a 3D mesh is up to the user, and different mapping solutions exist.

At the moment I am working on a pipeline that will, if possible, more or less automate the procedure. I am new to Radiance, so the learning curve is quite steep until I get more comfortable with all the available tools. Whether it will work in the end I am not quite sure, but I am optimistic given Greg’s comment that others have done it in the past. Unfortunately, there is not much clear documentation on the steps to follow.

Thanks Greg for your response! I did indeed intend to create a finite number of calculation points, so it would be an approximation. So far I have done the manual setup, and from your message I understand that this is essentially the only way, so my earlier suspicions are now confirmed, thanks!

@Theo I had browsed through your thread as well, but it seemed to focus more on the translation of the model (with materials) to Radiance, which I typically do using Blender and the excellent tutorials of @Dion_Moult that you can find here and here. But I missed that you also asked whether output per vertex/face is possible, and that is indeed what I was looking into as well.

As said, I am using a Python/Radiance combination (i.e. using Python to interface with the command line). I might be able to use that to simply extract the relevant face coordinates and normals and then feed those to rtrace…

But that needs some more thinking about whether it is feasible/easy to implement, and the question always remains whether doing it is really worth the effort…
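To make the idea a bit more concrete, here is a rough sketch of that extraction step straight from the command line (Python would do the same job). It assumes a triangulated OBJ with plain "f v1 v2 v3" face records, consistent outward-facing winding, and an octree scene.oct, and it samples each face once at its centroid along the face normal:

awk '/^v / { vx[++n]=$2; vy[n]=$3; vz[n]=$4 }
     /^f / { a=$2+0; b=$3+0; c=$4+0
             ux=vx[b]-vx[a]; uy=vy[b]-vy[a]; uz=vz[b]-vz[a]
             wx=vx[c]-vx[a]; wy=vy[c]-vy[a]; wz=vz[c]-vz[a]
             nx=uy*wz-uz*wy; ny=uz*wx-ux*wz; nz=ux*wy-uy*wx
             l=sqrt(nx*nx+ny*ny+nz*nz); if (l==0) next
             printf "%g %g %g %g %g %g\n",
                    (vx[a]+vx[b]+vx[c])/3, (vy[a]+vy[b]+vy[c])/3, (vz[a]+vz[b]+vz[c])/3,
                    nx/l, ny/l, nz/l }' model.obj \
  | rtrace -h -I+ -ab 4 scene.oct \
  | rcalc -e '$1=179*(0.265*$1+0.670*$2+0.065*$3)'

The normal direction follows the face winding, and quads, relative vertex indices, or more than one sample per face would all need extra handling, so this is only a starting point.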

Thanks again for the quick support!


@Adrie_dV I am interested both in translating a model to Radiance and in point clouds with information per vertex/face, and accordingly in getting output in that format.

I am also using a Python/Radiance combination; as for whether it is worth it, I guess experimenting doesn’t hurt :stuck_out_tongue:

In any case, if you believe I can help somehow, feel free to let me know, since as I said I am also interested in such a solution.

Greg, is it possible to use xform directly to transform the (6-column) list of measurement points and orientation vectors for input to rtrace, rather than mimicking the transform ‘from scratch’ using rcalc etc.? In other words, if I had the (xp,…,zd) list for the faces of a cube at, say, the origin, I could transform either the cube or the list to anywhere in the scene using the same set of -t, -rx, etc. arguments.

@Adrie_dV, @Theo
Maybe you would find Pyrano useful. It is a Python package for generating sensor points over surfaces (using EnergyPlus geometry) and simulating solar irradiance.

Pyrano was originally developed for simulating shading on PV modules using LiDAR point clouds without ray-tracing, but it can also run 2-Phase solar irradiance simulations with Radiance via Python.
When you generate the sensor points over a surface, apart from the .pts file, Pyrano also saves a .json file that keeps track of which sensor points belong to which EnergyPlus surface. With this you can tell, at each timestep, the min/max/avg irradiance of the surfaces. You can also make visualizations (see below).

I have uploaded a quick example on our GitLab page. There you can find chapel_roof.py, which is the simulation workflow, and chapel_roof_irrad_sim_postprocess.py, which shows how to make visualizations and calculate surface-average irradiances.

If you want to give it a try, you can install Pyrano with pip install pyrano.

Thanks Adam, definitely worth checking out, even if it’s just to see your Python <-> Radiance workflow! Not sure if I can also reuse the sensor point generation part (as I’m not using EnergyPlus), but let’s see where the overlaps are :slight_smile:

@adambgnr I concur with @Adrie_dV: your work seems quite interesting. Thanks for letting us know.

Well, it’s a bit of a hack, but you could smuggle the position and orientation in a “ring” using rcalc, pass it through xform, then get the result out with another rcalc command. Something like:

rcalc -o ring.fmt -e 'px=$1;py=$2;pz=$3;dx=$4;dy=$5;dz=$6' calcpts.txt \
  | xform [options] \
  | rcalc -i ring.fmt -e '$1=px;$2=py;$3=pz;$4=dx;$5=dy;$6=dz'

and “ring.fmt” contains:

void ring stowaway
0
0
8
${ px } ${ py } ${ pz }
${ dx } ${ dy } ${ dz }
0 1
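
With placeholder transform arguments, the whole round trip could then feed straight into rtrace, e.g.:

rcalc -o ring.fmt -e 'px=$1;py=$2;pz=$3;dx=$4;dy=$5;dz=$6' calcpts.txt \
  | xform -rz 45 -t 5 0 0 \
  | rcalc -i ring.fmt -e '$1=px;$2=py;$3=pz;$4=dx;$5=dy;$6=dz' \
  | rtrace -h -I+ scene.oct \
  | rcalc -e '$1=179*(0.265*$1+0.670*$2+0.065*$3)'

If xform prepends a comment line to its output, a grep -v '^#' between xform and the second rcalc should keep the format matching happy.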

Sneaky! And, by Radiance standards, that’s only a little bit of a hack :wink:

@Adrie_dV you might want to check out the VI-Suite add-on for Blender. I’ve recently discovered it and it seems that it is possible to get a numerical result per vertex/face on a model.