# Using rsensor on a grid of points

Dear all,

I’m trying to calculate illuminance on a (dynamic) grid of points in a given space. This is thoroughly explained in the Radiance Cookbook, and I have had it implemented and working for quite some time. However, now I would like to do the same, but with a sensor with a limited field of view (i.e. a cut-off). I’ve already scoured these boards a bit and found two solutions: the first is to simply create a black cone, and the second uses rsensor. However, the solutions I’ve seen generally only discuss application to a single point, whereas I would like to do this (dynamically) on a grid of calculation points.

The cones would of course be the most straightforward, but it would require me to ‘litter’ the model with dozens of small black cones and as I’m using this ‘shielded detector’ in multiple directions, it would require a new model for each direction of calculation.

The rsensor option seems to be the more elegant approach, however, I struggle a bit with how I would make rtrace (or rsensor directly) execute this on a grid of points.

For reference, this is a short version of my current setup to calculate horizontal illuminance. Note that no output format is specified, as I read the output directly into Python.

```
cnt 20 30
| rcalc -e '$1=$1*pitchx; $2=$2*pitchy; $3=workplaneheight; $4=0; $5=0; $6=1'
| rtrace -n 8 -af calc.cache [rtrace options] -I -h -oov octree
| rcalc -e '$1=$1; $2=$2; $3=$3; $4=179*(.265*$4+.670*$5+.065*$6)'
```
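Since the output is read into Python anyway, the grid-generation step (the `cnt | rcalc` stage) can also be sketched there. This is just an illustrative equivalent; the grid size, pitch values and workplane height below are example numbers, not taken from the actual model:

```python
# Generate "x y z dx dy dz" input lines for rtrace, equivalent to the
# cnt | rcalc stage above. Grid size, pitches and height are examples.
def grid_points(nx, ny, pitchx, pitchy, z):
    lines = []
    for i in range(nx):
        for j in range(ny):
            # upward-facing sensor direction (0 0 1)
            lines.append(f"{i * pitchx} {j * pitchy} {z} 0 0 1")
    return lines

pts = grid_points(20, 30, 0.2, 0.2, 0.8)
print(len(pts))   # 600 points
print(pts[0])     # "0.0 0.0 0.8 0 0 1"
```

These lines can then be fed to `rtrace` on stdin via `subprocess`, exactly as the `cnt | rcalc` pipeline would.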

So the question: how do I (or can I) get this to work with a shielded sensor?

Just a slight bump + update…

I implemented the small-black-cone approach, but as feared, this is quite cumbersome: for each direction I use the cones in, I have to build a new objects file (with the cones in the right direction), build a new octree, and recalculate the entire thing. Of course, most of this can be automated, but the small black cones could theoretically be blocking part of the incoming light, as I’m using a spacing between calculation points of 20 cm (and this could be even smaller). I could of course make the cone smaller (now it’s 2 cm high), limiting the chance and amount of blocked light, but there will always be some doubts about accuracy…

So, coming back to the rsensor approach: anybody out there who can enlighten me on how I might use the rsensor option on a grid of points, without (preferably) having to recalculate the entire scene for each grid point?

Sorry I never saw your original post – I think Discourse e-mail issues (which have since been resolved) kept a lot of posts from going out to subscribers.

The rsensor tool is probably the right approach; you just need to create a sensor file that describes your cone distribution, which should be fairly easy. Once that’s done, you can give as many sensor directions as you like on a single rsensor command line, and it will print one result per sensor. Of course, there is a practical limit to how many sensors you can add to a command line before it gets too long, so you may need an alternative method.

This is going to look a bit crazy, but if you can produce a set of sensor origins and directions, you can perform your calculation like so:

```
rcalc -o 'rsensor -h -rd 10000 -vp ${$1} ${$2} ${$3} -vd ${$4} ${$5} ${$6} sensor.dat .' posdir.txt
| sh
| rtrace -n 8 -af calc.cache [rtrace options] -h -ov octree
| total -10000 -m
| rlam posdir.txt -
| rcalc -e '$1=$1;$2=$2;$3=$3;$4=179*(.265*$7+.670*$8+.065*$9)'
```

The rcalc command generates a list of rsensor commands that are executed by the shell, whose collective output is fed into a single invocation of rtrace. In this form, the rsensor commands produce ray origins and directions rather than a summed result, so they do no ray tracing at all; they just generate the needed rays. Each individual ray then has its value calculated by our rtrace command, whose results are fed into total. Its job is to average together the results for the rays corresponding to a single sensor (10000 in this example). You can of course adjust this number to suit your time/accuracy trade-off.
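The grouping-and-averaging step (`total -10000 -m` followed by the final rcalc) can be sketched in Python for clarity. This is only an illustration of the arithmetic, with a toy block size of 4 and made-up RGB values:

```python
# Average rtrace RGB results in fixed-size blocks (one block per sensor),
# then convert the averaged RGB to a photopic value, mimicking the
# "total -N -m" stage and the final rcalc. Toy data, block size 4.
def block_average(rows, n):
    out = []
    for k in range(0, len(rows), n):
        block = rows[k:k + n]
        out.append([sum(c) / len(block) for c in zip(*block)])
    return out

rays = [[1.0, 1.0, 1.0], [3.0, 3.0, 3.0],
        [2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]  # 4 rays for one sensor
avg = block_average(rays, 4)[0]            # [2.0, 2.0, 2.0]
lux = 179 * (0.265 * avg[0] + 0.670 * avg[1] + 0.065 * avg[2])
print(lux)  # 358.0
```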

Note also that I removed the rtrace -I option – you can’t use this if you want to average radiances. I also changed -oov to -ov, which is explained below.

The averages out of total then get remarried to the position-direction file, which is filtered by a final rcalc command to pick out what we want, which is the origin of each sensor and the photometric average.

Cheers,
-Greg

Wow… I would definitely not have been able to think of that one myself. Thanks! Will start working with it.

Btw, the pipe in ‘sh’ is the pipe into the shell?

Yes, putting `| sh |` in the middle of a command line causes its input lines to be executed as commands, whose output is then sent further down the pipe. You haven’t given any actual arguments to the shell command itself. (As you did, I left off all the backslashes you need to escape the newlines, just to keep things neat, but they really should be there.)
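In Python terms, the `| sh` stage behaves like feeding a string of commands to a shell’s standard input and collecting the combined output. Here toy `echo` commands stand in for the generated rsensor commands, purely to illustrate the mechanism (assumes a POSIX `sh` is available):

```python
import subprocess

# Each input line is executed as its own command by sh, and the combined
# stdout continues down the pipeline. Toy echo commands stand in for
# the rcalc-generated rsensor commands.
cmds = "echo 0 0 1\necho 0 1 0\n"
out = subprocess.run(["sh"], input=cmds, capture_output=True,
                     text=True, check=True).stdout
print(out)  # "0 0 1\n0 1 0\n"
```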

Ok, got the code running in my environment with a test sensor file using a ‘regular’ cosine correction, just to be able to compare it to my regular output. However, the outcome is a factor of about 3 (pi?) lower than the output of either a direct call of rsensor (with that same sensor file) for a single point, or rtrace for a single point, so I must be missing something. I also tried a shielded version of that sensor file (replacing part of it with 0’s) to see how close it came to my ‘cup/cone’ solution. For that test, the single-point rsensor and rtrace (with cups) were also relatively close, but the grid version of rsensor was quite far off (though no longer by a factor of pi), and for some odd reason the maximum value was even higher than for the non-shielded version…

For reference, these are the rsensor and rtrace commands I use for comparison at position 1 1 1 with an upward-aimed sensor (regular cosine correction):

```
rsensor -h -rd 10000 [rtrace options] -vp 1 1 1 -vd 0 0 1 cosine_test.dat octree
| rcalc -e '$1=179*(.265*$1+.670*$2+.065*$3)'
```

which results in 144 lx.

```
echo '1 1 1 0 0 1' | rtrace -I [rtrace options] -h -oov octree
| rcalc -e '$1=$1; $2=$2; $3=$3; $4=179*(.265*$4+.670*$5+.065*$6)'
```

which results in 138 lx.

And Greg’s suggested code results in 46 lx for that point:

```
rcalc -o 'rsensor -h -rd 10000 -vp ${$1} ${$2} ${$3} -vd ${$4} ${$5} ${$6} cosine_test.dat .' posdir.txt
| sh
| rtrace -n 8 -af calc.cache [rtrace options] -h -ov lum.oct
| total -{raynr} -m
| rlam posdir.txt -
| rcalc -e '$1=$1;$2=$2;$3=$3;$4=179*(.265*$7+.670*$8+.065*$9)' > test.txt
```

Any thoughts are appreciated

(On a side note: in my actual code every line is nicely appended with a \ to keep it a bit more readable.)

Yes, there will be a factor of pi between the averaged output of rtrace run on rsensor-generated rays and an rtrace run using the sample position and the -I+ option. Keep in mind that the absolute output of rsensor is not considered important, and “irradiance” only makes sense for a full hemisphere integral. As soon as you play with the sensitivity, deviating from cosine or applying a shield, the pi “projected solid angle” factor changes to something else, and in most ways becomes irrelevant.

You mentioned your rsensor grid calculation deviating in other ways. Can you explain?

Thanks for the quick reply. What is the -I+ option you mention? I’ve seen it mentioned before on these forums, but never found a reference on the ‘whatis’ pages.

W.r.t. the differences: when I take my original rtrace-based code for the full grid and compare the output per coordinate to the rsensor code for the full grid, I get ratios between 2.66 and 3.33, so not exactly pi, but I assume there is some rounding and Monte-Carlo-esque behavior going on here.

When I modify the sensor file to 0’s for all polar angles > 40 degrees to represent an 80-degree field of view, use that file to calculate the full grid, and compare that to my rtrace method with physical black cups, I get ratios between 0.86 and 1.89. So, next to the fact that the spread in these ratios is quite large, it also indicates that the result is sometimes larger (ratio > 1) and sometimes lower (ratio < 1).

I already noticed that when I calculated the maximum over all grid positions: the maximum of the rsensor grid with the field-of-view shielding was higher than the maximum of the rsensor grid with the regular cosine-corrected sensor file (which I guess should not be possible, as the shielded maximum should always be the same or lower).

For reference, my sensor file with the cut-off:

```
degrees 0 90 180 270
0 1.0000 1.0000 1.0000 1.0000
10 0.9848 0.9848 0.9848 0.9848
20 0.9397 0.9397 0.9397 0.9397
30 0.8660 0.8660 0.8660 0.8660
40 0.7660 0.7660 0.7660 0.7660
50 0.0000 0.0000 0.0000 0.0000
60 0.0000 0.0000 0.0000 0.0000
70 0.0000 0.0000 0.0000 0.0000
80 0.0000 0.0000 0.0000 0.0000
90 0.0000 0.0000 0.0000 0.0000
```
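As a sanity check, the hemispherical integral of a sensor table like this can be computed numerically: for a full cosine table it comes out close to pi, while the cut-off version integrates to noticeably less. A rough Python sketch, assuming the table is rotationally symmetric with 10-degree steps (a simplification of the actual four-azimuth file above):

```python
import math

# Integrate a rotationally symmetric sensor table over the hemisphere:
#   integral = 2*pi * sum of S(theta) * sin(theta) * dtheta
# using the trapezoidal rule on 10-degree steps.
def table_integral(values, step_deg=10):
    step = math.radians(step_deg)
    total = 0.0
    for k in range(len(values) - 1):
        t0, t1 = k * step, (k + 1) * step
        f0 = values[k] * math.sin(t0)
        f1 = values[k + 1] * math.sin(t1)
        total += 0.5 * (f0 + f1) * step
    return 2 * math.pi * total

cosine = [math.cos(math.radians(10 * k)) for k in range(10)]  # 0..90 deg
shielded = cosine[:5] + [0.0] * 5   # zeroed beyond 40 degrees, as above
print(round(table_integral(cosine), 3))    # close to pi
print(round(table_integral(shielded), 3))  # noticeably less than pi
```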

The -I+ is just a more explicit way of saying “-I”. All boolean flags for the renderers toggle the setting if nothing follows, e.g., “-I -I” would turn irradiance sampling off again. Setting “-I+” explicitly turns the setting on, no matter what options preceded it. Synonyms for this include “y” or “Y”, “1”, and “t” or “T”. False settings can be any of the characters [-nN0fF]. This is covered in the rtrace and rpict man pages.

The ratio variance is most likely due to Monte Carlo sampling as you surmise. I expect the maximum value is also due to the same thing, and the larger ratios you saw are probably because some bright regions (such as windows) were being cropped by some of the sensor cones. I would have to go into your scene to analyze it further, but I trust you can do that if you are concerned about it.

Apologies for the slow response and thanks for bearing with me

For clarification, the maximum values I was comparing are between 2 versions of the rsensor custom code, one with no shielding (regular cosine corrected file) and one with shielding (same sensor file but with 0’s for the higher polar angles). With these 2 outputs, the maximum of the shielded version was about 40% higher than the output of the non-shielded one…that seems a bit too much to be due to sampling.

Anyway, diving into a bit more, I looked up the exact position of that maximum point, and calculated it with just a single rsensor command, putting the position into the vp part like so:

```
rsensor -h -rd 10000 [rtrace options] -vp 2.7 0.5 1 -vd 0 0 1 cosine_test.dat octree
| rcalc -e '$1=179*(.265*$1+.670*$2+.065*$3)'
```

And I did that both with the unshielded and shielded sensor file. The output there makes more sense as the non-shielded value at that point is higher than the shielded version.

So it appears that the custom code to calculate the grid of rsensor points is doing something different than the single rsensor call…

Now, one thing I was wondering about: you already mentioned in your first response that it actually is doing something different. The rsensor part in the custom code is only there to generate ray origins and directions, so does it even incorporate a cosine correction? Looking at the intermediate steps, that weighting does not seem to be carried between the different steps in the process… or does rtrace do that automatically?

Anyway, considering that I’m modeling just a simple grey box with a single luminaire (not aimed at the measuring points), combined with the above observations, I still think something is amiss with the custom code… I just can’t figure out what. That is in part due to my lack of knowledge of the fundamental principles behind Radiance, I assume…

Good analysis, and I think your result makes sense. The difference between the two methods that leads to the custom pipeline being larger is subtle but important. Comparing A to B:

A) rsensor is used directly to evaluate the “sensor distribution.” To do this, it integrates the sensor table you have given it to get a global scaling factor, which it applies to the Monte Carlo importance samples it sends out. For a normal cosine distribution, this integrated value is pi. For a cone cut-off of a cosine distribution, it will be something less than pi.

B) rsensor generates the importance sampling rays according to the sensor table, but no global scaling is assigned to these rays. The cosine-weighting is built into the distribution of samples, i.e., there will be more rays in the direction of the normal than near the cone cut-off per solid angle. These rays then are evaluated by rtrace and averaged together without the scale factor in (A).

In summary, comparing an (A) shielded calculation to an (A) unshielded one, the unshielded will always be higher due to multiplication by the integrated sensor table. However, a (B) shielded calculation can in fact be higher than a (B) unshielded one, because there is no normalization, and you are just getting the average of all the rays you sent out.
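The (A)/(B) distinction can be illustrated numerically. Below is a rough sketch (deliberately not Radiance code) using a made-up radiance distribution that is bright inside a 40-degree cone and dim outside it. Method A scales the sensor-weighted average by the table integral, while method B is the plain average, which is why a shielded B value can exceed the unshielded one:

```python
import math

# Sensor-weighted mean of a radiance distribution over the hemisphere,
# via the midpoint rule in polar angle:
#   B = integral(S*L*sin t dt) / integral(S*sin t dt)   (plain average)
#   A = B * 2*pi*integral(S*sin t dt)                   (scaled by table)
# Toy scene: radiance 10 inside a 40-degree cone, 1 outside it.
def methods_a_b(sensor, radiance, steps=9000):
    num = den = 0.0
    for k in range(steps):
        t = (k + 0.5) * (math.pi / 2) / steps
        w = sensor(t) * math.sin(t) * (math.pi / 2) / steps
        num += w * radiance(t)
        den += w
    b = num / den                 # method B: average of the sampled rays
    a = b * 2 * math.pi * den     # method A: scaled by the table integral
    return a, b

L = lambda t: 10.0 if t < math.radians(40) else 1.0
cosine = lambda t: math.cos(t)
shielded = lambda t: math.cos(t) if t < math.radians(40) else 0.0

a_full, b_full = methods_a_b(cosine, L)
a_cut, b_cut = methods_a_b(shielded, L)
print(b_cut > b_full)   # True: shielded B exceeds unshielded B
print(a_cut < a_full)   # True: the properly scaled A does not
```

The bright region sits entirely inside the cone, so the shielded plain average (B) lands on the bright value, while the unshielded one is diluted by the dim surroundings; the table-integral scaling in A restores the expected ordering.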

Although it may seem concerning, bear in mind that it only makes sense to compare a sensor, however it is shielded and wherever it is placed, to the same type of sensor. So long as you don’t mix sensor types in the same calculation, the actual scaling factor should be irrelevant. If you absolutely need to mix your sensor types and compare their outputs, then you may need to call rsensor directly and not pass its sample rays to rtrace as we are doing in our hack.

-Greg