I have a scene with more than 26,000 sources representing suns and 80 mirror-like faces that act as virtual sources. The study includes more than 100,000 sensor points. Normally, with this many sensors, I would split them into smaller chunks and run the chunks in parallel, but that approach doesn't pay off as well in this case.
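For context, the chunk-and-parallelize workflow I mean is something like the sketch below. All file names (`sensors.pts`, `scene.oct`) and the chunk size are placeholders, and the rtrace options are just an illustrative subset, not my actual study settings:

```shell
# Split the sensor-point file into chunks of 5,000 points each
# (produces sensors_aa, sensors_ab, ...).
split -l 5000 sensors.pts sensors_

# Run up to 8 rtrace processes at once, one per chunk.
ls sensors_* | xargs -P 8 -I {} sh -c \
  'rtrace -h -I+ -dr 1 scene.oct < {} > {}.res'

# Recombine the per-chunk results in chunk order.
cat sensors_*.res > all.res
```

The problem described below is that each parallel worker repeats the same expensive per-scene pre-calculation, so the speedup from adding workers is much smaller than expected.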
Based on what I can see in the source code, and also on timing the runs, my understanding is that when I run an rtrace / rcontrib command with
-dr 1, Radiance calculates the shadow testing for the first sensor and then reuses that calculation for the rest of the sensors. As a result, when I run the study for 1 sensor it takes 10 minutes, for 100 sensors it is 12 minutes, and for 200 sensors it is 15 minutes: 10 minutes for the initial calculation and then ~2 minutes for each additional 100 sensors.
In other words, the overhead of repeating the pre-calculation in every process outweighs the benefit of parallel processing. I was hoping I could share the shadow-testing calculations between runs, similar to how
-af filename shares the ambient calculation between runs on the same scene. Is this possible? And if not, is there a workaround to achieve it?
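To make the analogy concrete, here is a hedged sketch of the -af behavior I'd like to mirror for shadow testing. The file names are assumed, and the options shown are just for illustration:

```shell
# Two sequential rtrace runs on the same octree, sharing one ambient file.
rtrace -h -I+ -ab 2 -af scene.amb scene.oct < sensors_part1.pts > part1.res
rtrace -h -I+ -ab 2 -af scene.amb scene.oct < sensors_part2.pts > part2.res

# scene.amb accumulates indirect-irradiance values, so the second run
# reuses ambient values computed by the first instead of recomputing them.
```

I'm looking for an equivalent mechanism where the per-scene shadow-testing work done by the first process could be written out once and reused by subsequent (or parallel) processes.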
I found this page on the Radiance website, which is in line with what I'm trying to do, but I'm not sure how to add it to or use it with Radiance: Direct Cache Manual.