Rtrace simulation optimization

Hi,
We are looking for ways to speed up our Radiance simulations. We compute the irradiance at pre-defined positions in space, in an application related to photovoltaics (similar to bifacial_radiance).
Our simulation has the following characteristics:

  • Thousands of sensors in each rtrace execution, for each scene.
  • Multiple rtrace processes running concurrently, with each process simulating a different scene.

So far, we have been able to accelerate the simulation time with the following changes:

  • We changed the definition of the geometries in the scene from meshes (.rtm) to lists of triangles (polygon primitives). We tried this after reading this post.
  • We tuned the rtrace parameters to our target precision; relaxing them any further degrades accuracy beyond what we can accept.
  • We have reduced the number of sensors to the minimum we need.
  • We communicate the inputs/outputs of rtrace through streams, with the data in binary (the -fdd rtrace option).
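For context, here is a minimal Python sketch of how we prepare and read those binary streams. The sensor values are made up, and it assumes the documented -fdd layout: six native doubles per ray (origin + direction) on stdin, and, with the default -ov output, three doubles (R, G, B) per result on stdout.

```python
import struct

# Hypothetical sensors: (x, y, z, dx, dy, dz) per ray.
sensors = [
    (0.0, 0.0, 1.0, 0.0, 0.0, -1.0),
    (1.0, 0.0, 1.0, 0.0, 0.0, -1.0),
]

# Pack all rays as native doubles for rtrace's stdin (-fdd input side).
ray_bytes = b"".join(struct.pack("6d", *s) for s in sensors)

# Unpack rtrace's -fdd output: with -ov, three doubles (R, G, B) per
# sensor, i.e. 24 bytes per result record.
def unpack_rgb(out_bytes, n_sensors):
    return [struct.unpack_from("3d", out_bytes, 24 * i) for i in range(n_sensors)]
```

We then feed ray_bytes to the rtrace subprocess's stdin and pass its stdout to unpack_rgb; avoiding ASCII formatting on both sides is what gave us the speedup mentioned above.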

We would like to know the following:

  • Are there any other known optimizations we could use to further reduce the simulation time?
  • We observed poor speedup scaling when running multiple rtrace processes, each with a different scene (e.g. 8 concurrent processes gave a speedup of less than 4×) on Windows (we will try Linux in the future). Could there be some interference between the processes that affects their speed?
  • We run an annual simulation where the geometry and the sensors' positions/directions can change between intervals. Is there a way to do an annual simulation for this use case that doesn't involve running rtrace for each interval?

Many thanks for your help. Best regards,
Diego
