I just started using Radiance, so this may be a very basic question.
I am exploring the relationship between the ambient parameters -ad, -ab, -as, -aa, and -ar and both computational accuracy and computational load, in order to find appropriate settings for lighting designers to use in simulations.
As a test model, I use a simple room of 5 m × 10 m × 2.8 m with a single window on the south side, and set the observation point at (2.5, 5, 0.7).
I set the parameters above to various values and tried to identify trends by inspecting the number and location of sample points with lookamb.
At first, with -ad 2 and -ab 1 (a very small number of samples), I expected the rays fired from the observation point to hit the room surfaces at 2^2 = 4 points, so that information for 4 sample points would be output.
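To make my counting assumption concrete: I imagined the number of primary ambient rays as a power of -ad, but it could also be that the hemisphere is stratified into roughly square patches whose total is about -ad. A rough Python sketch of that alternative (this stratification formula is my guess; the exact counts in Radiance's source may differ):

```python
import math

def hemisphere_divisions(ad):
    """Split a hemisphere into roughly square patches totalling about `ad`.
    Rough sketch of stratified hemisphere sampling -- my assumption,
    not necessarily the scheme used in Radiance's source."""
    nt = max(1, round(math.sqrt(ad / math.pi)))  # rows in elevation (theta)
    np_ = max(1, round(math.pi * nt))            # columns in azimuth (phi)
    return nt, np_, nt * np_
```

Under this sketch, -ad 2 collapses to only a handful of divisions rather than 2^2 = 4, which might already explain a sample count different from my expectation.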
However, with -ar 2 and -aa 0.5, about 8 sample points were recorded, more than the 4 I expected.
I assume this is because irradiance caching (IC) is being performed and extra cache points are being added.
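To make this assumption concrete, here is a minimal Python sketch of the caching decision I imagine: a cached value is reused only when its interpolation weight (the formula from Ward et al.'s 1988 irradiance-caching paper) is large enough relative to the -aa tolerance; otherwise a new hemisphere calculation is done and stored as a new cache point. All names and the record layout here are mine, not Radiance's:

```python
import math

def weight(p, n, rec):
    """Ward-style interpolation weight of cached record `rec` at query
    point p with surface normal n. rec = (pos, normal, radius, value);
    this record layout is hypothetical, for illustration only."""
    pos, nrm, radius, _ = rec
    d = math.dist(p, pos)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, nrm))))
    denom = d / radius + math.sqrt(max(0.0, 1.0 - dot))
    return float('inf') if denom == 0.0 else 1.0 / denom

def ambient_value(p, n, cache, aa=0.5, compute_new=None):
    """Interpolate from usable cache records; otherwise compute a fresh
    hemisphere sample and store it as a new cache point at p."""
    thresh = 1.0 / aa  # records weighted above 1/aa are considered usable
    usable = [(w, r) for r in cache if (w := weight(p, n, r)) > thresh]
    if usable:
        wsum = sum(w for w, _ in usable)
        return sum(w * r[3] for w, r in usable) / wsum
    val, radius = compute_new(p, n)  # full hemisphere sampling happens here
    cache.append((p, n, radius, val))
    return val
```

If this is roughly right, it would explain the extra points: wherever the existing cache cannot interpolate within the -aa tolerance, a new cache point appears at the query position itself.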
Based on these observations, here are my questions:
What is the algorithm by which rays are fired and cache points are created?
I have sketched a flowchart of the process as I understand it. Is it correct? I suspect the real algorithm is more complicated than that.
When IC adds a new cache point, how is its position determined?
What algorithm is used for supersampling (-as)?
My guess is that the hemisphere over the sample point is divided into patches, and new rays are sent between adjacent patches whose brightness differs greatly.
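To make this guess concrete, here is a small Python sketch of the heuristic I have in mind (purely my assumption, not Radiance's actual -as code):

```python
def supersample(patch_values, extra):
    """Pick the `extra` patch boundaries with the largest brightness jump
    and return their indices -- a sketch of difference-driven supersampling
    as I imagine it, not Radiance's actual -as implementation."""
    diffs = [(abs(patch_values[i + 1] - patch_values[i]), i)
             for i in range(len(patch_values) - 1)]
    diffs.sort(reverse=True)
    # send the extra rays between patch i and patch i+1
    return [i for _, i in diffs[:extra]]
```

Is the real criterion something like this brightness difference between neighboring divisions, or is it based on variance within each division?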
What is the u-vector output by lookamb?