I am using rtrace to calculate the sky view factor for points in an urban environment.
Following the approach detailed in other posts, I pipe the coordinates of each point together with a vertical direction vector (0 0 1) into rtrace with the -I flag set.
The sky description is:

void glow sky_glow
0
0
4 0.3183098886 0.3183098886 0.3183098886 0

sky_glow source sky
0
0
4 0 0 1 180
I was wondering how the precision of this approach compares to other methods, such as the one employed in Ladybug.
Appreciate the time.
I assume you’re using -ab 1 to get just the irradiance from the directly visible sky. If so, switch off ambient interpolation (-aa 0) to ensure hemispherical sampling occurs at each point supplied to rtrace. Then, using one of your scenes, test for convergence by progressively doubling -ad (setting -as to, say, half the -ad value), starting with perhaps -ad 128. What counts as acceptable accuracy is up to you, but the convergence should be evident.
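The sweep above can be scripted so each run only differs in -ad/-as. A sketch that prints the command for each step (the octree and points file names, scene.oct and points.txt, are hypothetical; run the printed commands in your shell and compare the outputs between steps):

```python
# Generate a doubling -ad convergence sweep, with -as at half of -ad,
# as suggested above. -aa 0 disables interpolation, -h drops the header.
ad = 128
while ad <= 4096:
    print(f"rtrace -I -h -ab 1 -aa 0 -ad {ad} -as {ad // 2} "
          f"scene.oct < points.txt")
    ad *= 2
```

When the per-point results stop changing between successive doublings (to within your tolerance), the sampling has converged.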
Thanks for the insight. It helped a lot.
PS: Appreciate your “Ambient Calculation: Crash Course”. Great stuff. Thanks