I’m working on a simple engine that calculates the irradiance of sensors placed on vertical surfaces throughout a scene (on the order of 1e5-1e7 sensors) over the course of a year.
Most of the work happens outside of Radiance, in a set of GPU kernels I have written with the Taichi acceleration framework for Python. I’m not looking for high accuracy, just reasonable estimates; the use case is clustering regions of facades based on their solar gains throughout a year, as well as generating a rough 8760-hour timeseries of solar gains per sensor that is used as input for a machine learning algorithm which predicts energy use.
My question is about a specific part of the output of gendaymtx, but I am happy to receive feedback about the process in general too!
My current approach for computing the irradiance timeseries is this:
- convert EPW to WEA
- generate an hourly Tregenza sky with m Reinhart subdivisions using Radiance’s gendaymtx
- Sum up the RGB channels (this is the step I have a question about - I think it should probably be a weighted sum of some sort)
- convert the sky matrix to a meridional/parallel subdivision scheme - this is motivated by the fact that I wrote my GPU kernels assuming that’s how gendaymtx sky patches looked, before I realized the actual scheme of a Tregenza/Reinhart sky… this was a quicker fix than tracking a different azimuth count per parallel band in the sky matrix
- For each “row” (i.e. zone between two parallels) of the reinhart sky, I just subdivide the patches (keeping the radiance the same) and aggregate patches (solid-angle weighted) in order to get the desired number of patches per zone.
- to keep the process simple, I find the LCM of the desired number of sky patches in the row and the current number of sky patches in the row, subdivide each sky patch equally by the factor needed to reach the LCM, then group the resulting patches by the factor needed to get the desired count and take the mean. That mean is naturally solid-angle weighted, since all patches in a row still have the same solid angle. E.g. given 72 patches in a row and a target of 48 patches, subdivide each of the 72 patches into 2 (keeping the parent’s radiance), then group them every 3 patches and take the mean, leaving 48 patches.
- Compute the solid angle Ω of each of the new sky patches (so really just one solid angle per parallel band, since all patches within the band have the same solid angle)
- For each sky patch and each timestep, compute the irradiance E of a surface whose normal points at the centroid of that sky patch, as radiance × solid angle of said sky patch at that timestep
- For every sensor in my scene, emit a ray toward the centroid of each sky patch (ignoring the half of the sky patches behind the surface, and also ignoring the zenith and ground), and record whether it hits the sky patch or not. If it hits another surface in the scene, for now I just terminate the ray (i.e. no bounces/all surfaces are black holes)
- For every sensor, I know now which sky patches it sees, so I then…
- Create timeseries of each sensor’s irradiance
- compute the angle of incidence between every ray that hits a sky patch and the surface normal of the surface the sensor belongs to; the cosine of that angle gives a scaling factor applied to the irradiance that would be received by a surface normal to that ray.
- Now I have angle-of-incidence scaling factors for every ray that sees a sky patch from a given sensor, so it is easy to step through the year and accumulate irradiance per timestep per sensor: sum, over the rays that hit sky patches, the product of the irradiance of a surface normal to that sky patch at that timestep and the corresponding a.o.i. scaling factor.
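To make the per-row rebinning step concrete, here is a minimal NumPy sketch of the LCM subdivide-and-aggregate scheme described above (the function name `rebin_row` is mine, not from any library):

```python
from math import lcm

import numpy as np

def rebin_row(radiances, n_dst):
    """Rebin one parallel band's patch radiances to n_dst equal patches.

    Subdividing a patch keeps its radiance unchanged; averaging slices of
    equal solid angle is automatically solid-angle weighted.
    """
    n_src = len(radiances)
    n = lcm(n_src, n_dst)
    up = n // n_src     # slices per source patch
    down = n // n_dst   # slices per destination patch
    fine = np.repeat(np.asarray(radiances, dtype=float), up)
    return fine.reshape(n_dst, down).mean(axis=1)

# Example from the text: 72 patches -> 48 patches (subdivide by 2, group by 3)
out = rebin_row(np.arange(72, dtype=float), 48)
```

A sanity check on this scheme: the mean radiance of the row is preserved by construction, since every intermediate slice carries its parent’s radiance and all slices weigh the same.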
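The solid-angle and accumulation steps above can be sketched as follows (function names and array shapes are mine; the band formula assumes equal azimuthal subdivision between two parallels):

```python
import numpy as np

def band_solid_angle(alt_lo, alt_hi, n_patches):
    """Solid angle (sr) of one patch in a band between two parallels at
    altitudes alt_lo and alt_hi (radians), split into n_patches equal
    azimuthal slices: (sin(alt_hi) - sin(alt_lo)) * 2*pi / n_patches."""
    return (np.sin(alt_hi) - np.sin(alt_lo)) * 2.0 * np.pi / n_patches

def sensor_irradiance(L, omega, cos_aoi, visible):
    """Accumulate one sensor's irradiance timeseries.

    L        : (n_patches, n_steps) patch radiance, W/m^2/sr
    omega    : (n_patches,) patch solid angles, sr
    cos_aoi  : (n_patches,) cosine of angle of incidence at the sensor
    visible  : (n_patches,) bool, True if the sensor's ray hit the patch
    returns  : (n_steps,) irradiance, W/m^2
    """
    w = omega * cos_aoi * visible  # per-patch weight; zero if occluded
    return w @ L                   # sum over patches for every timestep
```

Note that a whole band from the horizon to the zenith as one patch gives 2π sr, the full hemisphere, which is a quick check that the formula is consistent.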
I think my approach is sound (doing real-time visualization of it certainly looks approximately correct) and it’s working fast, which is all I really care about at the moment, but if there are any glaring errors, happy to have them pointed out! I’m new to writing a ray tracer/computing radiometric stuff so maybe I have missed something obvious.
Anyways, the one thing I was a little unclear on is the RGB summation in step 3. I could do the whole process tracking RGB values separately, but it’s not necessary for what I care about so would prefer to just combine those values at the start.
I wasn’t sure whether I should sum those three values, take their mean, or take a weighted sum, in terms of the physical interpretation. In my head I normally think of radiance as a single value of W/m2/sr per sky patch, but obviously it makes sense that it can be split into different spectral components. I’m not sure what the proper scheme would be, assuming what I am interested in approximating is the solar gain a sensor on the side of a building would receive.
At the end of the day it doesn’t matter so much to me that I get true units, since further along the pipeline all values get normalized to 0-1 anyway and so lose their physical units, but I still wanted to make sure I am not doing anything crazy by just summing them up. They probably do need some sort of weighting factors, I would think…
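To make the options concrete, here is a sketch of the combinations I’m weighing (the function name is mine; the 0.265/0.670/0.065 weights are the coefficients Radiance itself uses to collapse an RGB radiance triple to a single broadband value, as in its luminance formula 179*(0.265R + 0.670G + 0.065B), though I’m not certain they’re the right choice for solar gains):

```python
import numpy as np

def combine_rgb(rgb, mode="radiance"):
    """Collapse gendaymtx RGB values (..., 3) to a single channel."""
    r, g, b = np.moveaxis(np.asarray(rgb, dtype=float), -1, 0)
    if mode == "sum":        # plain sum of the three channels
        return r + g + b
    if mode == "mean":       # plain mean of the three channels
        return (r + g + b) / 3.0
    if mode == "radiance":   # Radiance's broadband weighting
        return 0.265 * r + 0.670 * g + 0.065 * b
    raise ValueError(mode)
```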