Simple question about combining RGB channels from gendaymtx before a custom GPU-accelerated script

Hi,

I’m working on a simple engine that calculates the irradiance at sensors placed on vertical surfaces throughout a scene (on the order of 1e5-1e7 sensors) over the course of a year.

Context

Most of the work happens outside of Radiance, in a set of GPU kernels I have written in the Taichi acceleration framework for Python. I’m not looking for high accuracy, just reasonable estimates; the use case is clustering regions of facades based on their solar gains throughout a year, as well as generating a rough 8760-hour timeseries of solar gains per sensor, which is used as input for a machine learning algorithm that predicts energy use.

My question is about a specific part of the output of gendaymtx, but I am happy to receive feedback about the process in general too!

My Process

My current approach for computing the irradiance timeseries is this:

  1. Convert the EPW to a WEA (see the command sketch after this list)
  2. Generate an hourly Tregenza sky with m Reinhart subdivisions using Radiance’s gendaymtx
  3. Sum up the RGB channels (this is the step I have a question about - I think it should probably be a weighted sum of some sort)
  4. Convert the sky matrix to a meridional/parallel subdivision scheme. This is motivated by the fact that I wrote my GPU kernels assuming that’s how gendaymtx sky patches were laid out, before I realized the actual schema of a Tregenza/Reinhart sky… this was a quicker fix than tracking a different azimuth count per parallel band in the sky matrix
    1. For each “row” (i.e. zone between two parallels) of the Reinhart sky, I subdivide the patches (keeping the radiance the same) and then aggregate patches (solid-angle weighted) to get the desired number of patches per zone.
    2. To keep the process simple, I find the least common multiple (LCM) of the desired and current patch counts in the row, subdivide each patch equally by the factor needed to reach the LCM, group the fine patches by the factor needed to reach the target count, and take the mean. The mean is naturally solid-angle weighted since all patches in a row subtend the same solid angle. E.g. given 72 patches in a row and a target of 48 patches, subdivide each of the 72 patches into 2 (keeping the parent’s radiance), then group them in threes and take the mean, leaving 48 patches (see the NumPy sketch after this list).
  5. Compute the solid angle Ω of each of the new sky patches (really just one solid angle per parallel band, since all patches within a band subtend the same solid angle)
  6. For each sky patch and each timestep, compute the irradiance E of a surface whose normal points at the centroid of that sky patch as E = radiance × solid angle of the patch at that timestep
  7. For every sensor in my scene, emit a ray towards the centroid of each sky patch (ignoring the half of the sky behind the surface, as well as the zenith and ground patches) and record whether it reaches the patch. If it hits another surface in the scene, for now I just terminate the ray (i.e. no bounces; all surfaces are black holes)
  8. Every sensor now knows which sky patches it sees, so I then…
  9. Create a timeseries of each sensor’s irradiance
    1. Compute the angle of incidence between every ray that hits a sky patch and the normal of the surface the sensor belongs to; the cosine of that angle gives a scaling factor to apply to the irradiance that would be received by a surface normal to that ray.
    2. With angle-of-incidence scaling factors for every ray that sees a sky patch from a given sensor, it is now easy to step through the year and accumulate irradiance per timestep per sensor: sum over the rays that hit sky patches, with each term being the product of the irradiance of a surface normal to that patch at that timestep and the corresponding angle-of-incidence scaling factor (see the solid-angle/accumulation sketch after this list).
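For reference, steps 1-2 boil down to a couple of Radiance calls; here is a minimal sketch via subprocess (the filenames are placeholders, and it assumes epw2wea and gendaymtx are on the PATH):

```python
import subprocess

# Step 1: convert the EPW weather file to WEA.
subprocess.run(["epw2wea", "weather.epw", "weather.wea"], check=True)

# Step 2: hourly sky matrix. -m 2 selects a Reinhart MF:2 subdivision;
# -O1 asks for total solar rather than visible radiation. The matrix
# is written to stdout, so capture it in a file.
with open("sky.mtx", "wb") as f:
    subprocess.run(["gendaymtx", "-m", "2", "-O1", "weather.wea"],
                   stdout=f, check=True)
```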
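The subdivide/regroup trick in step 4.2 is short enough to sketch as well; a minimal NumPy version (the array contents are made up, and it assumes the source and target layouts share the same starting azimuth):

```python
import numpy as np
from math import lcm  # Python 3.9+

def regroup_row(radiances, n_target):
    """Resample one parallel band of sky patches to n_target patches.
    Subdividing keeps each child's radiance equal to its parent's;
    regrouping takes the mean, which is solid-angle weighted because
    all patches within a band subtend the same solid angle."""
    n_src = len(radiances)
    n_fine = lcm(n_src, n_target)
    fine = np.repeat(radiances, n_fine // n_src)    # subdivide
    return fine.reshape(n_target, -1).mean(axis=1)  # regroup and average

# The example from the text: 72 patches -> 48 patches
# (subdivide each into 2, then average groups of 3).
print(regroup_row(np.linspace(0.0, 1.0, 72), 48).shape)  # (48,)
```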
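Steps 5-9 then reduce to a band solid-angle formula plus one big weighted sum; a sketch with hypothetical array names:

```python
import numpy as np

def band_solid_angle(alt_lo, alt_hi, n_patches):
    """Step 5: solid angle [sr] of one patch in a parallel band spanning
    altitudes alt_lo..alt_hi (radians), split into n_patches in azimuth:
    Omega = (2 * pi / n) * (sin(alt_hi) - sin(alt_lo))."""
    return 2.0 * np.pi / n_patches * (np.sin(alt_hi) - np.sin(alt_lo))

def sensor_irradiance(sky_radiance, patch_solid_angle, hits, cos_aoi):
    """Steps 6-9 collapsed into one matrix product.
    sky_radiance:      (n_steps, n_patches) radiance per patch per timestep
    patch_solid_angle: (n_patches,) solid angle of each patch [sr]
    hits:              (n_sensors, n_patches) bool; True where the ray from
                       the sensor to the patch centroid is unobstructed
    cos_aoi:           (n_sensors, n_patches) cosine of the angle of
                       incidence between that ray and the surface normal
    Returns an (n_steps, n_sensors) irradiance timeseries:
        E[t, s] = sum_p L[t, p] * Omega[p] * cos_aoi[s, p] * hits[s, p]"""
    weights = hits * cos_aoi * patch_solid_angle[None, :]
    return sky_radiance @ weights.T
```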

I think my approach is sound (real-time visualization of it certainly looks approximately correct), and it’s running fast, which is all I really care about at the moment, but if there are any glaring errors, I’m happy to have them pointed out! I’m new to writing a ray tracer/computing radiometric quantities, so maybe I have missed something obvious.

My Question

Anyway, the one thing I was a little unclear on is the RGB summation in step 3. I could track RGB values separately through the whole process, but that’s not necessary for what I care about, so I would prefer to combine those values at the start.

I wasn’t sure whether I should be summing those three values, taking their mean, or taking a weighted sum, in terms of the physical interpretation. In my head I would normally think of radiance as a single value of W/m²/sr per sky patch, but obviously it makes sense that it can be split into different spectral components. I’m not sure what the proper scheme would be, given that what I want to approximate is the solar gain a sensor on the side of a building would receive.

At the end of the day it doesn’t matter much to me whether I get true units, since further along the pipeline all values get normalized to 0-1 and lose their physical units anyway, but I still wanted to make sure I am not doing anything crazy by just summing them up. They probably do need some sort of weighting factors, I would think…

Hi @szvsw,

Welcome to the Radiance discourse forum. It’s possible that you are overcomplicating your script, but there are several simple approaches you could take. In Radiance, the three color channels can represent whatever you want them to, so you could run gendaymtx with the -c 1 1 1 argument to make all three color channels equal, and then ignore two of them (or take the mean, accomplishing the same thing). If you are interested in perception of visible light, you could use a weighting that matches human vision (0.265 * r + 0.670 * g + 0.065 * b). Since you’re using GPU computation, there’s no overhead to using a float3 instead of a regular float, so you could also maintain the three channels through your calculations at no cost.
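As a sketch (the array name here is hypothetical): collapsing the channels is a single dot product, and with -c 1 1 1 any weights that sum to 1 return the common channel value unchanged:

```python
import numpy as np

def combine_channels(mtx, weights=(0.265, 0.670, 0.065)):
    """Collapse an (n_patches, n_steps, 3) RGB sky matrix to a single
    value per patch and timestep. The default weights here are the
    human-vision ones; with gendaymtx -c 1 1 1 the three channels are
    equal, so any normalized weighting gives the same result."""
    return mtx @ np.asarray(weights)
```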

There’s also an open-source implementation of the Reinhart sky subdivision for GPU that you could use instead of your meridional scheme, which you can find here.

Probably! :nerd_face: But it has been a good learning process.

I had written my kernels and a bunch of data structures to parallelize over the number of elevations and the number of azimuths, assuming that the number of azimuths was the same at every elevation. Oops. I figured it would be an easier fix to just create a meridional scheme for the sky… at least it forced me to learn a bit more about sky patches!

I do realize now, from looking at the function that you linked, that it might be simpler to keep the rays as they currently are but compute the sky patch bin that their azimuth/altitude would correspond to within a Reinhart sky, instead of having a unique sky patch for each ray (something like the sketch below)… I guess I will implement that too now as a little validation check for my subdivision/regrouping.
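My understanding of that binning, as a rough sketch: the row height follows reinsrc.cal’s alpha = 90/(7*MF + 0.5), but the azimuth origin/offset and the ground-patch offset are guesses I still need to validate against gendaymtx’s actual ordering:

```python
# Tregenza base: azimuthal divisions per 12-degree altitude band.
TNAZ = (30, 30, 24, 24, 18, 12, 6)

def reinhart_bin(alt_deg: float, az_deg: float, mf: int = 2) -> int:
    """Map a ray (altitude/azimuth in degrees) to a 0-based Reinhart MF:mf
    sky-patch index, counting row by row up from the horizon; the final
    index is the zenith cap. Ignores the ground patch, and assumes the
    first patch in each row starts at azimuth 0 (to be verified)."""
    alpha = 90.0 / (7 * mf + 0.5)                       # row height, degrees
    counts = [c * mf for c in TNAZ for _ in range(mf)]  # patches per row
    row = int(alt_deg // alpha)
    if row >= len(counts):                              # inside the zenith cap
        return sum(counts)
    patch = int((az_deg % 360.0) / (360.0 / counts[row]))
    return sum(counts[:row]) + patch
```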

Ah, I think I missed the -c flag. I thought the three channels were somehow computed separately from data in the EPW/WEA files, but it sounds like there is really only a single value derived from the provided data; the -c flag then supplies the weights, which you can set as desired to correspond to the frequency response of whatever system you are interested in, etc. Since I am just interested in clustering regions of facades based on their solar exposure for thermal/energy modeling (for use in a UMI-style shoeboxer, or as part of an input vector for a CNN), it sounds like setting -c 1 1 1 and taking the first channel will work for me!

Part of my motivation for writing my own engine for the ray tracing step was the assumption that all geometry is 2.5D and starts on the ground, which I think simplifies some of the ray tracing at least somewhat.

Hi @szvsw,

For the type of computation you were thinking of, you should be aware of the -O{0|1} flag, which specifies whether gendaymtx computes visible or total solar radiation from the WEA file. The -c flag only determines color by scaling the color channels; this is useful for coloring the sky in visualizations, but less useful for total solar radiation calculations. Note that the default -c values, when combined with the human-vision weighting factors, sum to 1. Thus, there’s some benefit to using those weightings, to guard against a future case where you do use multiple channels.

Yep I caught that one!

PS: we actually met briefly when you presented in the BT Lab conference room last term (I am a student of Christoph’s). It would be great to chat if you are stopping by the lab again sometime soon, as there are lots of things I would love to pick your brain about regarding ray tracing / Radiance!

Sam