Luminance normalization in gendaylit

Hi,

To better understand gendaylit I had a look at the code, but there is something I don't understand:

double integ_lv(float *lv,float *theta)
{
	int i;
	double buffer=0.0;
	
	for (i=0;i<145;i++)
	{
		buffer += (*(lv+i))*cos(radians(*(theta+i)));
	}
			
	return buffer*(2.*M_PI/145.);
}
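If I read it correctly, this gives every patch the same solid angle of 2*PI/145 sr, i.e. it approximates

\int_{\mathrm{sky}} L_v \cos(\theta)\, \mathrm{d}\Omega \;\approx\; \frac{2\pi}{145} \sum_{i=1}^{145} L_{v,i} \cos(\theta_i)

where \theta_i is the zenith angle of patch i.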

Why do you assume that the surface area of every Tregenza sky patch is the same?

When I calculate the surface area

\Omega = \int\limits_{\varphi_1}^{\varphi_2} \int\limits_{\gamma_1}^{\gamma_2} \sin(\gamma)\, \mathrm{d}\gamma\, \mathrm{d}\varphi
(the surface area equals the solid angle for r = 1) of every patch, I get the following result:
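A minimal C sketch of that calculation (not gendaylit code; it assumes the standard 145-patch layout of 30, 30, 24, 24, 18, 12 and 6 patches per 12-degree altitude band plus a single zenith cap from 84 to 90 degrees):

/* not gendaylit code: per-band solid angle of the 145 Tregenza patches,
   compared with the equal-area value 2*PI/145 assumed by integ_lv() */
#include <stdio.h>
#include <math.h>

int main(void)
{
	/* patches per 12-degree altitude band, plus one zenith cap (84..90 deg) */
	const int npatch[8] = {30, 30, 24, 24, 18, 12, 6, 1};
	const double equal = 2.*M_PI/145.;
	int row;

	for (row = 0; row < 8; row++)
	{
		double a1 = row*12.*M_PI/180.;	/* lower altitude of the band */
		double a2 = (row == 7) ? M_PI/2. : (row+1)*12.*M_PI/180.;
		/* Omega = delta_phi * (sin(a2) - sin(a1)), with delta_phi = 2*PI/npatch */
		double omega = (2.*M_PI/npatch[row]) * (sin(a2) - sin(a1));
		printf("band %d: %.4f sr per patch (2*PI/145 = %.4f sr)\n",
			row, omega, equal);
	}
	return 0;
}

The per-patch solid angle then varies from about 0.034 sr for the zenith cap up to about 0.047 sr, instead of being a constant 2*PI/145 ≈ 0.043 sr.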

Greetings Philip


As you show, the solid angle is not the same for every Tregenza patch. This routine seems to introduce a slight bias towards smaller sky patches, though I don’t imagine the error amounts to much. Maybe @Jan_Wienold or @Wendelin_Sprenger would care to comment.

Best,
-Greg

Thanks @Greg_Ward for the fast reply. I have found something else that I do not understand. When calculating gamma you use the following function:

/* calculation of the angles dzeta and gamma */
void theta_phi_to_dzeta_gamma(double theta,double phi,double *dzeta,double *gamma, double Z)
{
	*dzeta = theta; /* dzeta = phi */
	if ( (cos(Z)*cos(theta)+sin(Z)*sin(theta)*cos(phi)) > 1 && (cos(Z)*cos(theta)+sin(Z)*sin(theta)*cos(phi) < 1.1 ) )
		*gamma = 0;
	else if ( (cos(Z)*cos(theta)+sin(Z)*sin(theta)*cos(phi)) > 1.1 )
	{
		printf("error in calculation of gamma (angle between point and sun");
		exit(1);
	}
	else
		*gamma = acos(cos(Z)*cos(theta)+sin(Z)*sin(theta)*cos(phi));
}

I don't understand why you use the patch azimuth (phi) when the literature (the CIE standard, the Perez all-weather paper) says that the

angle between a sky element and the position of the sun

should be used. So in my understanding it should be something like this:

*gamma = acos(cos(Z)*cos(theta)+sin(Z)*sin(theta)*cos(|azimuth_sun - phi|));

In my understanding, working only with phi should mean that the maximum irradiance, when ray tracing for example a half sphere, ends up at the wrong azimuth angle.


But somehow you get the maximum at the right azimuth angle. Do you rotate the sky dome somewhere, or is there a different trick?

Greetings Philip

Hi Philip,

This is likely a trigonometric equivalence, but I don’t know the details. Again, I am hoping the authors of gendaylit will respond.

Cheers,
-Greg

Hey,

The answer to my question is that gendaylit.c only calculates the normalization, so the azimuth_sun orientation does not matter there. In perezlum.cal the luminance is calculated for every ray direction, and there the correct angle is used.
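One way to see it: the normalization integrates over the full azimuth circle, and shifting the azimuth origin by the sun azimuth does not change such an integral,

\int_0^{2\pi} f\big(\cos(\varphi - \varphi_{\mathrm{sun}})\big)\, \mathrm{d}\varphi \;=\; \int_0^{2\pi} f\big(\cos(\varphi)\big)\, \mathrm{d}\varphi

at least in the continuous limit; the discrete sum over the 145 patches only approximates this.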

But maybe the correct angle and the correct surface area should be used for the normalization in gendaylit too; even if the deviation is small, the change would not be complex.

However, I also don't understand why in perezlum.cal the sky brightness is not the Perez luminance (called intersky in perezlum.cal), but a weighted average of intersky and the ground brightness. Isn't the ground brightness taken into account automatically, through reflections during ray tracing?

Greetings Philip

Ah – this question I actually do know the answer to! The “wmean” interpolation is borrowed from skybright.cal to soften the edge between the sky and the ground. It has little to no effect well above and below the horizon, but at the horizon makes a smooth blend between the ground level and calculated sky intensity.

Without this smoothing function, the horizon is a sharp line, which can show up in certain simulation conditions and is extremely unnatural. It would have to be a perfectly clear day on the ocean for there to be such a sharp distinction between sky and ground (or water in this example).
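As a rough sketch of the idea (not the actual .cal code; I'm assuming the same wmean() form with (Dz+1.01)^10 and (Dz+1.01)^-10 weights that skybright.cal uses):

/* sketch of the horizon blend: a weighted mean that follows the sky value
   above the horizon and the ground brightness below it */
#include <stdio.h>
#include <math.h>

/* wmean(a, x, b, y) = (a*x + b*y)/(a + b) */
static double wmean(double a, double x, double b, double y)
{
	return (a*x + b*y)/(a + b);
}

int main(void)
{
	double intersky = 100.;	/* example sky value (arbitrary units) */
	double ground = 20.;	/* example ground brightness */
	double Dz;

	/* Dz is the z component of the ray direction: +1 zenith, 0 horizon, -1 nadir */
	for (Dz = 1.; Dz >= -1.001; Dz -= 0.25)
		printf("Dz = %+5.2f  ->  %7.2f\n", Dz,
			wmean(pow(Dz+1.01, 10.), intersky, pow(Dz+1.01, -10.), ground));
	return 0;
}

Well above the horizon the first weight dominates and you get the sky value, well below it the second dominates and you get the ground value, and right at the horizon the two are blended smoothly.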

Cheers,
-Greg

Thanks for the answer @Greg_Ward. In the last few weeks I have played around with Radiance a bit, and today I found something I don't quite get. When using the following sky file:

# gendaylit -ang 43.602959895825734 13.827349644205526 -W 233.31 330.0 -O 1 -s
# Local solar time: 0.00
# Solar altitude and azimuth: 43.6 13.8

void brightfunc skyfunc
2 skybright perezlum.cal
0
10 6.061e+01 1.789e+01 -0.997167 -1.066106 10.927049 -3.159657 0.103425 -0.173066 -0.703151 0.689657 

skyfunc glow sky_glow
0
0
4 1 1 1 0

sky_glow source sky
0
0
4 0 0 1 180

and the following detector file:

0 0 1 0.0 0.0 1.0

I expected the irradiance measured by the detector to be 330 W/m^2, but the detector measures around 324 W/m^2. I used the following rtrace command:

rtrace -I -ad 1000000 -as 500000 -aa 0 -ab 2 '!oconv sky.rad' < detector.dat > output.dat
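The three RGB values that rtrace prints can be combined into a single W/m^2 figure with Radiance's standard channel weights, something like

rtrace -I -h -ad 1000000 -as 500000 -aa 0 -ab 2 '!oconv sky.rad' < detector.dat | rcalc -e '$1=0.265*$1+0.670*$2+0.065*$3'

where -h just suppresses the header so that rcalc only sees the numbers.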

Is the deviation a result of the approximations we discussed, or did I do something wrong?

Greetings Philip

This roughly 2% error could be introduced by the Perez sky model or the way it’s calculated, or it could be from the rtrace random sampling. I would try the following command instead:

rtrace -I -ad 100000 -lw 1e-6 -ab 1 -aa 0 '!oconv sky.rad' < detector.dat

Without the “-lw” setting, your number of “-ad” samples is curtailed to 10,000 rays with the default rtrace settings (which are available using “rtrace -defaults”).

Cheers,
-Greg