Evaluation

Hi,

triggered by Jan's and Richard's problem of evaluating glare, I was wondering how the architectural & lighting community (obviously I'm not a member of it) uses RADIANCE. I understand you simulate your buildings and assess various lighting-related factors.

a) how do you know that the results you come up with 'reflect the true values' (I assume 'true' is +/- an error)? I agree the simulation results are not completely out of order in terms of luminance, otherwise people wouldn't use RADIANCE.

b) once a building has been built, has anyone gone back inside the office they simulated and obtained measurements to compare with their simulation results?

c) what magnitude of error is acceptable for your work?

d) I've come across two opposing views on the accuracy of luminaire descriptor files provided by manufacturers. One states that these can be off quite a bit (I think I read that in the 'Rendering with Radiance' book), while other authors tout how careful and accurate their simulations are because they use manufacturer-provided luminaire descriptors.

Sorry if these questions sound rather trivial, but answers are highly appreciated.

Cheers,
Alexa

Jan Wienold wrote:

···

Hi Richard and rest of community,

we are currently working on a research project dealing with user assessments, with a special focus on glare from daylight in office spaces.
We have already tested 100 subjects at two different locations under very different conditions (facade systems, window sizes, viewing directions...).

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr. Alexa I. Ruppertsberg
Department of Optometry
University of Bradford
Bradford
BD7 1DP
UK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hi Alexa,

triggered by Jan's and Richard's problem of evaluating glare, I was
wondering how the architectural & lighting community (obviously I'm not a
member of it) uses RADIANCE.

OK, so I guess I'll go first. But I'm not standing out here all by
myself; I'd love to hear from the rest of you...

a) how do you know that the results you come up with 'reflect the true
values' (I assume 'true' is +/- an error)? I agree the simulation
results are not completely out of order in terms of luminance, otherwise
people wouldn't use RADIANCE.

Good question. Radiance itself has been the subject of many validation
studies, and has been proven to be quite capable of coming up with the
"true values" for most scenes, assuming valid, high-quality input.
There's the rub though; skies are variable, and every project brings with
it new materials -- often materials unavailable for accurate sampling at
the time of the simulation. So, often I *don't* know I'm looking at "true
values", but I do know (hope) that the values are close enough to make
evaluations with. Many times, we are evaluating several different schemes,
and when they are all simulated in Radiance with the same kind of what I
call "accuracy settings" -- you know, the myriad parameter values used for
rpict & rtrace -- I know for certain that I can say scheme A is more
(insert criterion here: bright, uniform, what have you) than scheme B.
Often this is all that is needed: for Radiance to guide us in a direction
that can be explored more fully, either with Radiance or with physical
mockups.
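
For illustration, a minimal sketch of what I mean, with one fixed set of
"accuracy settings" and two design variants (scene names and parameter
values invented, not recommendations):

    # one fixed set of "accuracy settings", two design variants
    OPTS="-ab 3 -ad 2048 -as 512 -aa .15 -ar 128"
    rpict $OPTS -vf office.vf scheme_a.oct > scheme_a.hdr
    rpict $OPTS -vf office.vf scheme_b.oct > scheme_b.hdr

Any difference between the two images is then down to the designs, not
the renderer.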

But yes, the temptation is there, to treat the numbers generated by
Radiance as THE numbers. I have to fight it all the time; I submit a
report showing 290 Lux on a plan, and people go "oh, this doesn't work, we
can't have more than 270 Lux there." That's my cue to ease into the
discussion of how a mathematical model of the sky's luminance distribution
is NOT /the sky's/ luminance distribution, etc.
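
To make that point concrete: the sky behind a typical calculation is
nothing more than an idealized CIE distribution, e.g. (date, time, and
location invented):

    # a CIE clear sky for March 21st, 10:30 solar time
    gensky 3 21 10.5 +s -a 53.8 -o 1.75 -m 0 > sky.rad
    echo "skyfunc glow sky_glow 0 0 4 1 1 1 0" >> sky.rad
    echo "sky_glow source sky 0 0 4 0 0 1 180" >> sky.rad

The sky outside the finished building will never match that formula
exactly.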

b) once a building has been built, has anyone gone back inside the
office they simulated and obtained measurements to compare with their
simulation results?

Many of the validation studies do just that. My first big project
simulated with Radiance is still under construction, but we have done
similar tests with projects simulated with Lightscape and AGI and have
been generally pleased with the outcome. Typically, the light levels are
not the same, but neither is the real space as compared to the simulation
model. But the values are all in the ballpark and the clients have been
happy. Indeed, the last big museum project I did with Lightscape at my
previous firm was astonishingly accurate; I believe the light levels on
the day my boss measured them were within 5% of the calculation. But I
also know a thing or two about luck. I don't tell clients to expect 5%
accuracy and neither should you. Barring luck, the only way to get that
close is to do a simulation with measured sky data (and take readings of
the space under that same sky that you are measuring). Right, John M.?
This of course requires a finished building, which sorta misses the point
of the simulation! But John's thesis work provides the basis for many of
us using Radiance to achieve real restful sleep at night. =8-)
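
For the record, one way to drive a Radiance sky from measured data is
gendaylit's Perez all-weather model, which accepts measured direct-normal
and diffuse-horizontal irradiances (the numbers below are invented, and
the usual skyfunc glow/source pair still applies):

    # measured sky: 610 W/m^2 direct-normal, 140 W/m^2 diffuse-horizontal
    gendaylit 6 21 11.5 -a 53.8 -o 1.75 -m 0 -W 610 140 > sky.rad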

c) what magnitude of error is acceptable for your work?

Ian Ashdown says it better than I can, in his (excellent) "Thinking
Photometrically" course notes:

"As for daylighting calculations, it is likely that Jongewaard (1993) is
correct – the results are only as accurate as the accuracy of the input
data. Done with care, it should be possible to obtain ±20 percent accuracy
in the photometric predictions. However, this requires detailed knowledge
and accurate modeling of both the indoor and outdoor environments. If this
cannot be done, it may be advisable to walk softly and carry a calibrated
photometer."

d) I've come across two opposing views on the accuracy of luminaire
descriptor files provided by manufacturers. One states that these can be
off quite a bit (I think I read that in the 'Rendering with Radiance'
book), while other authors tout how careful and accurate their simulations
are because they use manufacturer-provided luminaire descriptors.

Photometry data from the manufacturers is a far better way to describe the
performance of a luminaire than most of the built-in tools in simulation
programs. But yes, there are still problems -- primarily, the issue of
far-field photometry. Linear cove fixtures are treated as point sources
when photometered, and misuse of these IES files in a simulation can lead
to very inaccurate results. Of course in Radiance you can reduce the -ds
value to at least help the situation, by taking that "point" distribution
and sort-of arraying it along the fixture's axis (a smaller -ds means
finer source subdivision). As long as the distribution is the same along
the length of the luminaire, and your -ds is suitably fine, you can get
good results this way with manufacturer-supplied data. The other big
lighting simulation packages like AGI & Lumen Micro (and dear departed
Lightscape) also allow you to do this, in their own ways.
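
An illustrative workflow (file names hypothetical; the right -ds value is
scene-dependent):

    # convert the manufacturer's IES data, then render with fine
    # direct-source subdivision so the fixture isn't one big "point"
    ies2rad -t default -o cove cove.ies    # writes cove.rad and cove.dat
    oconv room.rad cove.rad > scene.oct
    rpict -ds .02 -dj .7 -vf view.vf scene.oct > scene.hdr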

But sometimes the boast of accuracy, made simply because
manufacturer-supplied photometry is being used, should be a warning
sign... I recently received a mailer from one of the makers of a popular
lighting simulation program, featuring a rendering that was supposed to
impress upon me how amazing and accurate the software is. The thing is,
the linear
uplight pendants in the image were casting this ridiculous round spot on
the ceiling, bearing no resemblance to the linear nature of the fixture --
in fact, it looked a heck of a lot like the operator knew nothing about
far-field photometry and the workarounds one must use when using
photometry files based on that method. And this was the featured
rendering for the product's promotional literature -- worse, the rendering
was created by one of the company's in-house tech support/training people.
(!)

I think there is a lot of naivete in the industry -- once you get beyond
this group, which is obviously much more concerned with accuracy -- when
it comes to these photometry files. Many designers just download the
files, plug them into their programs, and hit the "do my job" button. In
fact, these files are really just ASCII dumps of a test report: a test
that used a certain lamp, with a certain lumen depreciation factor, which
may be different from the one in your spec; other light loss factors need
to be considered, and the orientation may not even be what you expected.
So I guess it just goes back to garbage in, garbage out. Those
manufacturer-supplied files are only as good as the person integrating
them into the simulation.
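
For what it's worth, you can at least fold your own light loss factors
into the conversion instead of taking the test-report lumens at face
value; a sketch (the 0.72 multiplier is invented, e.g. 0.85 lamp lumen
depreciation x 0.85 dirt depreciation):

    ies2rad -m 0.72 -t default -o troffer troffer.ies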

- Rob Guglielmetti
www.rumblestrip.org

Hi Rob and Alexa,

I will follow on Rob's excellent comments with a few thoughts of my own.

As Rob has indicated, the "accuracy" of a given simulation is highly dependent on the accuracy of the input data (geometry, materials, and lighting). But I think that accuracy also has to be evaluated in terms of project scope/objectives. A simulation project early in the design process will necessarily have less resolved information to work with (i.e., decisions have not been made on a lot of things), whereas a simulation later in the design process will potentially have a different degree of accuracy, reflecting the design decisions made by that point.

Another thing to consider is what is being compared against. While the real world is what we ultimately try to measure against, studying design scenarios is another way to use a simulation tool such as Radiance. In comparative studies, that is, one design scenario against another, most things can be held constant while a few things vary (geometry, materials, or lighting). The question I would be inclined to ask in this case is whether we can make reasonable judgments about design scenarios even though we may not be using the most accurate data (for a variety of reasons); for example, if we are using a simple sky model (e.g., gensky), can we still make reasonable design judgments? I would suggest that even with unknowns or limited data, performing a Radiance-based simulation is going to be far more useful from a design evaluation standpoint than using one of the multitude of shrink-wrap renderers out on the market.
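
A sketch of such a comparative run, holding the sky and rendering options
constant (all file names and the sample point below are hypothetical):

    # one CIE overcast sky shared by both schemes
    gensky 6 21 12 -c > sky.rad
    echo "skyfunc glow sky_glow 0 0 4 1 1 1 0" >> sky.rad
    echo "sky_glow source sky 0 0 4 0 0 1 180" >> sky.rad
    oconv sky.rad room.rad scheme_a.rad > a.oct
    oconv sky.rad room.rad scheme_b.rad > b.oct
    # desk illuminance at (2, 1, 0.75), sensor facing up, in lux
    echo 2 1 .75 0 0 1 | rtrace -h -w -I -ab 3 -ad 1024 a.oct \
      | rcalc -e '$1=179*(.265*$1+.670*$2+.065*$3)'

Running the same rtrace line against b.oct gives a number that can be
compared with some confidence, even if neither is the "true" value.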

Best,

-Jack de Valpine


For me, the most important determinant of accuracy (besides accurate input) is the number of interreflections, the ambient density, and extensive use of mkillum. Five ambient bounces are pretty much a must; this has been known ever since people started writing lighting programs. -ad 4096 can't hurt, either.
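
Roughly, those settings look like this (file names hypothetical, and the
mkillum options are only an example):

    rpict -ab 5 -ad 4096 -aa .1 -vf v.vf scene.oct > scene.hdr
    # mkillum precomputes the output distribution of selected surfaces
    # (e.g. windows) so they behave as well-sampled secondary sources
    mkillum -ab 3 -ad 1024 scene.oct < windows.rad > windows_illum.rad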

Martin Moeck


Dear Alexa,

..., I was wondering how the architectural & lighting community (obviously
I'm not a member of it) uses RADIANCE.

Within the framework of IEA Task 31, "Daylighting Buildings in the 21st
Century", NRC carried out an online survey last year on "the current use
of daylight simulations during building design". You can download a copy
of the report from http://irc.nrc-cnrc.gc.ca/ie/light/survey. Key findings
from the abstract are:

"... Survey participants worked predominantly on offices and schools...
Tools' complexity and insufficient documentation were identified as
weaknesses of existing programs. Self-training was the most common training
method. Tool usage was significantly higher during design development than
during schematic design. Most survey participants used daylighting software
for parameter studies and presented the results to their clients as a basis
for design decisions. While daylight factor and interior illuminances were
the most common simulation outputs, shading type and control were the most
common design aspects influenced by daylighting analysis... While
participants used a total of 42 different daylight simulation programs, over
50% of program selections were for tools that use RADIANCE..." (In case
you need a more detailed version of the report, let me know.)

Ian Ashdown says it better than I can, in his (excellent) "Thinking

Photometrically" coursenotes: "As for daylighting calculations, it is likely
that Jongewaard (1993) is correct - the results are only as accurate as the
accuracy of the input data. Done with care, it should be possible to obtain
±20 percent accuracy in the photometric predictions. However, this requires
detailed knowledge and accurate modeling of both the indoor and outdoor
environments. If this cannot be done, it may be advisable to walk softly and
carry a calibrated photometer."

I completely agree with this estimate. It reflects the results from John's
validation study on a façade with clear glazing and our validation study
for a façade with venetian blinds. Finally, I am doing a validation study
right now for a translucent façade using "trans" and "transdata" (with a
lot of help from maître Greg), and again I find comparable numbers.
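
For reference, a "trans" primitive is a diffusely transmitting material;
one might look like this (the seven real arguments -- R G B, specularity,
roughness, transmissivity, transmitted specularity -- are invented values
here, not the ones from our study):

    void trans diffusing_panel
    0
    0
    7 .75 .75 .75 0 0 .7 0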

Christoph

Hi Rob, Jack, Martin and Christoph,

thanks very much for your replies. Together with Christoph's survey I now have a better understanding of the usage. I would term this more a 'relative' use: how do two (or more) design solutions compare to each other? E.g., A will give you more light on the desk than B.

The other emerging topic seems to me to be that of 'nice' pictures for the clients. Once you have a tool with which you can simulate the building and produce a picture, there is the danger that clients over-interpret the image ('this is how it's going to look'). Or, put the other way, the image they see, either on a CRT or in a printed version, does not truly reflect what the building is probably going to look like, because of the limited dynamic range of the medium (CRT or paper).
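
(As an aside, Radiance's pcond can at least compress the dynamic range in
a perceptually motivated way before a picture goes to a client; file names
here are hypothetical:

    pcond -h atrium.hdr > atrium_disp.hdr    # human-visibility operator
    ra_tiff atrium_disp.hdr atrium.tif       # convert for print

The result is still not 'how it will look', but the compression is at
least principled.)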

Good question. Radiance itself has been the subject of many validation
studies,

May I challenge this? The validation studies I am aware of are (if anyone knows of more, please tell me):

Grynberg, A., “Validation of Radiance”, LBID 1575, LBL Technical Information Department, Lawrence Berkeley National Laboratory, Berkeley, California, July 1989.
This technical report proved to be inaccessible for me. If anyone has a copy, I would be more than happy to read it.

Khodulev, A. and E. Kopylov, “Physically accurate lighting simulation in computer graphics software”, 6th International Conference on Computer Graphics and Visualization, St. Petersburg, Russia, July 1-5, 1996.
http://www.keldysh.ru/pages/cgraph/articles/pals/
This is a website and describes a white-box scenario, i.e., a scenario for which you can actually calculate the solution analytically (a sketch of such a check follows this list). But how much is this a valid scenario for complex illumination situations?
I have my own opinion about website-only references.

Houser, K.W., D.K. Tiller, and I.C. Pasini, “Toward the accuracy of lighting simulations in physically based computer graphics software”, Journal of the Illuminating Engineering Society, 28(1), Winter 1999, 117-129.
The conclusion is that RADIANCE does not do what it says on the package (I have my own opinion about this paper).

Ubbelohde, M.S. and Humann, C. “Comparative evaluation of four daylighting software programs”, 1998 ACEEE Summer Study on Energy Efficiency in Buildings Proceedings, American Council for an Energy-Efficient Economy, 1998.
I asked the author to send me a copy of the paper about 18 months ago and I am still waiting.

Mardaljevic, J., “Validation of a lighting simulation program under real sky conditions”, Lighting Research and Technology, 27(4), 1995, 181-188.
That is what I call validation: comparison of simulation output and measurements from the real environment.
In my personal opinion there is no validation study apart from John Mardaljevic's work.

Christoph, I saw your 2001 paper with Walkenhorst in the reference section of the survey. Is that a validation?

Rushmeier, Ward, Piatko, Sanders, and Rust, “Comparing Real and Synthetic Images: Some Ideas About Metrics”, 6th Eurographics Workshop on Rendering, 1995.
This paper appears on the Radiance site, and may I cite from it (p. 3, section 1.2): 'There are clearly very high levels of uncertainty in the measurements made in this experiment.' Now, that paper didn't set out to be a validation of Radiance but tried to do something else, so in a way it's not an appropriate reference for a validation study; but it could have been, if the measurements had been taken more carefully and compared to the real room.
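
(To show the kind of white-box check I mean, here is a sketch in Radiance
itself; the geometry and numbers are invented. Inside a closed diffuse
sphere of reflectance 0.5, each bounce contributes a further factor of
0.5, so the converged wall radiance should approach 1/(1-0.5) = 2x the
direct-only result:

    echo "void light bright 0 0 3 100 100 100" > furnace.rad
    echo "bright sphere ball 0 0 4 0 0 0 .1" >> furnace.rad
    echo "void plastic grey50 0 0 5 .5 .5 .5 0 0" >> furnace.rad
    echo "grey50 sphere wall 0 0 4 0 0 0 1" >> furnace.rad
    oconv furnace.rad > furnace.oct
    # direct only, then many bounces (slow, pure Monte Carlo):
    echo 0 0 .2 0 0 1 | rtrace -h -w furnace.oct
    echo 0 0 .2 0 0 1 | rtrace -h -w -ab 8 -ad 4096 -aa 0 furnace.oct

Agreement with the analytic ratio is necessary, but of course not
sufficient, for validity in complex scenes.)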

Can you please tell me whose work I'm missing here? Or do we mean different things when we talk about 'validation'?
I hope I haven't upset anyone, because that's not what I intended to do. I am just in pursuit of evidence, and if it is out there, then please tell me.

c) what magnitude of error is acceptable for your work?

I understand that the 'unpredictability' of the sky is basically the biggest problem.

Thanks again,

Alexa


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dr. Alexa I. Ruppertsberg
Department of Optometry
University of Bradford
Bradford
BD7 1DP
UK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A while ago, I compared Radiance with AGI32 for a very simple room with a PAR lamp on the ceiling. The results are at

www.personal.psu.edu/mum13/comparison_Radiance_AGI32.pdf
and
www.personal.psu.edu/mum13/agi_rad.pdf

The errors for 1 and 2 ambient bounces, as shown in www.personal.psu.edu/mum13/agi_rad.pdf, are somewhat significant and will be worse for complex environments.
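
One can get a feel for this error term by watching a workplane value
converge as the bounce count rises (octree and sample point hypothetical):

    for ab in 0 1 2 3 4 5; do
        echo "ab=$ab:"
        echo 2 1 .75 0 0 1 | rtrace -h -w -I -ab $ab -ad 2048 room.oct
    done

The shrinking differences between successive bounce counts show how much
interreflected light is still missing.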

It is best to do validations where the results are known a priori. Otherwise you have too many unknowns: voltage fluctuations, contractor changes and errors, reflectances that are off, locations that are off, material BRDFs that are not known, etc.
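
A minimal a-priori case (all numbers invented): a small "light" sphere
must obey the inverse square law, so the answer is known before the ray
tracer runs.

    echo "void light bulb 0 0 3 1000 1000 1000" > pt.rad
    echo "bulb sphere src 0 0 4 0 0 2 .05" >> pt.rad
    oconv pt.rad > pt.oct
    # irradiance at the origin, sensor normal facing the source, in lux
    echo 0 0 0 0 0 1 | rtrace -h -w -I pt.oct \
      | rcalc -e '$1=179*(.265*$1+.670*$2+.065*$3)'
    # expect about 179 * (1000*pi*.05^2) / 2^2 = 351 lux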

Martin Moeck


Alexa I. Ruppertsberg wrote:

I hope I haven't upset anyone, because that's not what I intended to do. I am just in pursuit of evidence, and if it is out there, then please tell me.

No problem, nothing's easier than that :-)

You've already mentioned some quite interesting keywords: simulation, relativity, interpretation, and, of course, 'nice images'. One could add the term 'abstraction'. I'm sure you've already thought about that, too. Probably there's not one piece of evidence, but many, and each one depends on the situation and the correct mixture of the above terms. (Isn't that concrete?)

I could write more about those nice images, but this is not an artists' forum...

One generally interesting point about Radiance is that it is often applied in some sort of 'interface' position, meaning where science/technology meets architecture, ecology, ergonomics, PR & marketing, design, and all the rest of everyday life. It's important to be aware when a border is crossed, and not to stretch concepts from one world too far into the other. I can understand your concern that this point might get lost sometimes, on the consultant's side as well as on the client's side. One may apply the well-known cynical saying from politics: every client gets the consultant he deserves.

-cb

Hi Alexa,

I don't really feel I can add that much to this discussion, as I think validation of software by its author is also something to view with suspicion, but I agree with you in general that good validation studies are very difficult to find. I just wanted to say something about the following reference.

Grynberg, A., “Validation of Radiance”, LBID 1575, LBL Technical Information Department, Lawrence Berkeley National Laboratory, Berkeley, California, July 1989.
This technical report proved to be inaccessible for me. If anyone has a copy, I would be more than happy to read it.

I have a copy of this report, which was prepared by Anat Grynberg during her second summer internship at LBNL. However, I do not think it is particularly useful as a numerical validation. It focuses mostly on a "qualitative study" to demonstrate that Radiance can model scenes with a realistic level of detail, and much of the work surrounded the creation of the well-known LBNL conference room model. It also covers the initial design of the imaging gonioreflectometer, but does not contain much in the way of quantitative comparisons.

Anat did perform a quantitative comparison based on a previous study of Superlite, comparing it to measurements in a skydome the previous summer, but this work was never published. It is out of date at this point anyway, as the version of Radiance she was using lacked some of the better techniques it has now for handling large area sources.

However, if you are interested in seeing these early studies, I can dig them up and copy them with some effort. The color images of course will not come out, but you can get a flavor, at least. The more recent studies are much better. These were undertaken mainly because nothing else existed at the time.

-Greg