A modern comparison of Radiance and other rendering engines

Good morning all,

Apologies if a thread like this already existed, but after searching the
archives I could not find it.

I am looking for a modern-day explanation of what Radiance offers that other
popular rendering engines (RenderMan, Cycles, V-Ray, etc.) do not. These all
seem to use ray-tracing, solve the global illumination problem, and use
physical units (albeit sometimes not very obviously, and sometimes requiring a
conversion factor); I see hints of illumination falsecolours out there, and
they can output unclipped HDRIs as final results. As you can see, at first
glance to an inexperienced person it really does seem that these other
rendering engines are also physically based and can give the same
scientifically accurate output that Radiance can (though perhaps not as easily
parsable as Radiance results), assuming the user doesn't purposely use biased
rendering techniques.

There seem to be features, such as human-eye image calibration or falsecolours,
that are rarer in other engines, but I am slowly seeing mentions of these
features emerging there too.

I also stumbled upon this comparison website:

https://www.janwalter.org/RadianceVsYouNameIt/radiance_vs_younameit.html

On that website it is not obvious how the material and light properties were
converted for each engine, but in short, the graphical outputs look the same.

I hope that someone on this list can help explain to me in simpler terms what I
am overlooking :)


--
Dion Moult

Hi,

there are even more renderers, e.g. pbrt/luxrender, mitsuba, ....

I think the main difference is that the difference is not known. In the Radiance universe, a lot of work is spent on testing the validity of the models and methods. This allows professionals to rely on the software, as long as they stay within the boundaries of those validations. There is a lot of other software capable of solving global illumination, but few people will rely on it for quantitative studies before it has been validated.

Another really important reason that people stick with Radiance, regardless of what exists "out there", is that for daylight simulation a good renderer does not help you without the ecosystem of tools that makes it a useful simulation environment. To make use of climate data, perform annual simulations, model the often exotic properties of fenestration, and analyze the results, you need more than the ray-tracer.
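
Just to make "ecosystem" concrete, here is a minimal sketch of the climate-data side of such a workflow, driven from Python. It assumes the Radiance tools epw2wea, gendaymtx and dctimestep are installed, that a daylight-coefficient matrix dc.mtx has already been computed (e.g. with rfluxmtx or rcontrib), and that all file names are placeholders:

    import subprocess

    # 1. Climate data: convert an EPW weather file to Radiance's WEA format.
    subprocess.run(["epw2wea", "site.epw", "site.wea"], check=True)

    # 2. Annual skies: one discretised sky per timestep, collected in a matrix.
    with open("sky.smx", "wb") as smx:
        subprocess.run(["gendaymtx", "-m", "1", "site.wea"], stdout=smx, check=True)

    # 3. Annual results: daylight coefficients times sky matrix gives one
    #    irradiance triplet per sensor point and timestep.
    with open("annual.dat", "wb") as out:
        subprocess.run(["dctimestep", "dc.mtx", "sky.smx"], stdout=out, check=True)

Fenestration models (BSDFs), the coefficient calculation itself, and the post-processing of annual.dat all add further steps on top of the renderer, which is exactly my point.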

Finally, what may appear to be an advantage (the quick introduction of new features and state-of-the-art rendering algorithms) can become a serious drawback. The "modular" renderers out there, e.g. Mitsuba, allow you to combine different modules. Other, often commercial, renderers may bring new features with every release (and may not even tell you if something changed). However, if you need to redo all your validations for every combination of such modules, or for any change in the implementation, you hardly ever reach the point where you can make use of the software.

So while lots of codes exist to trace light, the developers' motivation is usually not to ensure valid quantitative simulation for building performance analysis. In fact, most software in this field is based on tuning and adapting good old radiosity.

https://www.janwalter.org/RadianceVsYouNameIt/radiance_vs_younameit.html

This is an impressive coverage of rendering engines! It just lacks the numbers. So while images may look similar, we do not know about quantitative agreement.

Cheers,
Lars

Hello,

I agree with Lars in everything, but I also want to add some things:

   1. I believe that, for scientific use of daylight simulations, you need
   extensive numerical validation. Some renderers are not focused on that, but
   only on speed and generating "photorealistic images", which are different
   from "photometrically correct images"... Does that make sense? I think this
   is the most important difference between Radiance and other renderers out
   there.
   2. In 2012 I asked this in the PBRT Google group, and they advised me to
   keep using Radiance (LINK
   <https://groups.google.com/forum/#!topic/pbrt/xQlfQnPYPB0>)
   3. The irradiance cache seems to be pretty unique to Radiance...?

Best,

Germán


It's a great question, and I think Lars and Germán summarized the differences pretty well. The bottom line is that you can take almost any renderer and add the needed features and capabilities to make it physically accurate, but there is little economic motivation to do so. The main, important difference between Radiance and other tools is the dedication of this community to keeping the focus on accuracy every step of the way.

This discussion reminds me of an interesting study performed by Christoph Reinhart and Pierre-Félix Breton, comparing Daysim to Autodesk 3ds Max. They found that 3ds could in fact simulate the simple side-lighting case they were testing, but figuring out how to set the materials correctly and interpreting the output was tricky. (The reference is linked below.)

  C. F. Reinhart and P.-F. Breton, "Experimental Validation of Autodesk® 3ds Max® Design 2009 and Daysim 3.0," LEUKOS 6:1, 2009.
  http://www.ibpsa.org/proceedings/BS2009/BS09_1514_1521.pdf

You might read this report and decide, "Hey, I'll go with the big commercial software, since it probably has a smoother workflow," and you could be right. Except, the next release comes out and the features you were relying on are no longer supported, or just don't work as they used to. This has happened many, many times over the past 30 years or so that I've been paying attention.

The most tragic trajectory was the one followed by Lightscape, which was initially intended as a head-to-head competitor to Radiance, and was quite good at it, taking more of a radiosity approach with some ray-tracing add-ons. It was eventually bought by Autodesk, and stayed true to its purpose for maybe 5 years before things like photometric files lost support, followed by numeric output, followed by every useful feature for lighting simulation. (I can't say for sure, but I bet the renderings got nicer-looking in the same period.) The point is, although there is a small community of people who are keen on accurate results out of their renderer, the money is in good-looking output.

In contrast, Radiance has been plodding along a lonely but very constrained path over its lifetime, where features are added only if they improve accuracy, or add capabilities without compromising accuracy. There's no money in it, but the reward is that we have at least one tool to which all the others can be compared when you really need to know, "Is this the right result, or does it just look right?"

Cheers,
-Greg

P.S. To Germán - irradiance caching was originally unique to Radiance, but a lot of ray-tracers have picked it up since then.

P.P.S. I should mention that AGi32 is the one commercial tool that has consistently stayed true to the cause of lighting simulation over the years. When we get together, Ian Ashdown and I like to joke about what would happen to lighting software if a meteor were to choose that moment to take both of us out.


Numeric output is indeed a rare feature of commercial renderers, coupled with the UNIX-y way of processing it in pipes by chaining modules. And exactly this modular nature distinguishes RADIANCE, as it is really a suite of independent tools rather than one monolithic chunk of software; the charm/challenge lies in putting it all together for the task at hand. By contrast, modular commercial renderers use plug-ins, which generally cannot be used independently. You can of course argue that such software is better integrated than RADIANCE, and that's a definite plus in some application contexts.
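
To make that concrete, here is a minimal sketch of such a chain, driven from Python rather than a shell pipe. It assumes rtrace and rcalc are on the PATH, that an octree "scene.oct" and a sensor file "points.txt" already exist (both names are placeholders), and the rendering parameters are purely illustrative:

    import subprocess

    # Radiance's 179 lm/W white-light efficacy and RGB weights turn the
    # irradiance triplet from rtrace directly into illuminance in lux.
    LUX = "$1=179*(0.265*$1+0.670*$2+0.065*$3)"

    with open("points.txt") as pts:
        rtrace = subprocess.Popen(
            ["rtrace", "-h", "-I+", "-ab", "2", "-ov", "scene.oct"],
            stdin=pts, stdout=subprocess.PIPE)
        rcalc = subprocess.Popen(
            ["rcalc", "-e", LUX],
            stdin=rtrace.stdout, stdout=subprocess.PIPE, text=True)
        rtrace.stdout.close()  # let rtrace see a broken pipe if rcalc exits early
        illuminance = rcalc.communicate()[0].split()

    print(illuminance[:5])  # one number per sensor point, no image involved

The point being: the numbers come straight out, and every link in the chain is a standalone tool that you can swap, inspect or script around.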

A final point that hasn't been brought up is portability. Commercial software may be portable to some degree but is generally optimised for its target platform -- usually Windows. RADIANCE's code, by contrast, uses lowest-common-denominator functionality which you'll find on even the most rudimentary platform. While this precludes the use of more modern programming paradigms (yeah, I've b*tched about the lack of OOP), it also ensures it will probably run unmodified on even the most basic or arcane UNIX-y thing, should you choose to do so. I used to compile RADIANCE on SunOS, IRIX and HP-UX before I even got my first Linux box, and it'll probably still compile there today, however useful/less that may be. This may be a moot point to those who've never seen the code, but it reinforces Greg's argument for constancy as opposed to radical changes on each release of a commercial package.

My 2¢ worth...

--Roland

Hi,

nice that someone still stumbles upon my images and render comparisons ;)

> I also stumbled upon this comparison website:
> https://www.janwalter.org/RadianceVsYouNameIt/radiance_vs_younameit.html

Comparing renderers is a really time-consuming task, and each renderer changes
over time, so a picture rendered today might look different in half a year's
time. And to really be able to compare those renderers with some confidence you
must know about all their parameters etc. It is a task I like to spend my time
on, but even after all those years of doing it, I don't want to publish too
many details; I'd rather make people download some files, install the renderers
and try for themselves.

But here are some thoughts:

- Radiance had a huge impact on other renderers (and is the reason we have
  HDR and OpenEXR). That's why I somewhat ironically called that comparison
  Radiance vs. YouNameIt. Some people got all the fame, but I and some others
  are old enough to remember where things came from ...
- Other things slowly made it into other renderers: sun & sky simulations,
  for example. Suddenly, with HDR, you could end up with over- or underexposed
  images -> exposure control -> camera controls/settings which mimic real
  cameras, etc. In the film world you had LUTs; now everybody somehow/somewhat
  deals with color correction. All of this makes it hard to end up with a
  "similar" image.
- Sure, other renderers are more user friendly, and there is/was some money to
  be made. I would say that integration into a DCC tool is sometimes more
  important for the success of a renderer than the quality of the resulting
  images. For my tests I was using Blender (most of the time):
  https://bitbucket.org/wahn/blender-add-ons (io_scene_multi contains Python
  scripts for Arnold, Indigo, Luxrender/PBRT, mental ray, Maxwell, Radiance,
  and RenderMan-compliant renderers). None of this is production ready; I only
  add the things I personally need to do the next step and render the next
  comparison images ... and Blender has Cycles now, which changes a lot:
  https://www.cycles-renderer.org/
- I started to collect Blender scenes (and scene descriptions for various
  renderers) here: https://github.com/wahn/export_multi ... as you may notice,
  some of those scenes come from Radiance. I had to write an importer first,
  parse Radiance files, store them in some renderer-independent format which
  does not lose the provided information, bring that into Blender, and find
  tricks to e.g. keep primitives like spheres as such in case the renderer in
  question can handle them directly, use some heuristics to mimic material
  settings, etc. (a toy sketch of that kind of material mapping follows after
  this list).
- After a while the git repository got too big, the files too large, the
  scenes/textures too big/many ... So now I provide download links for scenes
  in various formats: https://www.janwalter.org/download/ All of them
  originate from a Blender scene, but I update them from time to time, and all
  of this is meant to make it easier for somebody else to just download the
  files for a particular renderer without having to deal with Blender etc.
- Finally, I started converting PBRT's C++ code to Rust
  (https://www.rust-lang.org/en-US/) as a learning experience. Maybe someone
  wants to do that for Radiance one day. Rust seems to be a fun language with
  safe multi-threading etc. Here is the current state (and a bit of history
  about last year's development):
  https://www.janwalter.org/jekyll/review/2017/2018/01/01/happy-new-year-2018.html
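
To give an idea of what the material side of that importer has to deal with, here is a toy sketch only: the parsing is limited to the simplest case, and the "renderer-independent" field names are entirely made up for this example.

    # Toy parser for a Radiance "plastic" primitive, mapped onto an invented
    # renderer-independent dictionary. Real scenes (aliases, patterns, BRTDfunc,
    # BSDF materials, ...) need much more than this; it only shows the idea.

    def parse_plastic(text):
        # A Radiance primitive is: modifier type identifier, followed by three
        # argument blocks, each introduced by a count (strings, integers, reals).
        tokens = text.split()
        modifier, mat_type, name = tokens[0], tokens[1], tokens[2]
        assert mat_type == "plastic", "this toy only handles 'plastic'"
        i = 3
        n_str = int(tokens[i]); i += 1 + n_str    # skip string arguments
        n_int = int(tokens[i]); i += 1 + n_int    # skip integer arguments
        n_real = int(tokens[i]); i += 1
        r, g, b, spec, rough = (float(t) for t in tokens[i:i + n_real])
        return {                      # invented intermediate representation
            "name": name,
            "modifier": modifier,
            "base_color": (r, g, b),  # diffuse reflectance
            "specular": spec,         # fraction reflected specularly
            "roughness": rough,       # RMS surface roughness
        }

    example = """
    void plastic red_wall
    0
    0
    5 0.7 0.05 0.05 0.02 0.05
    """
    print(parse_plastic(example))

Mapping that onto, say, a principled shader in another renderer is exactly where the heuristics (and the uncertainty) come in.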

Anyway, comparing renderers is really hard. Radiance is about being accurate;
others might look good and might be easier to play with. My interest is in
using Radiance (and others) for reference, trying to create similar-looking
images, comparing rendering times, etc. An ever-changing world ... Here I was
playing with false colors:
https://www.janwalter.org/jekyll/rendering/radiance/2015/10/02/classroom-radiance_falsecolor.html
But I'm not an expert user of Radiance; most of the people on this list know
much more about it.

I just wanted to give Radiance some credit for all the things which
made it into other renderers.

My 2 cents,

Jan


> nice that someone still stumbles upon my images and render comparisons ;)

Thanks for this, Jan (and thanks to Dion for starting this thread)!

This question comes up regularly. Since I work with energy modelers a lot,
I'm often asked "why don't you 'just' use (renderer_x) instead of Radiance,
everyone in architecture uses Revit, everyone uses GPUs and Revit and (all
the Rays) to make these awesome images, Radiance is all blotchy and takes
forever, and besides, my energy model runs in five minutes for an annual
simulation with 15-minute timesteps, blah blah blah..."

It's so refreshing to see someone who gets it. The validation issue is
huge, and we all owe a debt of gratitude to John Mardaljevic for the
initial validation of Radiance, but the whole small community of dedicated
Radiance users also gets to take a bow for rolling up their sleeves and
learning how Radiance and GI work. And the reason I rolled up my sleeves is
because I saw Radiance as one of a very few tools out there that offered
realistic images but also the photometric truth. There is no money in this,
unfortunately, and it speaks to a larger issue in BOTH the architecture AND
the illumination engineering industries, but I digress.

Greg's account of the demise of Lightscape is even more generous than what
actually happened. When Autodesk bought Lightscape from Discreet Logic (the
original purchasers of Lightscape from the founders), it was immediately
absorbed into their "3DStudio Viz" product, aimed at architects wanting to
make nice pictures while being able to say their images are "correct",
"accurate", etc. Well, the first beta of Viz included Lightscape's
renderer, but the render-as-falsecolor option was unceremoniously removed.
This was the first thing I noticed, since I actually used Lightscape as an
illumination engineering tool. I (and others on the beta test team) brought
this up, and the falsecolor option was added by the next release candidate.
This one provided falsecolor images, but _with no scale legend_. Useless.
For me, the writing was on the wall, even before version 1.0 of 3DS Viz
with "Lightscape Inside" ever saw the light of day. I think Lightscape's
awesome radiosity renderer lasted one year in the Autodesk Death Star.

In the US, AGi32 soldiers on as an excellent, primarily radiosity-based
lighting simulation tool. But it definitely lacks the portability and
modularity that Roland alluded to, and while validated, it also is closed
source. But it's far better than the soup of renderers that Jan has
compared for us, if you're interested in, let's call it, photometric
verisimilitude.

Jan, your efforts are awesome, and I thank you for them. I'm not sure how
much Carsten Bauer follows this list any more, but he did port Radiance to
C++ (in an attempt to better learn C++), in his "Radzilla" project. Similar
to your PBRT-to-Rust effort.

The last thing I wanted to mention in all of this is that while Radiance IS
validated, all that demonstrates is that Greg's raytracing implementation
CAN work and give accurate, credible results. But every model is an
opportunity for the simulationist to screw that all up with garbage for
input, or garbage for simulation parameters. And again, it's the
simulationist who questions the results who is drawn to Radiance in the
first place. That's why this community is so amazing.

- Rob
