Can Radiance render out a 360 equirectangular panorama?


I did a bit of searching and found that in 2016 some Radiance patches were written to add native support for rendering a 360 equirectangular panorama, but that was it. I also found this thread with some clever tricks.

Just wanted to ask if there has been any development on this, and if it can now render an equirectangular panorama with -vv 360 and -vh 360. It is possible to render out an angular fisheye and then use a tool like ImageMagick to remap the distortion, but detail is lost in the remapping.

Another approach might be to render out a cubemap and use that, but that means I’ll need to render out 6 images instead of one.

Any ideas?



I’ve done no further work on the cal-file method for rendering 360 views,
but I suspect someone on this list has a system for generating the 6
cube-map views. I’m interested, as well.



There are a couple of ways to do this, depending on how you want to arrange the output. Basically, you need 6 -vtv views with -vh 90 -vv 90, and the same origin for all. Your -vd and -vu vectors will change between views to get the right arrangement. For a 3x2 “baseball cover” image, these vectors might be:
-vu 0 -1 0 -vd -1 0 0
-vu 0 -1 0 -vd 0 0 1
-vu 0 -1 0 -vd 1 0 0
-vu -1 0 0 -vd 0 -1 0
-vu -1 0 0 -vd 0 0 -1
-vu -1 0 0 -vd 0 1 0

Traditionally, the last three views are also “flipped” vertically so the seams make more sense. You can give each of these views to a separate rpict command, or use rpict with the -S 1 option to render them all to separate output pictures. You can then combine the images using pcompos and pflip like so:

pcompos -a -3 frame1.hdr frame2.hdr frame3.hdr "\!pflip -v frame4.hdr" "\!pflip -v frame5.hdr" "\!pflip -v frame6.hdr" > baseball_cover.hdr
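The six (vu, vd) pairs above can be sanity-checked programmatically: each -vu must be perpendicular to its -vd, and the six view directions together must cover all six cube faces. A small Python sketch (the vectors are the ones listed above; the rpict option names printed at the end are just for illustration):

```python
# Sanity-check the six cubemap view vectors listed above:
# each -vd/-vu pair must be orthogonal unit vectors, and the
# six view directions together must hit every cube face once.

VIEWS = [  # (vu, vd) pairs, as listed above
    ((0, -1, 0), (-1, 0, 0)),
    ((0, -1, 0), (0, 0, 1)),
    ((0, -1, 0), (1, 0, 0)),
    ((-1, 0, 0), (0, -1, 0)),
    ((-1, 0, 0), (0, 0, -1)),
    ((-1, 0, 0), (0, 1, 0)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for vu, vd in VIEWS:
    assert dot(vu, vd) == 0   # up vector perpendicular to view direction
    assert dot(vd, vd) == 1   # unit-length view direction
    assert dot(vu, vu) == 1   # unit-length up vector

# The six -vd vectors cover +/-x, +/-y and +/-z exactly once.
assert sorted(vd for vu, vd in VIEWS) == sorted(
    [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)])

for vu, vd in VIEWS:
    print("rpict -vtv -vh 90 -vv 90 -vd %d %d %d -vu %d %d %d ..." % (vd + vu))
```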


Hi Dion,

I implemented equirectangular omni-directional stereo projection in my fork to use with head-mounted VR displays, if that’s what you’re interested in. It’s basically the same routine as Mark’s cal file, but implemented as an option to use in rvu and other Radiance programs.


P.S. I’d also point out these threads:


Thanks very much for all the help :slight_smile: I ended up going with the 6 views for the cubemap approach. It was relatively simple to do. It also works well for VR displays (I will post an example when these renders finish), and does not suffer from the potential loss of resolution around the poles that an equirectangular render has.

That fork looks cool, but for now I want to stick to vanilla Radiance. Good to know it exists, though!

I am coming across an issue in my rad file where I have:

view=down -vh 90 -vv 90 -vp 0.8087165355682373 -1.2962751388549805 0.4604237973690033 -vd 0.0 0.0 -1.0 -vu 0.0 1.0 0.0
rvu=-av 0 0 0 -ds .01 -dj .8 -dt 0
render=-av 0 0 0 -ds .01 -dj .8 -dt 0 -aE exclude.txt

However when I do render out with rad I get:

rpiece: warning - resolution changed from 1024x1024 to 1020x1020

Any explanations as to why the resolution changed?


When you are rendering a single view with -N > 1, the rad command starts multiple rpiece processes to do the actual work. The rpiece program subdivides an image into tiles, and depending on the number of tiles (set automatically by rad based on the number of processes), it often ends up rendering an original image of a different size. When it does, it issues a warning just to alert you to this condition. The subsequent pfilt command should reduce the final result to the specified resolution, so it normally does not matter.
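The arithmetic behind the warning can be illustrated with a hypothetical tile grid (the actual grid rad picks depends on -N and is an assumption here). Each tile must be a whole number of pixels, so the image dimension is rounded to a multiple of the tile count, for example with a 6x6 grid:

```python
# Hypothetical illustration of rpiece's resolution adjustment:
# each tile must be an integer number of pixels, so the image
# dimension becomes a multiple of the tile count.
# Assuming a 6x6 tile grid (the real grid depends on -N):
tiles = 6
requested = 1024
tile_size = requested // tiles   # 170 pixels per tile
adjusted = tile_size * tiles     # 1020
print(adjusted)                  # -> 1020, matching the warning above
```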


Hi Dion, hi all,

We’ve had good results with Mark’s cal file (thank you, Mark!) and mapping the result to one mesh sphere for each eye. The distortion is there, as you can see in the images below, but it’s not really noticeable in VR.

One issue we found with the cubemap projection is the tone-mapping, as you want all images to be tone-mapped in exactly the same way to avoid seeing the edges of the cube faces (which would make you see a cube rather than a seamless 360 degree environment). One solution is to tone-map the whole scene by merging the 6 faces into one scene by using pcompos as Greg showed above. This means that some seams will not be “touching” as they should and might lead to slight changes in the dynamic range mapping depending on how they are arranged, if you are using a local TMO. If they are arranged in a cross (which I think would maximize the number of ‘correct’ edges), we have the issue of the black background which is taken into account in the tone-mapping and would lead to a more washed out result.

Another (cruder!) option we have used before is to repeat the exact same procedure for all pictures, for example by changing the exposure with pfilt in one pass (pfilt -1 [options] [file]).

I’m very interested in other solutions if someone has something better to suggest! :slight_smile:


There is a trick I often use for animated sequences and the like to get the same exposure for all images using the “phisto” script. The idea is to generate a collective histogram of all the images and pass that to pcond like so:
phisto *.hdr > combined.hist
pcond -I < combined.hist img1.hdr > tm1.hdr
pcond -I < combined.hist img2.hdr > tm2.hdr

You can add whatever other options you like to each pcond call, just keep them consistent. This should result in the same tone-mapping operation each time.
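The reason the shared histogram keeps the cube seams invisible can be shown with a toy global operator (this is not pcond's actual algorithm, just the shared-statistics idea): derive the curve once from the combined data, then apply that same curve to every image.

```python
import math

# Toy stand-in for phisto + pcond -I: build ONE statistic from all
# images, then tone-map each image with the SAME curve, so a given
# luminance always maps to the same display value in every face.
img1 = [0.5, 2.0, 8.0]
img2 = [0.5, 32.0, 128.0]   # a much brighter face of the cube

# "Combined histogram" stand-in: one log-average over both images.
all_lum = img1 + img2
log_avg = math.exp(sum(math.log(v) for v in all_lum) / len(all_lum))

def tone_map(lum):
    # Simple global curve anchored to the shared log-average.
    scaled = lum / log_avg
    return scaled / (1.0 + scaled)

tm1 = [tone_map(v) for v in img1]
tm2 = [tone_map(v) for v in img2]

# The luminance 0.5 appears in both images and maps identically,
# so the seam between the two faces stays invisible.
assert tm1[0] == tm2[0]
```

Tone-mapping each face with its own statistics would give 0.5 two different display values, which is exactly the visible-seam problem described above.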

Work in progress - living room

I got past the 6-face tone mapping issue by using the raw file, and then using pfilt -1 -e X raw_foo.hdr > foo.hdr. I then used ImageMagick (convert foo.hdr foo.jpg) to produce a JPEG for the VR program. However, I’ve just tested it using Greg’s trick of the combined histogram and it works great, much better!

I found that I needed to use the raw file, because otherwise the pfilt that rad runs will change the values. This also means that I can’t rely on rad to automatically resize the image based on my QUALITY setting. I guess this means if I want a higher resolution and more antialiasing I need to set the options in rad carefully to give me an output raw file with the resolution that I need, and then resize it later with pfilt -1 -x /3 -y /3 ... for example for a high quality output.

You can view the thread here which has a link to the WebVR online viewer and my test render.

However I would like to try out using the 360 stereo cal file. Unfortunately, I don’t understand how to use it directly in rad. I’d like to use rad because it keeps the settings simple and I can easily use many CPU cores. Any pointers would be greatly appreciated!


Passing the images through pfilt before running phisto and pcond shouldn’t really change the result (other than whatever resizing pfilt does). I am puzzled by this.

The rad program plays nicely with rtrace for setting up options and the like, and rtrace itself supports multi-processing. Try:

rad -v 0 input.rif OPT=saved.opt
(ray-generating rcalc command) | rtrace -n NCORES -ffc -x XRES -y YRES @saved.opt octree > vr_result.hdr

The -ffc option only works if you use the -of option of rcalc, which is more efficient than the default ASCII output. Otherwise, use “-fac”.


Thanks Greg! I’m sure that I did something silly when it went through pfilt. I haven’t had time to retest, but I’m sure it’s a non-issue.

Thanks for the tip on running it via a saved.opt file. Everything works successfully! I have updated the link, and it now shows two scenes: one for the cubemap and one for the equirectangular projection. I did a very fast low quality render just as a test and it works great!

I have also modified the cal file to fix the horizontal flip issue by changing px = $2 to px = XD - $2;. I also created another version of the file in case you only want to create a single image instead of an over-under stereoscopic pair. Here it is:


{
  Definitions for full 360 equirectangular projection

  Originally based on (c)2014 Mark J. Stock
  Modified 2018 Dion Moult for a mono instead of a stereo image

  Use it like this:
  X=2048; Y=1024; cnt $Y $X | rcalc -f -e "XD=$X;YD=$Y;X=0;Y=0;Z=0" | rtrace [rpict options] -x $X -y $Y -fac scene.oct > out.hdr

  Parameters defined externally:
  X : -vp X coordinate
  Y : -vp Y coordinate
  Z : -vp Z coordinate
  XD : horizontal picture dimension ( pixels )
  YD : vertical picture dimension ( pixels )
}

{ Direction of the current pixel (both angles in radians) }
px = XD - $2;
py = YD - $1;
frac(x) : x - floor(x);
altitude = (frac((py-0.5)/(YD)) - 0.5) * PI;
{ azimuth from the horizontal pixel position }
azimut = px * 2 * PI / XD;

{ Transformation into a direction vector }
xdir = cos(azimut) * cos(altitude);
ydir = sin(azimut) * cos(altitude);
zdir = sin(altitude);

{ Output line to rtrace; each ray needs: xorg yorg zorg xdir ydir zdir }
$1 = X; $2 = Y; $3 = Z;
$4 = xdir; $5 = ydir; $6 = zdir;

{ EOF }
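For anyone who wants to check the mapping outside rcalc, here is the same arithmetic transcribed into Python (a direct port of the cal file above, with row and column playing the roles of $1 and $2 from cnt $Y $X):

```python
import math

def equirect_ray(row, col, XD, YD, origin=(0.0, 0.0, 0.0)):
    """Port of the cal file above: pixel (row, col) -> rtrace ray.
    row corresponds to $1 and col to $2 (as produced by cnt $Y $X)."""
    px = XD - col
    py = YD - row
    frac = lambda x: x - math.floor(x)
    altitude = (frac((py - 0.5) / YD) - 0.5) * math.pi
    azimuth = px * 2.0 * math.pi / XD
    d = (math.cos(azimuth) * math.cos(altitude),
         math.sin(azimuth) * math.cos(altitude),
         math.sin(altitude))
    return origin + d   # xorg yorg zorg xdir ydir zdir

# Every ray direction should come out unit-length, whatever the pixel.
for row in range(0, 1024, 101):
    for col in range(0, 2048, 199):
        ray = equirect_ray(row, col, 2048, 1024)
        norm = math.sqrt(sum(c * c for c in ray[3:]))
        assert abs(norm - 1.0) < 1e-9
```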

I have written up an article about how to use Radiance to produce the three types of 360 panoramas: sphere maps, cube maps, and equirectangular projection, based on what I have learned in this thread and thanks to all your help! I hope this is useful for somebody:


Greg, using phisto is an excellent trick for the tone-mapping. Thank you!

Glad to see you made it work, Dion, the WebVR scenes look great! I am looking forward to seeing your final rendering in VR. It’s a pity that the stereoscopy wouldn’t be correct in the simple cubemap (if you don’t look straight ahead). Seeing them side by side really shows the distortion at the poles in the equirectangular projection.

Thank you for writing these guidelines on how to use Radiance to make 360 degree panoramas for VR, I am sure they will be very useful!