Rendering for off-centre presentation in a VR lab

Hi George,

This seems like a very interesting project! Does this also include trying to simulate the high dynamic range found in real life?

For your problem I guess that the options -vs and -vl might help you.
From the rpict manpage:
  
-vs
Set the view shift to val. This is the amount the actual image will be shifted to the right of the specified view. This option is useful for generating skewed perspectives or rendering an image a piece at a time. A value of 1 means that the rendered image starts just to the right of the normal view. A value of −1 would be to the left. Larger or fractional values are permitted as well.

-vl
Set the view lift to val. This is the amount the actual image will be lifted up from the specified view, similar to the −vs option.
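
As a rough sketch of how these could be driven from your head tracking (the
coordinate conventions and the helper below are just my own illustrative
assumptions, not anything from the manpage or your setup), the shift and
lift for an off-centre viewer might be worked out like this:

    import math

    def window_view(eye, screen_centre, screen_w, screen_h):
        """Build rpict view options that treat the screen as a window.

        Assumes the screen lies in a plane of constant world y, facing
        the viewer (view direction +y, up +z); adapt the axes to your
        own scene.  All lengths are in the same units (e.g. metres).
        """
        ex, ey, ez = eye
        cx, cy, cz = screen_centre
        d = abs(cy - ey)  # perpendicular distance from eye to screen plane
        # View angles chosen so the image plane is exactly the screen
        # size at the screen distance.
        vh = 2.0 * math.degrees(math.atan(screen_w / (2.0 * d)))
        vv = 2.0 * math.degrees(math.atan(screen_h / (2.0 * d)))
        # -vs/-vl are in image-width/-height units, so re-centring the
        # frustum on the physical screen is just the lateral offset of
        # the screen centre from the eye, divided by the screen size.
        vs = (cx - ex) / screen_w
        vl = (cz - ez) / screen_h
        return (f"-vtv -vp {ex} {ey} {ez} -vd 0 1 0 -vu 0 0 1 "
                f"-vh {vh:.4f} -vv {vv:.4f} -vs {vs:.4f} -vl {vl:.4f}")

    # Example: viewer 1 m left of centre and 2 m back from a 6 m x 2 m screen
    print("rpict " + window_view((-1.0, -2.0, 0.0), (0.0, 0.0, 0.0), 6.0, 2.0)
          + " -x 3072 -y 1024 scene.oct > view.hdr")

The key point is that the view point sits at the eye, the view direction
stays perpendicular to the screen, and -vs/-vl re-centre the frustum on the
physical screen, so the perspective comes out correct for the off-centre
viewer without any post-hoc stretching.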

Hope this helps,

Giovanni

···

-----Original Message-----
From: P George Lovell [mailto:[email protected]]
Sent: 29 January 2013 09:36
To: Radiance general discussion
Subject: [Radiance-general] Rendering for off-centre presentation in a VR lab

Hi Everyone,

I'm attempting to use Radiance to generate images for presentation in a large VR space - the system currently uses a game engine, but I'm not happy with the quality of the rendering, as I'm interested in the visual perception of shadows and shading.

The space* is approximately a 6 x 6 x 2 metre volume with a stereo
(polarized) screen at one end; the screen is roughly 6 x 2 metres.

I want to present a rendered scene as if it lies behind the screen, i.e.
the screen is a window through which we view a rendered object. The
viewer can then walk through the space and I can present updated views
relative to the current viewing location (I have 6 DOF head tracking).
Obviously this requires a lot of offline rendering and a relatively
narrow movement range - just to cut the rendering overhead.

It's easy enough to see how I might render images for presentation when
the viewer and the viewing target are positioned centrally within the
world, i.e. looking straight forward towards the middle of the screen.
What I don't understand is how I render scenes for when the viewer has
moved off-centre to the left or right. Firstly, perspective is going to
make one side of the screen smaller than the other, so I'd need to
correct this so that the screen image fits on the actual screen.

I think I could build in some markers that denote the corners of the
large VR screen, then stretch the image so that these markers lie on the
corners of the screen - but this seems a little clumsy.

Is there a better way?

George

*<http://www.abertay.ac.uk/about/news/pickoftheweek/2010/name,5454,en.html>

--
Dr P. George Lovell,

Lecturer in Psychology
University of Abertay Dundee
Dundee
DD1 1HG

Tel 01382-308581
Fax 01382-308749

Researcher/Co-investigator,
School of Psychology, University of St Andrews

RM 2.01 (The Tower Room).
Phone (01334) 462085

_______________________________________________
Radiance-general mailing list
[email protected]
http://www.radiance-online.org/mailman/listinfo/radiance-general

Kudos to Giovanni for providing the correct answer to your perspective problem.

Do check out the rholo program, which was designed for interactive rendering with unconstrained views. The quality may not be up to your standards, but it does handle specular reflections and everything else in Radiance.

Best,
-Greg

···

From: "Giovanni Betti" <[email protected]>
Date: January 29, 2013 1:59:24 AM PST

Hi George,

This seems a very interesting project! Does this include also trying to simulate the high dynamic range found in real life?

For your problem I guess that the options -vs and -vl might help you.
From the rpict manpage:
  
-vs
Set the view shift to val. This is the amount the actual image will be shifted to the right of the specified view. This is option is useful for generating skewed perspectives or rendering an image a piece at a time. A value of 1 means that the rendered image starts just to the right of the normal view. A value of −1 would be to the left. Larger or fractional values are permitted as well.

-vl
Set the view lift to val. This is the amount the actual image will be lifted up from the specified view, similar to the −vs option.

Hope this helps,

Giovanni

-----Original Message-----
From: P George Lovell [mailto:[email protected]]
Sent: 29 January 2013 09:36

Hi Everyone,

I'm attempting to use Radiance to generate images for presentation in a large VR space - the system currently uses a game-engine but I'm not happy with the quality of the rendering as I'm interested in the visual perception of shadows and shading.

The space* is approximately a 6x6x2metre volume with a stereo
(polarized) screen at one end, the screen is roughly 6x2 metres.

I want to present a rendered scene as if it lies behind the screen, i.e.
the screen is a window through which we view a rendered object. The
viewer can then walk through the space and I can present updated views
relative to the current viewing location (I have 6 DOF head tracking).
Obviously this requires a lot of offline rendering and a relatively
narrow movement range - just to cut the rendering overhead.

It's easy enough to see how I might render images for presentation when
viewer and the viewing target are positioned centrally within the world,
i.e. looking straight forward towards the middle of the screen. What I
don't understand is how I render scenes for when the viewer has move
off-centre to left or right. Firstly, perspective is going to make one
side of the screen smaller than the other, I'd need to correct this so
that the screen image fits on the actual screen.

I think I could build-in some markers that denote the corners of the
large VR screen, then stretch the image so that these markers lie on the
corners of the screen - this seems a little clumsy.

Is there a better way?

George

*<http://www.abertay.ac.uk/about/news/pickoftheweek/2010/name,5454,en.html&gt;

--
Dr P. George Lovell,

Lecturer in Psychology
University of Abertay Dundee
Dundee
DD1 1HG

Tel 01382-308581
Fax 01382-308749

Researcher/Co-investigator,
School of Psychology, University of St Andrews

RM 2.01 (The Tower Room).
Phone (01334) 462085

George,

Sounds like a nice project! Please note that, depending on the number and
complexity of objects in your scene, the rendering times for Radiance may
not be suitable for real-time viewing. If you hit this problem, you can
pre-render textures for your objects in Radiance that capture the lighting
situation. You then just have to map the rendered textures onto your
objects and are free to use your game engine to provide the perspective
image necessary for your projection.
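
Just as an illustration of the baking idea (the wall geometry, file names
and resolution here are made up for the example; I'm using a parallel -vtl
view to capture a single flat surface), something along these lines could
produce a texture for one wall:

    import subprocess

    # Orthographic (-vtl) render of a 2 m x 1 m wall in the x-z plane,
    # facing +y, centred at (0, 0, 1); the view size equals the wall size
    # so the picture maps 1:1 onto the surface.
    rpict_cmd = [
        "rpict", "-vtl",
        "-vp", "0", "-1", "1",          # camera 1 m in front of the wall
        "-vd", "0", "1", "0",           # looking at the wall
        "-vu", "0", "0", "1",
        "-vh", "2", "-vv", "1",         # view width/height in world units
        "-x", "1024", "-y", "512",
        "scene.oct",
    ]
    with open("wall_tex.hdr", "wb") as hdr:
        subprocess.run(rpict_cmd, stdout=hdr, check=True)

    # Tone-map and convert to an 8-bit format the game engine can import.
    subprocess.run("pcond wall_tex.hdr | ra_ppm > wall_tex.ppm",
                   shell=True, check=True)

(For anything other than a flat face you would need a view per surface, or
an rtrace-based sampling scheme, but the principle is the same.)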

I don't have URLs at hand, but there were some presentations on this at
past Radiance workshops.

Regards,
Thomas
