Illuminating objects with a light probe

Hi there

When trying to illuminate objects with a light probe made from an indoor scene,
is it best to generate a cubic cross map in HDRShop and then map each of the 6
sides onto the room walls as 'glow' textures? Because it is a small scene with
small, intense sources, when I use genbox I never know which box size best
illuminates the objects, i.e. puts them in the right position relative to the
light source.
Is the best option to use mksource? If I need to map my light probe onto a
distant glow source rather than a large box, what should the geometric
description for that distant glow source be? When creating a mask to combine
probe images in HDRShop, since there is no reflection of a camera/cameraman in
the probe (it's a simulation picture rendered with Radiance), what other
artefacts is the mask needed for?

Thanks for any help

Tarik

···

--
Tarik Rahman
PhD student, Institute of Perception, Action and Behaviour
School of Informatics
University of Edinburgh

Hello,

When trying to illuminate objects with a light probe made from an indoor scene,
is it best to generate a cubic cross map in HDRShop and then map each of the 6
sides onto the room walls as 'glow' textures? Because as it is a small scene
with small intense sources, when I use genbox, I never know which is the best
size box to illuminate the objects, i.e. get them in the right position
w.r.t. the light source.

If you are going to move the viewpoint around, the closer the geometry of
the light probes is to the scene geometry, the better. In that case, I would
give the box the size of the room. However, this also depends on where the
camera (or mirrored ball, or viewpoint) was when the probes were captured.
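For what it's worth, here is a sketch of one face of such a room-sized glow
box (the 4 x 5 x 3 m size, the image name, and the picture.cal mapping are my
assumptions; each of the 6 cross-map faces would need its own colorpict/glow
pair and its own polygon, so genbox's single material is only a starting
point):

  void colorpict facemap
  7 red green blue face_pz.hdr picture.cal pic_u pic_v
  0
  0

  facemap glow faceglow
  0
  0
  4 1 1 1 0

  !genbox faceglow room 4 5 3 -i

The -i option to genbox inverts the surface normals so the box faces inward,
which is what you want when the scene sits inside it.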

Is the best option to use mksource? If I need to map my light probe onto a
distant glow source rather than a large box, what should the geometric
description for that distant glow source be?

mksource can help improve results and speed, especially if you have small
sources.
The description is very similar; you just map the probe onto a source object
like this:

  void colorpict sourcemap
  7 red green blue input_image.hdr fisheye.cal fish_u fish_v
  0
  0

  sourcemap glow sourceglow
  0
  0
  4 1 1 1 0

  sourceglow source source_1
  0
  0
  4 0 1 0 180
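Once that distant glow is compiled into an octree, mksource can replace the
brightest regions with true source primitives, which is what helps with small,
intense sources. A sketch (file names are placeholders; see the mksource man
page for its options):

  oconv distantglow.rad > distantglow.oct
  mksource distantglow.oct > sources.rad

The generated sources.rad is then added to the scene alongside the glow
description.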

This works for hemispherical light probes, and the result will be mapped onto
the +Y hemisphere.
If you look into fisheye.cal:

  fish_u = .5 + Dx/fish_Rxz * fish_Ry;
  fish_v = .5 + Dz/fish_Rxz * fish_Ry;

  fish_Rxz = sqrt(Dx*Dx + Dz*Dz);
  fish_Ry = acos(Dy) / PI;

the final "/PI" is what makes it work for hemispherical probes. If you are
using a 360-degree probe, you should change this to "/(2*PI)", and change the
last line of the description to "4 0 1 0 360".
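Put concretely, a full-sphere version might look like this (a sketch; I would
copy fisheye.cal to a new file, say fisheye360.cal, and reference that in the
colorpict, rather than editing the library copy):

  { fisheye360.cal -- mapping for a full 360-deg. probe }
  fish_u = .5 + Dx/fish_Rxz * fish_Ry;
  fish_v = .5 + Dz/fish_Rxz * fish_Ry;

  fish_Rxz = sqrt(Dx*Dx + Dz*Dz);
  fish_Ry = acos(Dy) / (2*PI);

and in the scene description:

  sourceglow source source_1
  0
  0
  4 0 1 0 360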

When creating a mask to combine probe images in HDRShop, since there is no
reflection of a camera/cameraman in the probe (it's a simulation picture
rendered with Radiance), what other artefacts is the mask needed for?

I'm sorry, I cannot help with this one; I haven't used it.
Regards,

Santiago


_______________________________________________
Radiance-general mailing list
[email protected]
http://www.radiance-online.org/mailman/listinfo/radiance-general