Human eye simulation in rendered images

Hello, I would like to render images simulating the human eye response. This is particularly important as I would like to visually assess a dimly lit environment. I know that a 43 mm lens would roughly correspond to the ~55° cone of visual attention of our eyes, so for a 43 mm lens I would adopt VH = 45.4° and VV = 31.2°.
I would not apply any exposure adjustment, but rely only on pcond.
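Concretely, I was thinking of something like the following; the scene octree, view point and view direction are just placeholders for my model:

    # Perspective view approximating a 43 mm lens on a 35 mm sensor
    rpict -vtv -vp 2 3 1.5 -vd 0 1 0 -vh 45.4 -vv 31.2 \
          -x 1024 -y 704 scene.oct > raw.hdr

    # Tone-map the absolute luminances for human visual response
    pcond -h raw.hdr > eye.hdr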
Would this be correct in your opinion?

Thank you for your help.

Did this get moved to another thread, or did your question go unanswered?

I don’t know about the field of view restriction – I haven’t heard of anyone doing that before. The pcond -h option is designed to simulate low-light conditions, and should work so long as your rendering was done accurately using measured light sources and so forth.
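If it helps, -h is essentially a bundle of the individual human-vision switches (-a for acuity loss, -v for veiling glare, -s for contrast sensitivity and -c for mesopic color shifts), and you can also tell pcond about your display. A sketch, with a placeholder file name and assuming a 100 cd/m² monitor:

    # Human visual response, giving the display's maximum luminance
    pcond -h -u 100 raw.hdr > eye.hdr

Again, this only makes sense if the picture holds absolute luminances, i.e., your light sources were modeled from measured photometric data.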

Hi, thank you for your reply. No, it was not moved to another thread.
What I was looking for was the right combination of vh and vv to approximate the human eye. Calculations I have seen point at a 17 or 24 mm lens, but considering peripheral vision the eye would be better simulated by a 43 mm lens. The corresponding angles were calculated with very common online calculators.
As for pcond, I am completely fine with dark images as long as they approximate reality, and I would like to avoid exposure adjustments as much as possible, as these would falsify the results in some sense. I hope this makes sense.

Pcond is designed to create an image on the monitor that reproduces what the eye sees under the simulated conditions. There are no “standard dimensions” for the eye’s field of view, which is nearly 180° if you include peripheral vision. Recommendations of lens focal lengths are specific to the camera sensor size and the angular size of the image when viewed naturally in a photo or on a monitor (which may not be the same).
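If you do want the full field in a single image despite the distortion, the angular fisheye view type covers 180° in both directions. A minimal sketch, with placeholder scene and view values:

    # 180 degree angular fisheye, then human tone-mapping
    rpict -vta -vp 2 3 1.5 -vd 0 1 0 -vh 180 -vv 180 \
          -x 1024 -y 1024 scene.oct > fish.hdr
    pcond -h fish.hdr > fish_eye.hdr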

Thank you Greg. Yes our angle of view is around 180 degrees and ca 130 degrees up and down. That would look like a fisheye ‘without distortions’. If one does not account for the peripheral vision a 43 mm lens on a 35 mm sensor (the retina is 32 mm) might roughly work they say. I assume that generating a pair of stereo images would be a better solution (including somehow the peripheral area)? Would ‘vwright’ do the trick? But one should then modify the angles hv and vv to reach something closer to 180/130 degrees, summing up the contribution of the two pictures. Has someone here worked with it?
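I have not used vwright myself, so the brute-force version I have in mind is simply two renderings with the viewpoint shifted by half the interocular distance (~6.5 cm) along the view’s right axis; here that is the x axis, since the view looks down +y, and all coordinates and angles are made up:

    # Left and right eye views, 6.5 cm apart
    rpict -vtv -vp 1.9675 3 1.5 -vd 0 1 0 -vh 60 -vv 60 scene.oct > left.hdr
    rpict -vtv -vp 2.0325 3 1.5 -vd 0 1 0 -vh 60 -vv 60 scene.oct > right.hdr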

The only way to get where you want is with a head-mounted display. There are several on the market, but it’s more a question of interactive rendering/reprojection at that point. There are folks who have played with the simple smartphone-based optics for 360° viewing. For example, see Andy McNeil’s talk from the 2016 Radiance workshop.

Thank you Greg! I downloaded the .cal file from the ‘360 degree rendering’ page; I assume it is the code presented in the slides. I will try it!