
Side by Side VR from Spherical photos/renders with gaze support

04 April 2018 13:04
Hi!

I can't quite find the answer to this... maybe it's obvious.

What I need is to know how to set up my scene in Blender (hopefully with the Cycles nodes) to load two spherical photos, one for each eye, and roughly how to implement a gaze controller - a circular loader that performs a click when the user looks at an interactable object for a set amount of time.

I would also need to be able to add planes for the zenith and nadir, and other objects in the scene that will trigger loading of a different set of spherical photos for the background.

I'm not doing this in the WebPlayer app, but in the Custom Type set up through the SDK, so I'm expecting there will be some coding to be done to achieve this?

Is this at all possible at this point?
Thanks for any help :) !
04 April 2018 15:31
What I need is to know how to set up my scene in Blender (hopefully with the Cycles nodes) to load two spherical photos, one for each eye
A spherical photo for each eye will not give you proper stereoscopic vision, because each photo is created from a fixed point in space for the left and right eye. That means you will see a correct stereoscopic image only in one fixed orientation of your VR device; other orientations will break your brain.
Usually a single spherical photo is used for a VR experience, and the scene is filled with some 3D objects for interaction. To add an environment photo you should switch the render engine to Cycles and configure the world material. Also, .hdr maps are currently not supported, so you should convert them to JPEG or PNG.
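
For reference, here is a minimal Python sketch of that world setup, done from Blender's scripting console; the path "//env.jpg" is a hypothetical equirectangular photo sitting next to the .blend file.

```python
# Minimal sketch (Blender Python console): switch to Cycles and build a
# world material that uses an equirectangular photo as the environment.
# "//env.jpg" is a hypothetical path relative to the .blend file.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

world = scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

# Environment Texture -> Background (the Background node exists by default).
env = nodes.new('ShaderNodeTexEnvironment')
env.image = bpy.data.images.load("//env.jpg")  # JPEG/PNG, not .hdr
links.new(env.outputs['Color'], nodes['Background'].inputs['Color'])
```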

how to implement a gaze controller - a circular loader that performs a click when the user looks at an interactable object for a set amount of time.
For a gaze controller I would use ray casting: cast a ray from a point between the eyes in the view direction. I suppose the circular loader should be placed at the hit point; it can be implemented with an animated shader.
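
To illustrate just the dwell-to-click part (the timer that fills the circular loader and fires a click), here is a rough, engine-agnostic Python sketch; the actual ray cast is left as a placeholder, since the exact Blend4Web call isn't covered in this thread.

```python
# Rough sketch of gaze "dwell to click" logic, engine-agnostic.
# ray_cast_from_view() is a placeholder for whatever ray-cast/pick call
# the engine provides (e.g. picking at the canvas centre each frame).
DWELL_TIME = 2.0  # seconds the user must keep looking at an object

class GazeController:
    def __init__(self):
        self.target = None
        self.elapsed = 0.0

    def update(self, dt, ray_cast_from_view, on_click):
        """Call once per frame; returns loader progress in [0, 1]."""
        hit = ray_cast_from_view()     # hovered interactable object, or None
        if hit is None or hit is not self.target:
            self.target = hit          # new (or no) target: restart the timer
            self.elapsed = 0.0
            return 0.0
        self.elapsed += dt
        if self.elapsed >= DWELL_TIME:
            on_click(hit)              # perform the "click"
            self.elapsed = 0.0         # reset so it doesn't re-trigger immediately
        return min(self.elapsed / DWELL_TIME, 1.0)
```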

I'm not doing this in the WebPlayer app, but in the Custom Type set up through the SDK, so I'm expecting there will be some coding to be done to achieve this?
Unfortunately, there is no ready-made code for this. But it is definitely a complete and important use case.

Is this at all possible at this point?
This is possible, but some coding is required.

Also see the VR application for reference.
Alexander (Blend4Web Team)
10 April 2018 17:20
Hi!

Thanks for the response!

Actually, stereo rendering using two panoramas is possible. We generate such panoramas using an Insta360 camera, and also with pre-rendered video and stills.

Here's an article which I think covers it:
https://developers.google.com/vr/jump/rendering-ods-content.pdf

That's why I need to show a different image for each eye, either as a top/down setup or as two different panoramas.
In Unity I'd just have two cameras rendering different layers; with Blend4Web I have no idea how to set this up.
10 April 2018 18:21
From the rendering-ods-content.pdf:
"Building a physical camera with this ray geometry is a hard problem (see g.co/jump), but fortunately, for
CG content it should be as simple as changing the ray equations in your ray tracer."
This means that you can create such stereoscopic images in 3D software by rendering 3D scenes with the algorithm described in this paper. For real-world footage you need ~16 cameras and additional software (see the link above). I don't know whether the Insta360 is able to generate ODS images.
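For what it's worth, here is a rough Python sketch of the idea from the paper, generating per-column rays for one eye: every output column gets its own ray origin, offset sideways on a circle of diameter IPD so the stereo baseline stays perpendicular to the viewing direction. The exact axis conventions and signs below are my own assumption; the paper has the authoritative equations.

```python
# Sketch of ODS-style ray generation for one eye.
import math

IPD = 0.064  # interpupillary distance in metres (typical value)

def ods_ray(u, v, eye):
    """u, v in [0, 1) across the equirectangular image; eye = -1 (left) or +1 (right).
    Returns (origin, direction) of the ray to trace for that pixel."""
    theta = (u - 0.5) * 2.0 * math.pi        # azimuth, full 360 degrees
    phi = (v - 0.5) * math.pi                # elevation, -90..+90 degrees
    # Viewing direction for this pixel.
    direction = (math.sin(theta) * math.cos(phi),
                 math.sin(phi),
                 -math.cos(theta) * math.cos(phi))
    # Eye position: offset perpendicular to the horizontal view direction,
    # rotating with the azimuth so the baseline follows the view.
    scale = eye * IPD / 2.0
    origin = (math.cos(theta) * scale, 0.0, math.sin(theta) * scale)
    return origin, direction
```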
Anyway, there is currently no existing solution in Blend4Web (a specific shader is required for rendering such stereoscopic panoramas). But the idea is interesting; thanks for the information. I think we should investigate this field.
Alexander (Blend4Web Team)
11 April 2018 16:15
Yeah, we've also used just two plain spheres, one per eye; if each sphere is rendered correctly, the illusion is consistent for each eye. It's either a flip where the left image is replaced with the right image at the back of the sphere (but you lose some 3D in the middle), or a special process where each column of pixels is rendered from a different camera position (as far as I understand it).

We do have some simple files with two PNGs, one per eye, where in our own engine (now sadly no longer in use, and without WebGL support) the illusion was largely correct, with just the correctly prepared panoramas and a simple emissive shader.

If I knew how to render two cameras to a left/right split view, each camera rendering a different background, I could test it with the images we already have.

The Insta360 camera outputs the two stitched JPEG images in a top/down configuration that can be viewed in 360 stereo in various viewers.
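
In case it helps for testing, a quick Pillow sketch that splits such a top/down image into the two per-eye panoramas; which half belongs to which eye depends on the camera, so the top-half-is-left choice below is an assumption, and the file name is hypothetical.

```python
# Quick sketch: split a top/down stereo panorama into two per-eye images.
from PIL import Image

pano = Image.open("insta360_topdown.jpg")   # hypothetical file name
w, h = pano.size
left = pano.crop((0, 0, w, h // 2))         # top half (assumed left eye)
right = pano.crop((0, h // 2, w, h))        # bottom half (assumed right eye)
left.save("pano_left.jpg")
right.save("pano_right.jpg")
```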
 