camera.upVector = new BABYLON.Vector3(0.0,0.0,1.0);
after the line where the camera is declared, giving…
var camera = new BABYLON.ArcRotateCamera("Camera", -2, 1, 85, BABYLON.Vector3.Zero(), scene);
camera.upVector = new BABYLON.Vector3(0.0,0.0,1.0);
Then the results are strange. The objects do appear in the split-screen stereoscopic display, but you have to hunt around for them a bit by moving the camera view. When you find them they are not at the same height on the screen, and when I use my mouse (laptop) to rotate the camera it's all a bit erratic.
My project uses the regular ArcRotateCamera with the up direction set to (0,0,1) because of the orientation of the data set I import, which is based on medical MRI scans and the conventions used there. I have hundreds of hours of development time on this project and there have been no issues whatsoever using ArcRotateCamera like this - it works perfectly.
Today I decided for fun to start playing around with stereoscopic views using the rig method, pretty much the same as the playground example above, with the single addition of my preferred camera up direction.
The issues seem to be twofold:
Left and right views are not as expected: objects appear at different vertical positions on screen.
Using the mouse to rotate the camera has odd effects: objects do not stay centered in their respective halves of the screen.
I thought I would flag it - it's not crucial for me personally. Is ArcRotateCamera with rigging making assumptions about the camera up direction?
Sorry to add a late observation. As a temporary experiment I changed my code to use the standard Babylon camera up direction so that I could play around, even though as a consequence my models were not oriented as usual.
In split-screen "side by side" mode with the rig option, it seems in most cases that the model is displayed with an aspect ratio that makes objects look tall and thin - in fact, as far as I can tell, about half the expected width.
I wondered therefore whether, in rig mode, an adjustment is missing: one to reflect that each side-by-side image has only half the screen or viewport width but the usual height. So while my canvas is, for example, perhaps 1000 x 500, does the renderer for the left and right images need to be informed that there is an 'effective canvas width' of 500?
I have only tested the side-by-side options
RIG_MODE_STEREOSCOPIC_SIDEBYSIDE_CROSSEYED and
RIG_MODE_STEREOSCOPIC_SIDEBYSIDE_PARALLEL,
but for these two at least the object is displayed at normal height but only half width.
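A minimal sketch of the suspected missing adjustment, using plain arithmetic rather than any Babylon.js API (the helper names are illustrative): each eye's projection should be built with the aspect ratio of its half-width viewport, not of the full canvas.

```javascript
// Hypothetical illustration of the half-width aspect-ratio issue.
// Not Babylon.js API calls - just the underlying arithmetic.
function fullCanvasAspect(width, height) {
  return width / height; // e.g. 1000 / 500 = 2
}

function perEyeAspect(width, height) {
  // In side-by-side mode each eye only gets half the canvas width.
  return (width / 2) / height; // e.g. 500 / 500 = 1
}

const w = 1000, h = 500;
console.log(fullCanvasAspect(w, h)); // 2
console.log(perEyeAspect(w, h));     // 1
// If the rig keeps the full-canvas aspect (2) while drawing into a
// half-width viewport, everything is horizontally squeezed by a factor of 2.
```

This would explain why the sphere in the playground looks like an 'egg': the geometry is rendered at normal height but compressed to roughly half width.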
My apologies for not spotting this earlier - indeed when I return to the playground example…
I see when I inspect the code that the 'egg' is actually a sphere, which also seems to confirm the horizontal compression. Also, rotating the group of objects from 'above' or 'below' shows how the sides of the cube compress and expand as their relative angle to the screen's horizontal direction changes.
Apologies - I wish I had spotted this and flagged it at the same time.
The stereoscopic cameras are still not rendering correctly because they use the toe-in method rather than an off-axis projection. This introduces so-called vertical disparities, which are quite problematic: they hurt the eyes, and scenes become difficult and often impossible to watch:
This stereo image is impossible to fuse (fusion being the brain constructing a proper 3D view from the left and right images).
All corresponding L/R pixels (e.g. the top corner of the cube) of the left (red) and right (cyan) cam should only be displaced on the x-axis. Currently they are also displaced on the y-axis. This most probably stems from the L/R cams being target cameras, thus behaving as a toe-in camera rig, which produces these vertical disparities.
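The geometry behind this can be shown with a few lines of pinhole-projection math (illustrative names, no Babylon.js API): with parallel cameras an off-center point projects to the same screen y in both eyes, while toeing the cameras in changes each eye's depth term differently and so shifts the point vertically.

```javascript
// Sketch: toe-in vs parallel rigs and vertical disparity.
// Plain pinhole projection; all names are illustrative.
function projectY(point, camX, toeInAngle) {
  // Move the point into camera space (camera sits at (camX, 0, 0)),
  // then rotate by the toe-in angle about the vertical (y) axis.
  const x = point.x - camX;
  const c = Math.cos(toeInAngle), s = Math.sin(toeInAngle);
  const zr = -s * x + c * point.z; // depth after the toe-in rotation
  const f = 1;                     // focal length
  return (f * point.y) / zr;       // projected screen-space y
}

const p = { x: 2, y: 1, z: 10 }; // an off-center point, e.g. a cube corner
const eyeSep = 0.5;

// Parallel rig: both cams look straight ahead -> identical projected y.
const yLeftPar = projectY(p, -eyeSep, 0);
const yRightPar = projectY(p, +eyeSep, 0);
console.log(yLeftPar === yRightPar); // true: no vertical disparity

// Toe-in rig: cams rotated toward a convergence point -> projected y differs.
const angle = Math.atan(eyeSep / 10); // converge at z = 10
const yLeftToe = projectY(p, -eyeSep, -angle);
const yRightToe = projectY(p, +eyeSep, +angle);
console.log(Math.abs(yLeftToe - yRightToe) > 1e-6); // true: vertical disparity
```

Target cameras that each aim at the same point behave like the toe-in case above, which is why every off-axis-of-center feature lands at a slightly different height in the two eyes.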
This is how it should look (achieved by a custom rig with non-target cams parented to a universal cam, then shifted in After Effects to achieve a 3D focal plane):
All output modes (anaglyph, interlaced, over/under, SbS cross-eyed and parallel) produce the same vertical disparities; here the parallel output is overlaid in Photoshop.
The correct method to eliminate vertical disparities is called off-axis projection. It can be achieved via camera-matrix manipulation (shifting the virtual sensor on its x-axis) or viewport cropping. I would be very happy to help with the mathematics/logic and with testing (but not with code - I'm not a programmer).
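For reference, the matrix-manipulation route amounts to building an asymmetric (off-axis) frustum per eye. A sketch of the standard glFrustum-style math follows; the function and parameter names are my own illustration, not Babylon.js API:

```javascript
// Sketch: off-axis (asymmetric frustum) projection for one eye.
// Column-major 4x4 matrix as in glFrustum; names are illustrative.
function offAxisFrustum(eyeOffset, convergence, fovY, aspect, near, far) {
  // Symmetric half-extents of the near plane.
  const top = near * Math.tan(fovY / 2);
  const bottom = -top;
  // Shift the left/right near-plane edges so both frusta share a common
  // zero-parallax (focal) plane at distance `convergence`.
  const shift = (eyeOffset * near) / convergence;
  const right = top * aspect - shift;
  const left = -top * aspect - shift;
  return [
    (2 * near) / (right - left), 0, 0, 0,
    0, (2 * near) / (top - bottom), 0, 0,
    (right + left) / (right - left), (top + bottom) / (top - bottom),
      -(far + near) / (far - near), -1,
    0, 0, (-2 * far * near) / (far - near), 0,
  ];
}

// The two eyes get mirrored horizontal skew terms, while the vertical
// terms are identical - hence no vertical disparity, and geometry at the
// convergence distance lands on the screen plane (zero parallax).
const L = offAxisFrustum(-0.03, 2.0, Math.PI / 3, 16 / 9, 0.1, 100);
const R = offAxisFrustum(+0.03, 2.0, Math.PI / 3, 16 / 9, 0.1, 100);
console.log(L[8] === -R[8]); // horizontal skew is mirrored
console.log(L[9] === R[9]);  // vertical terms match
```

The `convergence` parameter is what places the focal plane, which also addresses the second issue below.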
The second issue (for 3D-display users, not for headset users) is that the stereoscopic cameras do not feature a focal plane (zero-parallax plane). This leads to all geometry being placed in front of the screen/TV, so the in-screen area isn't used at all. This is also very difficult to watch. The off-axis projection solves this as well.
The third issue is that 3D-display and 3DTV users who require a side-by-side output most of the time require a SbS half (squeezed) output, shrinking each cam image to 50% on the x-axis:
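A small sketch of what SbS-half means in viewport terms (illustrative helper, not Babylon.js API): each eye is rendered at full height but squeezed into half the frame width, so the combined frame keeps the display's native resolution and the TV stretches each half back out.

```javascript
// Sketch: SbS-half (squeezed) viewports for a native-resolution frame.
// Illustrative helper names, not Babylon.js API.
function sbsHalfViewports(frameWidth, frameHeight) {
  const eyeWidth = frameWidth / 2;
  return {
    left:  { x: 0,        y: 0, width: eyeWidth, height: frameHeight },
    right: { x: eyeWidth, y: 0, width: eyeWidth, height: frameHeight },
  };
}

const vp = sbsHalfViewports(1920, 1080);
console.log(vp.left);  // { x: 0, y: 0, width: 960, height: 1080 }
console.log(vp.right); // { x: 960, y: 0, width: 960, height: 1080 }
// A 3DTV in SbS-half mode stretches each 960-wide half back to 1920,
// restoring the correct aspect ratio for each eye.
```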