A subtle VR rendering issue to share with you.
It’s about perceived feeling, not really rocket science.
The issue I have is this: it feels like the camera IPD in the default WebXR camera rig is too large.
As a result, the volume of the rendered world “feels” small.
Like when you walk into a house and it “feels” small, like a scale model.
IPD being the distance between the left and right cameras: it should typically be 64 mm, and ideally adapted to each user’s actual IPD, as is done with HMDs that have good (i.e. accurate) eye tracking, so that the user feels truly immersed. It’s not an obvious feeling, and it requires simulating visual universes one is used to. You won’t feel it if you are looking at a spaceship.
If the IPD used to render the images is too large, it’s as if you had a giant head in the world you are looking at. And I think that’s the case in our app.
Is there a way to change it in the rendering parameters?
In other words, is there a way to access and change this metric in the default VR camera rig?
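For instance, just to read it back, I’d imagine something along these lines (an untested sketch on my side, assuming one scene unit equals one meter and that the per-eye cameras are exposed on the XR camera’s rigCameras):

```ts
// Sketch: log the effective eye separation of the WebXR camera rig each frame.
// Assumes 1 scene unit = 1 meter and a scene set up for WebXR.
import { Scene, Vector3 } from "@babylonjs/core";

async function logEffectiveIPD(scene: Scene): Promise<void> {
    const xr = await scene.createDefaultXRExperienceAsync();
    const xrCamera = xr.baseExperience.camera; // the WebXRCamera driving the rig

    scene.onAfterRenderObservable.add(() => {
        // While in an immersive session the rig holds one sub-camera per eye.
        if (xrCamera.rigCameras.length >= 2) {
            const left = xrCamera.rigCameras[0].globalPosition;
            const right = xrCamera.rigCameras[1].globalPosition;
            const ipdMeters = Vector3.Distance(left, right);
            console.log(`Effective rig separation: ${(ipdMeters * 1000).toFixed(1)} mm`);
        }
    });
}
```

If that number sits well above the ~63–65 mm ballpark on a default Quest setting, that would already point at the problem.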
This information is provided by the underlying system. We are not setting or changing it in any way. It is dynamic on certain systems (the ones that can change the distance between the eyes, like the Oculus Quest), but if you feel the data is incorrect, there is little to nothing we at Babylon can do about it.
Should I understand that the IPD is set on the VR camera rig based on the numbers provided by the hardware runtime, i.e. via the abstraction layers (WebXR, OpenXR)?
At some point the engine receives that information and applies it, correct?
Also, if this is based on the system, it should differ depending on the hardware and OS? If I try on a Vive, will I get a different result?
Then the question is: how do we fix this?
Reaching out to Meta to explain the issue and get it fixed might take a while.
It will definitely require demonstrating the issue, along with a fix.
If you claim there’s an issue, it’s good practice to offer a proof by resolution.
Now, how could we patch this in an easy way?
Quick fix: increase the world size, compensating for the viewer height, until the effect is cancelled out.
I’ll try this, but tweaking the data to change the result is a really ugly fix.
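For reference, here is roughly what I mean (a rough sketch only; it assumes all the content can live under a single root node and that the compensation factor has been estimated by eye):

```ts
// Sketch of the "scale the world" workaround: enlarge everything by the same
// ratio the rig IPD appears to be too big by. Assumes 1 scene unit = 1 meter
// and that every top-level mesh can be re-parented under one root node.
import { Scene, TransformNode } from "@babylonjs/core";

function scaleWorldUp(scene: Scene, compensation: number): TransformNode {
    const worldRoot = new TransformNode("worldRoot", scene);

    // Re-parent the top-level meshes so one uniform scale affects the whole world.
    scene.meshes
        .filter((mesh) => !mesh.parent)
        .forEach((mesh) => (mesh.parent = worldRoot));

    // If the floor sits at y = 0 it stays there under a uniform scale about the
    // origin, so standing eye height relative to the floor is unchanged; only
    // the world around the viewer gets bigger.
    worldRoot.scaling.setAll(compensation);
    return worldRoot;
}

// e.g. scaleWorldUp(scene, 1.08) if the world feels roughly 8% too small.
```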
Better fix: actually change the IPD.
At some point in the BJS code, you receive that value and apply it in the rendering, correct?
In the case of a headset with a dial or eye tracking, it can even change during the experience.
It seems (from doing some research on my own) that it’s not taken into account.
Can you confirm this?
Is there a way we could “dial” this value up and down, to confirm my hypothesis?
I mean by changing the core BJS engine code, not in my app.
It might break the canonical WebXR implementation, but that’s for the sake of research, not for a public release.
Would that be possible?
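Or, if patching the engine is too invasive for a first test, maybe an app-side hack along these lines could approximate the experiment (a sketch only; I’m not sure of the exact per-frame ordering between the XR pose update and the render, so it might need adjusting):

```ts
// Rough experiment (not a fix): widen or narrow the eye separation by pushing
// each rig camera away from / toward the rig midpoint just before rendering.
// factor > 1 widens the rendering IPD (world should feel smaller),
// factor < 1 narrows it (world should feel larger).
import { Scene, WebXRCamera } from "@babylonjs/core";

function dialIPD(scene: Scene, xrCamera: WebXRCamera, factor: number): void {
    scene.onBeforeRenderObservable.add(() => {
        if (xrCamera.rigCameras.length < 2) {
            return;
        }
        const left = xrCamera.rigCameras[0];
        const right = xrCamera.rigCameras[1];
        const mid = left.position.add(right.position).scale(0.5);

        left.position = mid.add(left.position.subtract(mid).scale(factor));
        right.position = mid.add(right.position.subtract(mid).scale(factor));
    });
}
```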
Note that there are actually two parameters in the camera rig: the IPD and the convergence.
I guess you render parallel cameras and do some translation (HIT, Horizontal Image Translation) to compensate for the absence of convergence. It might be that HIT that is wrong. I need to spend a bit of time in my headset to confirm, and maybe compare with other VR and WebXR implementations (Unity, Three.js and co).
To explain why I care so much about this, years ago I wrote a textbook on 3D Cinema.
“3D Movie Making”, based on my work at Disney and DreamWorks on 3D animation.
It’s all about 3D camera settings and the feeling they create for the viewer.
And, just like reality is actually a construct, “presence” is a feeling.
To achieve deeper “presence”, the “visual scale” of the VR world needs to be realistic.
If you are referring to our rigging system - this was added back in the joyful Cardboard days and is not being used right now.
The code running the update is here - Babylon.js/webXRCamera.ts at master · BabylonJS/Babylon.js (github.com). As you see, we update the position and rotation directly from the values provided by WebXR, and we also use the projection matrix they provide. This would be the place to manipulate the values, if you want to.
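If you want to double-check what the system itself reports before we apply it, something like this (an untested sketch, assuming `xr` holds the default XR experience) should log the raw per-eye separation straight from the WebXR frame:

```ts
import { WebXRDefaultExperience } from "@babylonjs/core";

// Sketch: read the eye separation as reported by WebXR itself, before it is
// applied to the rig cameras. `xr` is assumed to be the WebXRDefaultExperience
// returned by scene.createDefaultXRExperienceAsync().
function logSystemIPD(xr: WebXRDefaultExperience): void {
    const sessionManager = xr.baseExperience.sessionManager;

    sessionManager.onXRFrameObservable.add((frame) => {
        const pose = frame.getViewerPose(sessionManager.referenceSpace);
        if (!pose || pose.views.length < 2) {
            return;
        }
        const l = pose.views[0].transform.position;
        const r = pose.views[1].transform.position;
        const ipdMm = Math.hypot(l.x - r.x, l.y - r.y, l.z - r.z) * 1000;
        console.log(`WebXR-reported eye separation: ${ipdMm.toFixed(1)} mm`);
    });
}
```

If that value already looks wrong, the issue sits below Babylon; if it looks right, the investigation moves to how the rig consumes it.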
I am not 100% sure what the issue is or what you are experiencing, but this is the first time anyone has mentioned it. I never say “we are right, there is nothing wrong with the code”, so I will gladly keep it open and wait for your feedback. If you think something is wrong in the framework itself, I will be happy to investigate and fix it.
Yes, I know, and to be frank I really appreciate your patience in reading and answering, because raising this kind of 3D-quality concern very often doesn’t get such good treatment.
Let’s get deeper into the subject; I’ll try to give you more details. (Let’s increase the LOD…)
Do you know that feeling when going back to childhood places and finding the places, furniture and objects “smaller” than you remember they were? Not “lower” because you have grown taller, but “smaller”, as if you had grown larger? That’s because your head grew and your personal IPD widened, making the world look smaller. Once again, it’s a feeling, not a perception. 3D perception / stereoscopic reconstruction being a high-level feature of our brains, it’s not a perception like color, light or heat. (And Human Visual System science teaches us that color vision is to a large part a self-illusion.)
This effect, 3D size, comes from the IPD of the rendering camera. We use it extensively in stereo-3D movie making. It allows us to cheat on the size of the scenes we fit on the screen, like fitting downtown New York on a 10-meter screen. Spoiler alert: it’s not shown at scale, it’s hugely reduced in size. And we do that by spreading the camera rig meters apart.
Once the size is fixed by the IPD setting, nothing can fix it later. People in 3D tend to think you can fix it by moving the images horizontally one way or another. But doing so, you only change the “distance” of the image, i.e. you affect how much you have to converge your eyes, how far away you set the whole scene. The “volume” of the scene is fixed, baked in at rendering time.
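To put a rough number on it, the first-order rule of thumb from stereography (assuming the viewing geometry otherwise matches the shooting geometry) is that the world appears scaled by roughly viewerIPD / renderIPD. The values below are purely illustrative:

```ts
// First-order rule of thumb: apparent world scale ≈ viewer IPD / rendering IPD.
// Values are illustrative, not measurements.
const viewerIpdMm = 64; // the user's actual eye separation
const renderIpdMm = 68; // hypothetical separation used by the camera rig

const apparentScale = viewerIpdMm / renderIpdMm;
console.log(`World feels about ${(apparentScale * 100).toFixed(0)}% of true size`);
// 64 / 68 ≈ 0.94: a 4 mm excess already shrinks the felt size by about 6%.
```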
Actually, resizing affects it too, as does a 3D-to-3D conversion through clever use of the Z-buffer, as done in reprojection, aka space warp, as Carmack named it. But these are post-processes. “Fix it in post” might be a VFX business model; it’s not movie-making craft.
To get to the point: it’s easy to get the IPD good enough for most people to consider it fine, as your brain will override the “feeling” with the “knowledge”. You know you are looking at a full-size car, therefore you cancel out the visual cue that it’s a scale model. Especially in VR, where you move around and the motion parallax supersedes the stereoscopic parallax. As 3D experts, we have learned to detect these imperfections. All it takes is to look at a scene, “forget” the context, and listen to your “feeling” about the size of the scene. In BJS scenes, I’d say it’s about 2/3 of full size, and this may come from only a few mm of error in the rig, not much more.
Thanks for your patience reading all this.
Hope I convinced you I have a genuine concern.
Will try to play with the code, but TBH, I might need some help.