My question is whether there is any way to make shaders dependent on each viewpoint (which is different for each eye). Note that my shaders are already reading the position, but both eyes are reading the position of the camera itself instead of using a different position for each point of view.
Thank you in advance.
The camera transformation is updated on every render call, and render is called for each camera individually.
Babylon takes care of that for you.
Do you mean if you can run a different shader altogether on each eye, or if those values are updated for each camera?
Yes, I mean using the origin of the ray from each point of view instead of a global one (the camera object).
This is updated for you; otherwise XR would not work. Every render call uses a different camera (the two rig cameras that are children of the XR camera, in the case of VR).
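To illustrate the structure described above, here is a minimal standalone sketch of an XR camera with two rig cameras. The field names (`rigCameras`, `position`) follow Babylon's `WebXRCamera`, but the objects here are plain stand-ins for illustration, and the ~64 mm eye offset is an assumed interpupillary distance, not a value from the thread:

```javascript
// Hypothetical stand-in for a WebXRCamera and its two rig cameras.
// In Babylon, the rig cameras are what actually render each eye.
const xrCamera = {
  position: { x: 0, y: 1.6, z: 0 }, // head (parent) position
  rigCameras: [
    { name: "left",  position: { x: -0.032, y: 1.6, z: 0 } },
    { name: "right", position: { x:  0.032, y: 1.6, z: 0 } },
  ],
};

// Each render call uses one rig camera, so the "current camera"
// differs per eye even though the parent position is shared:
for (const eye of xrCamera.rigCameras) {
  console.log(eye.name, eye.position.x);
}
// left -0.032
// right 0.032
```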
I’m sending the position by reference from the current camera, something like:
shaderMat.setVector3("refPos", currentCamera.position);
Is there any better option to get the current point of view (different for each eye) instead of the camera position?
Thank you for your time!
The camera’s position is updated on every render call for each eye. It is, however, possible that currentCamera is incorrect. Would you be able to show a quick reproduction of how you use it on the Playground, so I can help better?
I mean that I’m passing the camera object’s position manually, so I don’t know how I could read each eye’s position.
The shader will render twice, once for each eye. If the currentCamera variable is set correctly, the position will be different each time. Don’t expect a huge change; there should be a difference of about 60-70 mm between the two eyes.
You mean scene.activeCamera.position? I’m not getting any difference between frames in this sample; if the position changed, it should be printed to the console.
You should use onBeforeCameraRenderObservable, as scene.onBeforeRenderObservable only runs once per frame, using the parent WebXRCamera:
Babylon.js Playground (babylonjs.com)
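The pattern suggested above can be sketched as follows. Since Babylon itself isn't loadable here, the snippet mocks the observable and the shader material with tiny stand-ins (`Observable`, `shaderMat`, and the eye positions are all hypothetical); in a real scene you would register the same callback on `scene.onBeforeCameraRenderObservable` against your actual ShaderMaterial:

```javascript
// Minimal mock of Babylon's Observable so the sketch runs standalone.
class Observable {
  constructor() { this.observers = []; }
  add(fn) { this.observers.push(fn); }
  notifyObservers(data) { this.observers.forEach((fn) => fn(data)); }
}

// Hypothetical stand-ins for the scene and the shader material.
const uniforms = {};
const shaderMat = { setVector3: (name, v) => { uniforms[name] = v; } };
const scene = { onBeforeCameraRenderObservable: new Observable() };

// The key pattern: update the uniform from the camera that is about to
// render, so it fires once per eye with that eye's rig camera.
scene.onBeforeCameraRenderObservable.add((camera) => {
  shaderMat.setVector3("refPos", camera.globalPosition);
});

// Simulate one XR frame: left and right rig cameras render in turn.
const leftEye  = { globalPosition: { x: -0.032, y: 1.6, z: 0 } };
const rightEye = { globalPosition: { x:  0.032, y: 1.6, z: 0 } };

scene.onBeforeCameraRenderObservable.notifyObservers(leftEye);
console.log(uniforms.refPos.x); // -0.032
scene.onBeforeCameraRenderObservable.notifyObservers(rightEye);
console.log(uniforms.refPos.x); // 0.032
```

Registering on the scene-level camera observable (rather than a once-per-frame hook) is what makes the uniform eye-dependent: the callback receives whichever rig camera is rendering.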