How can I get the render output of left/right camera in VR mode?

I want to get the final rendered output of each camera when in VR mode, but I can’t figure out how.

In AR, I can get it with no problem using:

// Read the composited RGBA frame straight back from the WebGL2 context
const ctx = canvas.getContext('webgl2');
const pixelBuffer = new Uint8Array(canvas.width * canvas.height * 4);
ctx.readPixels(0, 0, canvas.width, canvas.height, ctx.RGBA, ctx.UNSIGNED_BYTE, pixelBuffer);

When I try to do the same in VR, reading from the “main” camera gives me a buffer that only contains the background color.

I also tried with the left/right cameras’ outputRenderTarget:

// Index 1 is the left rig camera in my scene while in XR
const leftCamera = scene.cameras[1];
// readPixels returns a Promise in recent Babylon versions, so it needs to be awaited
const pixelBuffer = await leftCamera.outputRenderTarget.readPixels();

but with this, I get a buffer that’s filled with zeros.

Does anyone have any suggestions on how I can achieve this?

cc @RaananW

Can you share a playground so I can understand at what point of the render you are trying to get the screenshot?

Oh, and - could you share your use case? Maybe there is a better solution than this :slight_smile:

It’s a bit hard to share a playground, because I’m using BabylonJS in combination with VueJS and it’s all held together by thoughts and prayers, but to describe it shortly:
I have a VueJS view where I have the canvas. Babylon is a separate JavaScript app that I kind of inject into the webpage. When the user presses the Enter button, I execute a function that registers an observer. Inside this observer is where I’m trying to do the readback. I’ve tried a couple of different observables: scene.onAfterRenderObservable, scene.onAfterCameraRenderObservable, renderTargetTexture.onAfterRenderObservable, and it’s the same result with all of them.

What I’m trying to achieve is to get the final RGBA buffer, preferably one per eye, which I can then stream to another device on my network. This works great in immersive-ar mode, but I want to do the same for each eye in VR.
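For context, the immersive-ar flow I have working looks roughly like this (the WebSocket endpoint, the Enter handler and the choice of observable are simplified placeholders for my Vue setup):

// Simplified sketch of the working AR flow - endpoint and key handling are placeholders
const socket = new WebSocket('wss://example.local/stream');
const ctx = canvas.getContext('webgl2');

window.addEventListener('keydown', (event) => {
    if (event.key !== 'Enter') {
        return;
    }
    scene.onAfterRenderObservable.add(() => {
        // Read the composited frame back from the canvas and push it over the socket
        const pixels = new Uint8Array(canvas.width * canvas.height * 4);
        ctx.readPixels(0, 0, canvas.width, canvas.height, ctx.RGBA, ctx.UNSIGNED_BYTE, pixels);
        socket.send(pixels);
    });
});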

The outputRenderTarget is a render target texture that should hold both eyes after the second camera has rendered. What you can try doing is attach to the onAfterCameraRenderObservable and see if you can generate a screenshot there. Otherwise you can try onAfterRenderCameraObservable (a different observable, we are so great at naming) and ask for a screenshot once the 2nd camera is done rendering.
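Something like this, off the top of my head (untested, assuming the rig cameras expose their outputRenderTarget while in XR):

// Untested sketch - read the XR output render target once the right eye has finished rendering
scene.onAfterRenderCameraObservable.add(async (camera) => {
    if (!camera.isRightCamera) {
        return;
    }
    const rtt = camera.outputRenderTarget;
    if (rtt) {
        // Should contain both eyes, since the render target is shared by the rig cameras
        const pixels = await rtt.readPixels();
        console.log("XR render target pixels: ", pixels);
    }
});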

Oh, and the reason I asked for a use case is because you can always turn on spectator mode (enableSpectatorMode on the experience helper), which will render the scene to the canvas (on top of the XR device), based on the options passed to the enableSpectatorMode function.
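Roughly like this (just a sketch, assuming you are using the default XR experience helper):

// Sketch - mirror the XR view to the page canvas while the immersive session is running
const xr = await scene.createDefaultXRExperienceAsync();
// ... enter the immersive session ...
// Options (e.g. which camera to follow, update rate) can be passed here as described above
xr.baseExperience.enableSpectatorMode();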

Thanks for the suggestions, but I’m doing this on a smartphone and I don’t think spectator mode is useful here.
It also requires additional resources to render another camera. That’s why I’m trying to get the texture data, since it’s already rendered, but no matter which observable I try, the texture is filled with only zeros, even though the meshes are visible on the screen.

I’ve played around a bit with your suggestions and I’ve noticed a mismatch between documentation and execution, or maybe I’m not understanding it correctly.

In the docs for “onAfterCameraRenderObservable” it says “An event triggered after rendering a camera This is triggered for the full rig Camera only unlike onAfterRenderCameraObservable”,
but when I run the code

scene.onAfterCameraRenderObservable.add(async (camera) => {
    console.log("CAMERA: ", camera);
    if (camera.isLeftCamera) {
        console.log("LEFT CAMERA");
    }
    if (camera.isRightCamera) {
        console.log("RIGHT CAMERA");
    }
    // Read back the last texture in the scene, which should be the XR render target
    let vrTexture = await scene.textures[scene.textures.length - 1].readPixels();
    console.log("VR TEXTURE: ", vrTexture);
    // Log every non-zero byte to check whether the texture holds any actual pixel data
    for (let i = 0; i < vrTexture.length; i++) {
        if (vrTexture[i] !== 0) {
            console.log("DATA AT: ", vrTexture[i]);
        }
    }
});

I get:
[screenshot of the console output from the onAfterCameraRenderObservable handler]

On the other hand, for “onAfterRenderCameraObservable” the docs say “An event triggered after rendering the scene for an active camera (When scene.render is called this will be called after each camera) This is triggered for each “sub” camera in a Camera Rig unlike onAfterCameraRenderObservable”, but here, when I run the code

scene.onAfterRenderCameraObservable.add(async (camera) => {
    console.log("CAMERA: ", camera);
    if (camera.isLeftCamera) {
        console.log("LEFT CAMERA");
    }
    if (camera.isRightCamera) {
        console.log("RIGHT CAMERA");
    }
    // Read back the last texture in the scene, which should be the XR render target
    let vrTexture = await scene.textures[scene.textures.length - 1].readPixels();
    console.log("VR TEXTURE: ", vrTexture);
    // Log every non-zero byte to check whether the texture holds any actual pixel data
    for (let i = 0; i < vrTexture.length; i++) {
        if (vrTexture[i] !== 0) {
            console.log("DATA AT: ", vrTexture[i]);
        }
    }
});

I get:
[screenshot of the console output from the onAfterRenderCameraObservable handler]

It looks like the description in the documentation is switched around.

But in both cases, the texture is empty. Am I reading the texture wrong?