Current state of secondary views/WebXR recording

This isn’t directly about BabylonJS (although the VideoRecorder might have to support this), but does anyone know the current state of WebXR video capture, particularly with AR?

Secondary views were supposed to solve this, but I can’t find up-to-date information about whether they are already available in browsers (there are some commits for Chromium and Servo), whether they are usable, and whether they can capture the camera view rather than only the 3D view.


cc @RaananW

Secondary views!

I am not aware of any browser implementing this (I believe it is still very unstable). Babylon’s architecture doesn’t support secondary views at the moment, but it should be quite possible: create a camera per view and add both cameras to the activeCameras array. If you want to create a GitHub issue for secondary views, I will be more than happy to look into it when I get a chance. It won’t make it into 5.0, sadly.

A small note about AR screen recording - the camera feed will not be a part of any export directly from WebXR (until browsers allow that). At the moment the best way is to screen-capture your device (if it is an Android device).


Would Babylon Native help with this? Does it currently run native *XR, and can it capture the screen directly (Android/iOS)?

@bghgary - can babylon native help in this case?

This might be useful info for WebXR: the raw-camera-access proposal in the immersive-web GitHub organization (immersive-web/raw-camera-access).

Yes, this can be done with Babylon Native. We have access to the camera frames and they can be recorded. Maybe @ryantrem can give more details.

Thank you, I’ve been looking at the specs. But do you know if any browsers support that yet? As far as I could see none do.

@RaananW knows way more than I do and already answered. 🙂


There is experimental support for this in Babylon Native, but it seems like the question is browser specific.

I’m actually interested in the current state of Babylon Native for this as well.

In Babylon Native, there is an undocumented global class called NativeCapture. You can instantiate this class and pass in a frame buffer to the constructor, then you can call addCallback to register a callback that is called every frame with the raw rgb data for each rendered frame, then call dispose to stop capturing frames. This is the contract for NativeCapture:

type CapturedFrame = {
  width: number;
  height: number;
  format: "RGBA8" | "BGRA8" | undefined;
  yFlip: boolean;
  data: ArrayBuffer;
};

type CaptureCallback = (capture: CapturedFrame) => void;

declare class NativeCapture {
  public constructor(frameBuffer: unknown);
  public addCallback(onCaptureCallback: CaptureCallback): void;
  public dispose(): void;
}

You could (for example) instantiate one of these like this:
const nativeCapture = new NativeCapture((camera?.outputRenderTarget?.renderTarget as any)?._framebuffer);

If a camera is not provided, then the default (on-screen) frame buffer will be captured. As you can see, doing this can require reaching into Babylon.js internals. NativeCapture is not really intended to be used directly right now; rather, it is a building block that should be used by a future Babylon.js abstraction for capturing screenshots and videos that works both in the browser and in the context of Babylon Native.
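To make the capture lifecycle concrete, here is a small hypothetical helper written purely against the contract shown above (collectFrames is my own illustration, not part of Babylon Native), which also means it can be exercised with a stub instead of a real NativeCapture instance:

```typescript
// Shapes copied from the NativeCapture contract above.
type CapturedFrame = {
  width: number;
  height: number;
  format: "RGBA8" | "BGRA8" | undefined;
  yFlip: boolean;
  data: ArrayBuffer;
};

// Structural subset of NativeCapture, so the helper works with any
// object exposing the same addCallback/dispose contract.
interface CaptureLike {
  addCallback(cb: (frame: CapturedFrame) => void): void;
  dispose(): void;
}

// Hypothetical helper: collect `count` frames, then dispose the capture.
function collectFrames(capture: CaptureLike, count: number): Promise<CapturedFrame[]> {
  return new Promise((resolve) => {
    const frames: CapturedFrame[] = [];
    capture.addCallback((frame) => {
      if (frames.length < count) {
        frames.push(frame);
        if (frames.length === count) {
          capture.dispose(); // stop capturing once we have enough frames
          resolve(frames);
        }
      }
    });
  });
}
```

In a real app you would pass `new NativeCapture(framebuffer)` as the first argument instead of a stub.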

NativeCapture only provides raw frames, though; it does not support encoding these frames into a video (such as an MP4), so currently you would have to do that yourself.

In Babylon React Native, there is a thin wrapper around NativeCapture to make it slightly easier to use. It is called CaptureSession and is constructed from a Camera (optionally) and a frame capture callback.

So in summary:

  • You can get raw rendered frames as rgb, but this is very experimental right now, not intended for direct usage, and will probably change in the future.
  • Given the raw rgb frames, you could encode a video yourself, but Babylon Native does not currently provide this for you.

Thank you @ryantrem for the detailed answer. In an AR app, do you know if this captures the camera too?

Yes, it includes the camera if you start with the correct camera. On a phone (e.g. Android/iOS) rendering is mono (as opposed to head-mounted displays, which use stereo rendering), so in the case of a phone you’d use WebXRCamera._rigCameras[0]. (For a more complete picture: with stereo rendering on an HMD, _rigCameras would have two elements, and you’d have to decide which eye you wanted to capture - the left or the right.)


Hi @ryantrem, do you know if there is a way to get phone camera data without Babylon objects?

  1. Do you mean in Babylon Native, or Babylon React Native?
  2. Do you mean on the native side, or on the JavaScript side?
  1. Babylon react native
  2. JavaScript side

And what are you trying to do with the data? Do you think you’d be wanting access to the texture (before the scene is rendered on top of it), or are you looking for the rgb pixel data in an ArrayBuffer, or something else?

I need to pass camera data without meshes to OpenCV (imported OpenCV as a Java module), so any of those will do. I would like to hear about all the options so I can test performance between them. What do you think will be optimal for performance in my case?

I think any potential options will depend on a few more details. When you say “imported OpenCV as a Java module”, do you mean you are using a React Native module for OpenCV? If so, can you point me to more info on it? If the image data is going from the React Native JS context to a native context, then how that image data is passed between those two contexts is important.

Ok, I agree. I imported OpenCV following this tutorial: How to Use React Native & OpenCV for Image Processing, and created my own functions for processing images which I call from JS. So yes, image data is passed from the JS side to the native side. From the data I create an OpenCV Mat to process it further.
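One common (though not necessarily the fastest) way to move pixel data from the JS side across the React Native bridge is to serialize it as a base64 string, since the classic bridge passes strings but not raw ArrayBuffers. A minimal, dependency-free sketch of that encoding step (bytesToBase64 is my own illustration; in practice a Buffer polyfill or a native module would usually do this, and sending full frames as base64 every frame may be too slow, so measure first):

```typescript
const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Encode raw bytes (e.g. RGBA pixel data) as a base64 string, which can be
// passed across the React Native bridge and decoded on the Java side
// before constructing an OpenCV Mat from it.
function bytesToBase64(bytes: Uint8Array): string {
  let out = "";
  for (let i = 0; i < bytes.length; i += 3) {
    const b0 = bytes[i];
    const b1 = i + 1 < bytes.length ? bytes[i + 1] : 0;
    const b2 = i + 2 < bytes.length ? bytes[i + 2] : 0;
    out += B64[b0 >> 2];
    out += B64[((b0 & 3) << 4) | (b1 >> 4)];
    out += i + 1 < bytes.length ? B64[((b1 & 15) << 2) | (b2 >> 6)] : "=";
    out += i + 2 < bytes.length ? B64[b2 & 63] : "=";
  }
  return out;
}
```

For per-frame throughput, a JSI-based approach that shares memory between JS and native would likely outperform string serialization, which is why comparing the options is worthwhile.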