This isn’t directly about BabylonJS (although the VideoRecorder might have to support this), but does anyone know the current state of WebXR video capture, particularly with AR?
Secondary views were supposed to solve this, but I can’t find up-to-date information about whether they are already available in browsers (there are some commits for Chromium and Servo), whether they are usable, and whether they can capture the camera view and not only the 3D view.
I am not aware of any browser implementing this (I believe it is still very unstable). Babylon’s architecture doesn’t support secondary views at the moment, but it should be very possible to add: create a camera per view and add both cameras to the activeCameras array. If you want to create a GitHub issue for secondary views, I will be more than happy to look into it when I get a chance. It won’t make it into 5.0, sadly.
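The "camera per view" idea above can be sketched in a few lines. The following is a hedged sketch only: the `Camera`/`Scene` interfaces here are minimal stand-ins for the real Babylon.js types (not the actual API surface), and `attachViewCameras` is a hypothetical helper name, but it shows the shape of the approach of pushing one camera per XR view onto `scene.activeCameras`:

```typescript
// Minimal stand-ins for the Babylon.js types involved (assumed shapes).
interface Camera { name: string; }
interface Scene { activeCameras: Camera[] | null; }

// Hypothetical helper: register one camera per XR view so the engine
// renders every view, not just the primary one.
function attachViewCameras(scene: Scene, viewCameras: Camera[]): void {
  scene.activeCameras = scene.activeCameras ?? [];
  for (const cam of viewCameras) {
    // Avoid double-registering a camera that is already active.
    if (!scene.activeCameras.includes(cam)) {
      scene.activeCameras.push(cam);
    }
  }
}
```

In real Babylon.js you would create the cameras from the XR views themselves and let the XR session manager drive their transforms; the sketch only covers the `activeCameras` bookkeeping.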
A small note about AR screen recording: the camera feed will not be part of any export directly from WebXR (until browsers allow that). At the moment the best way is to screen-capture your device (if it is an Android device).
In Babylon Native, there is an undocumented global class called NativeCapture. You can instantiate this class, passing a frame buffer to the constructor; call addCallback to register a callback that is invoked with the raw pixel data of each rendered frame; and call dispose to stop capturing frames. This is the contract for NativeCapture:
```ts
type CapturedFrame = {
    width: number;
    height: number;
    format: "RGBA8" | "BGRA8" | undefined;
    yFlip: boolean;
    data: ArrayBuffer;
};

type CaptureCallback = (capture: CapturedFrame) => void;

declare class NativeCapture {
    public constructor(frameBuffer: unknown);
    public addCallback(onCaptureCallback: CaptureCallback): void;
    public dispose(): void;
}
```
You could (for example) instantiate one of these like this:

```ts
const nativeCapture = new NativeCapture((camera?.outputRenderTarget?.renderTarget as any)?._framebuffer);
```
If a camera is not provided, then the default (on screen) frame buffer will be captured. As you can see, doing this can require reaching into Babylon.js internals. NativeCapture is not really intended to be used directly right now, rather it is a building block that should be used by a future Babylon.js abstraction for capturing screenshots and videos that works in both the browser and in the context of Babylon Native.
NativeCapture only provides raw frames though, it does not provide support for encoding these frames into a video (such as an mp4), so currently you would have to do that yourself.
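Before handing the raw frames to an encoder, you generally have to normalize them: honor the `yFlip` flag (rows stored bottom-up) and swizzle `BGRA8` into the RGBA order most encoders expect. The helper below is a hedged sketch, not a Babylon API — `toTopDownRGBA` is a name I made up, and the `CapturedFrame` type is copied from the contract above:

```typescript
// Shape of Babylon Native's captured frame (from the contract above).
type CapturedFrame = {
  width: number;
  height: number;
  format: "RGBA8" | "BGRA8" | undefined;
  yFlip: boolean;
  data: ArrayBuffer;
};

// Hypothetical helper: normalize a captured frame to top-down RGBA bytes,
// the layout most video encoders expect as input.
function toTopDownRGBA(frame: CapturedFrame): Uint8Array {
  const src = new Uint8Array(frame.data);
  const out = new Uint8Array(src.length);
  const rowBytes = frame.width * 4;
  for (let y = 0; y < frame.height; y++) {
    // If yFlip is set, the source rows are stored bottom-up: read them in reverse.
    const srcRow = frame.yFlip ? frame.height - 1 - y : y;
    for (let x = 0; x < rowBytes; x += 4) {
      const s = srcRow * rowBytes + x;
      const d = y * rowBytes + x;
      if (frame.format === "BGRA8") {
        out[d] = src[s + 2];     // R (was B)
        out[d + 1] = src[s + 1]; // G
        out[d + 2] = src[s];     // B (was R)
      } else {
        out[d] = src[s];
        out[d + 1] = src[s + 1];
        out[d + 2] = src[s + 2];
      }
      out[d + 3] = src[s + 3];   // A
    }
  }
  return out;
}
```

You would call this inside the `addCallback` handler and feed the result to whatever encoder you choose (e.g. a WebAssembly mp4 encoder); the encoder hand-off itself is out of scope here.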
In Babylon React Native, there is a thin wrapper around NativeCapture to make it slightly easier to use. It is called CaptureSession and is constructed from a Camera (optionally) and a frame capture callback.
So in summary:

- You can get raw rendered frames as RGB, but this is experimental right now, not intended for direct usage, and will probably change in the future.
- Given the raw RGB frames, you could encode a video yourself, but Babylon Native does not currently provide this for you.
Yes, it includes the camera feed if you start with the correct camera. On a phone (e.g. Android/iOS), rendering is mono (as opposed to head-mounted displays, which use stereo rendering), so in the case of a phone you’d use WebXRCamera._rigCameras[0]. For a more complete understanding: with stereo rendering on an HMD, _rigCameras would have two elements, and you’d have to decide which one you wanted to capture, the left eye or the right eye.
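To make the mono/stereo distinction concrete, here is a hedged sketch of picking the rig camera to capture. The helper name, the `Eye` type, and the assumption that index 0 is the left eye are all mine, not Babylon.js conventions you should rely on:

```typescript
type Eye = "left" | "right";

// Hypothetical helper: pick which rig camera's output to capture.
// Phones render mono (one rig camera); HMDs render stereo (two).
function pickRigCamera<T>(rigCameras: T[], eye: Eye = "left"): T {
  if (rigCameras.length === 0) {
    throw new Error("no rig cameras available");
  }
  if (rigCameras.length === 1) {
    return rigCameras[0]; // mono rendering (phone AR)
  }
  // Stereo rendering: assumes index 0 = left eye, index 1 = right eye.
  return eye === "left" ? rigCameras[0] : rigCameras[1];
}
```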
And what are you trying to do with the data? Do you think you’d want access to the texture (before the scene is rendered on top of it), or are you looking for the RGB pixel data in an ArrayBuffer, or something else?
I need to pass the camera data without meshes to OpenCV (imported OpenCV as a Java module), so any of those will do. I would like to hear about all the options so I can test performance between them. What do you think will be optimal for performance in my case?
I think any potential options will depend on a few more details. When you say “imported OpenCV as a Java module”, do you mean you are using some React Native module for OpenCV? If so, can you point me to more info on this? If the image data is going from the React Native JS context to a native context, then how that image data is passed between those two contexts is important.
OK, I agree. I imported OpenCV following this tutorial: How to Use React Native & OpenCV for Image Processing, and created my own functions for processing images, which I call from JS. So yes, image data is passed from the JS side to the native side. From the data I create an OpenCV Mat to process it further.
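Since the classic React Native bridge cannot pass an ArrayBuffer directly, the usual workaround is to serialize the pixel bytes, most commonly as base64, and decode them into a Mat on the Java side. A hedged sketch (`frameToBase64` and the `processFrame` native call are hypothetical; `Buffer` is Node's API, so in React Native you would use a polyfill or a library such as base64-js):

```typescript
// Hypothetical helper: serialize raw RGBA bytes for the React Native bridge.
// Note: `Buffer` exists in Node; React Native needs a polyfill or base64-js.
function frameToBase64(data: ArrayBuffer): string {
  return Buffer.from(new Uint8Array(data)).toString("base64");
}

// The (hypothetical) native module call would then look something like:
// NativeModules.OpenCVProcessor.processFrame(frameToBase64(frame.data), width, height);
```

Be aware that base64 inflates the payload by roughly a third and the bridge copies the string, so for per-frame processing at interactive rates a JSI-based module with shared memory will likely outperform this; it is worth including in your performance comparison.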