iOS - BabylonNative. Method of rendering the camera feed into the background of the scene

I’ve been looking through the code to see what would be the best way to render the camera feed from the device into the background.
I’m thinking of getting the sample buffer from an AVCaptureSession and converting it to a Metal texture, then using that to create a bgfx texture handle (I’m not sure how to do this yet), and finally making a plane in the distance with a material bound to that bgfx texture.
Most of this would be done on the Swift or Objective-C side, and for my purposes this would always be the case (I could figure out how to add the bindings later). I would also retain access to the Metal texture handle so I can feed it to another module besides Babylon.
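To make the capture step concrete, here’s a rough sketch of what I have in mind for the Objective-C side (the class name and queue label are just placeholders, and error handling is omitted):

#import <AVFoundation/AVFoundation.h>

// Sketch: a minimal capture pipeline that delivers CMSampleBuffers on a
// background queue.
@interface CameraFeed : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation CameraFeed

- (void)start {
    self.session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [self.session addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // BGRA makes the later conversion to an MTLTexture straightforward.
    output.videoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self queue:dispatch_queue_create("camera.feed", NULL)];
    [self.session addOutput:output];

    [self.session startRunning];
}

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // This is where each frame's sample buffer would be converted to a Metal texture.
}

@end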

I found a JS playground that does this with the webcam: https://www.babylonjs-playground.com/#853M7X#12

If there’s native code that already does this, or if you have any recommendations, that would be greatly appreciated.

Pinging @Cedric

Hi @imbrig and welcome to the forum!

Yes, I think it’s the way to go.
Basically, create a plugin that exposes one function. This function creates the texture and returns a JavaScript texture object.
That’s what is done with nativexr:


Objective-C++ plugin example here:

And the JS object returned for a texture:

Feel free to share your progress/questions with us 🙂


@Cedric Thank you for the quick reply. Some things popped up, so I’ve only just been able to follow through on your advice.
I went ahead and shamelessly copied HttpRequestApple to my class NativeCameraApple and made two functions that should just send a string to Obj-C and to JS.


Unfortunately, I was not able to figure out the binding to JS. I think NativeCamera is the object that is exposed to JS, and somehow I need to add the hooks to babylon.max.js. Is it in a CMake script? How can I go about creating the bindings? Thank you in advance.

To extend JavaScriptCore with your own object, you have to declare the constructor. It’s done here:


And define the protocol:

And everything is magically exposed.
So, for NativeCamera, something like this:

- (void)extend:(JSGlobalContextRef)globalContextRef:(Babylon::JsRuntime*)runtime {
    _jsGlobalContextRef = globalContextRef;
    _runtime = runtime;

    // Wrap the raw JSGlobalContextRef so objects can be attached to it.
    JSContext *jsContext = _jsContext = [JSContext contextWithJSGlobalContextRef:globalContextRef];

    // Expose a constructor: calling NativeCamera() from JS returns a new native instance.
    jsContext[@"NativeCamera"] = ^{
        return [[NativeCamera alloc] init];
    };
}

Add a HelloWorld method that prints ‘HelloWorld’ and try to call it from JS (see the sketch below). If it works, then everything is done.
You don’t have to hook it into babylon.max.js. Just expose your object and use it in your scripts; babylon.max.js is the engine, and it exposes its own objects in JS.
I suggest you try to make it work by adding your NativeCamera .h/.mm to the Playground project.
Once it works, I can help you set up a CMake or Xcode project.
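For the HelloWorld test, the protocol side could look something like this (just a quick sketch; the protocol name is up to you):

#import <JavaScriptCore/JavaScriptCore.h>

// Sketch: methods declared in a JSExport protocol are exposed to JS automatically.
@protocol NativeCameraExports <JSExport>
- (void)helloWorld;
@end

@interface NativeCamera : NSObject <NativeCameraExports>
@end

@implementation NativeCamera
- (void)helloWorld {
    NSLog(@"HelloWorld");
}
@end

Then in your script, NativeCamera().helloWorld(); should print to the Xcode console.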

Thanks for the reply. I was missing the jsContext[@"NativeCamera"] = ^{ return [[NativeCamera alloc] init]; }; line. Once I added it, everything worked, and I was sending string messages from native to JS and back.

Now I’m stuck on the function that would return a Napi::External<TextureData>, a Napi::Value, or something similar to the JS side.
I have this function, getTextureFromObjC:


I already have a test MTLTexture created on the native side, and I’ve created a TextureHandle and passed it to overrideInternal. Now I just need to convert that bgfx::TextureHandle to a Napi::Value, or is there something I’m missing?
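Roughly what I’m attempting in the .mm, as a sketch only: TextureData below is my own stand-in struct, since the engine’s real one isn’t public.

#import <Metal/Metal.h>
#include <bgfx/bgfx.h>
#include <napi/napi.h>

// Stand-in for the engine's internal texture struct; the real TextureData
// lives inside NativeEngine and isn't exposed publicly.
struct TextureData
{
    bgfx::TextureHandle Handle{bgfx::kInvalidHandle};
};

// Sketch: wrap an existing MTLTexture in a bgfx handle and hand it to JS
// as an opaque external value.
Napi::Value CreateTextureValue(Napi::Env env, id<MTLTexture> metalTexture)
{
    // Create a placeholder texture, then override its backing store with
    // the native Metal texture.
    bgfx::TextureHandle handle = bgfx::createTexture2D(
        (uint16_t)metalTexture.width, (uint16_t)metalTexture.height,
        false, 1, bgfx::TextureFormat::BGRA8, BGFX_TEXTURE_NONE);
    bgfx::overrideInternal(handle, reinterpret_cast<uintptr_t>((__bridge void*)metalTexture));

    auto* textureData = new TextureData();
    textureData->Handle = handle;
    return Napi::External<TextureData>::New(env, textureData);
}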

Something is missing on the Babylon.js side in order to inject a native texture into a JS texture. I can handle that.
You should not rely on the bgfx texture; we might swap bgfx out for another renderer. This brings up another question: how do you fill that texture? Do you get a byte array, a Metal surface, or something else?
Depending on the simplest type for you here, I’ll build the corresponding interface.

For reference https://navoshta.com/metal-camera-part-2-metal-texture/
Basically, I would get the CMSampleBuffer from the AVCaptureSession and convert it to an MTLTexture. I still prefer (maybe others don’t) to create the MTLTexture (or a GL/Vulkan texture handle on other platforms) on the plugin side and just pass the converted handle object to JS when queried.
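The conversion itself would look roughly like this (sketched from that article; _textureCache is an ivar created once per device with CVMetalTextureCacheCreate):

#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>
#import <Metal/Metal.h>

// Sketch: convert a camera CMSampleBuffer into an MTLTexture via
// CVMetalTextureCache. _textureCache is assumed to have been created once:
//   CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &_textureCache);
- (id<MTLTexture>)textureFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    CVMetalTextureRef cvTexture = nil;
    CVReturn status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, _textureCache, pixelBuffer, nil,
        MTLPixelFormatBGRA8Unorm, width, height, 0, &cvTexture);
    if (status != kCVReturnSuccess || cvTexture == nil) {
        return nil;
    }

    id<MTLTexture> texture = CVMetalTextureGetTexture(cvTexture);
    // The MTLTexture shares storage with the pixel buffer, so in practice the
    // cvTexture/sample buffer must stay alive until the GPU has consumed the frame.
    CFRelease(cvTexture);
    return texture;
}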

I could still avoid passing the camera feed texture to JS and do what I need on the Obj-C plugin / C++ renderer side, but that would make the plugin dependent on the renderer, which loses the flexibility of swapping renderers out like you mentioned.

Thanks again for the reply.

There is an implementation of texture replacement using OpenGL in NativeXr that can be used as a reference: