Current state of secondary views/WebXR recording

I took a quick look at that tutorial and they are passing the image data from the React Native JS context to the native context over the classic React Native bridge by base64-encoding the image data to a string on the JS side and then decoding it on the native side. This will definitely not work for a real-time camera feed, as the React Native classic bridge is asynchronous and very slow. You could pass frames from JS to native in real time using JSI, which is part of the React Native TurboModule system. This is too complex to explain in a forum post, but I can point you to some other resources if you are interested.
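
For illustration, here is a minimal sketch of the bridge-based approach that tutorial takes. The module and method names (`FrameProcessor`, `processFrame`) are hypothetical and the `buffer` polyfill package is assumed; the point is that every frame pays for a base64 encode, an async serialized hop across the bridge, and a decode on the native side:

```ts
// Hypothetical classic-bridge approach (module/method names are made up).
import { NativeModules } from 'react-native';
import { Buffer } from 'buffer'; // React Native has no global Buffer; the 'buffer' polyfill is assumed

async function sendFrameOverBridge(frame: Uint8Array): Promise<void> {
  // Base64-encode the raw pixels into a string...
  const base64 = Buffer.from(frame).toString('base64');
  // ...then serialize it across the async bridge, where the native side must
  // decode it again before OpenCV can touch it. Far too slow per frame.
  await NativeModules.FrameProcessor.processFrame(base64);
}
```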

Also, are you actually wanting to do this for an XR session, or do you just need a regular camera feed? I’m not sure if you are just looking to combine Babylon React Native and OpenCV, or if you are specifically trying to use OpenCV with a Babylon React Native XR session.

First of all, thank you a lot for taking the time to look into it. At the moment real-time processing is not mandatory, but it would be much more user-friendly and helpful, so some blogs or documents about JSI would be great. A way of getting a base64 string from the camera feed without meshes would also be great, but any data format will do for now. I need to combine hit testing and plane detection with some light-to-medium image processing, so an XR session is mandatory.

Ok, I think there are two main challenges then:

  1. Access to the camera image from the JS context before Babylon renders the scene onto the render target texture of the frame buffer. I don’t think there is currently a way of hooking in at the right time. Any thoughts on this @bghgary?
  2. Assuming you have in hand the camera image data that has been copied out of a texture in graphics memory to, say, an ArrayBuffer in main memory, you then need an efficient way to pass it to native so OpenCV can process it, which is where JSI comes in (see the sketch after this list). This is probably where you will find the most up-to-date documentation and samples: Discussions · reactwg/react-native-new-architecture (github.com).
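
To make the JSI point concrete, here is a minimal sketch of the JS side, assuming a hypothetical C++ TurboModule has installed a synchronous host function on the JS global via JSI (the `nativeProcessFrame` name is made up for illustration):

```ts
// Hypothetical JSI binding: a C++ module is assumed to have installed this
// synchronous host function on the JS global at startup. JSI lets native code
// read the ArrayBuffer's backing store directly, so there is no base64
// encoding and no bridge serialization.
declare global {
  // Made-up name for illustration.
  var nativeProcessFrame:
    | ((pixels: ArrayBuffer, width: number, height: number) => void)
    | undefined;
}

function onCameraFrame(pixels: ArrayBuffer, width: number, height: number): void {
  if (globalThis.nativeProcessFrame) {
    // Synchronous call straight into C++; OpenCV can wrap the bytes in a
    // cv::Mat without another copy.
    globalThis.nativeProcessFrame(pixels, width, height);
  }
}

export {}; // keeps this file a module so the global declaration applies
```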

Not sure I have any answers. It sounds like we just need access to the camera directly (and thus this doesn’t involve XR at all)? Is that correct?

Hi @bghgary,
I need XR for hit testing and plane detection, which means the phone has to be in an XR session. So you are saying I can use the phone camera even while the phone is in an XR session? I didn’t manage this with WebXR; I will try it now. Or did you have some other (Babylon?) camera in mind?

Oh, I see. You need to be in an XR session and access the raw camera feed. Basically this then? I’m not sure which browsers (if any) support this yet. For Babylon Native, we can probably follow this spec to implement something, but it will be a feature request.
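
For reference, this is roughly what the JS side of that spec (the WebXR Raw Camera Access Module) looks like in a browser that implements it. The XR types are not in the standard TypeScript DOM lib, hence the `any` casts; treat this as an illustrative sketch, not production code:

```ts
// Sketch based on the WebXR Raw Camera Access Module draft spec.
async function startRawCameraSession(gl: WebGLRenderingContext) {
  const session = await (navigator as any).xr.requestSession('immersive-ar', {
    requiredFeatures: ['camera-access'], // feature descriptor from the spec
  });
  await (gl as any).makeXRCompatible();
  session.updateRenderState({
    baseLayer: new (window as any).XRWebGLLayer(session, gl),
  });

  const refSpace = await session.requestReferenceSpace('local');
  const glBinding = new (window as any).XRWebGLBinding(session, gl);

  const onFrame = (_time: number, frame: any) => {
    const pose = frame.getViewerPose(refSpace);
    for (const view of pose?.views ?? []) {
      // view.camera is only populated when camera access was granted.
      if (view.camera) {
        // Returns an opaque WebGLTexture holding the raw camera image; getting
        // the pixels into an ArrayBuffer still requires a GPU readback
        // (e.g. render to a framebuffer and call gl.readPixels).
        const cameraTexture: WebGLTexture = glBinding.getCameraImage(view.camera);
      }
    }
    frame.session.requestAnimationFrame(onFrame);
  };
  session.requestAnimationFrame(onFrame);
}
```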

Exactly what I need, though a snapshot without Babylon meshes would do too, if that’s possible. Do I need to submit a request or something? Is there anything I can do to help?

If it’s for Babylon Native or Babylon React Native, you can file a feature request issue in the respective repo, but I can’t say how fast it will be implemented. Would you be able to contribute?

It is for Babylon React Native, so OK, I will file a feature request issue. If the code were written in Clojure/Java/Python/Rust I would help, but I think it’s C++?
