Any estimates when Babylon Native might be production ready?

The project looks fascinating. I'm curious when it might be complete. I see the README mentions audio isn't supported yet. Fingers crossed!


We’re using it for a production, commercial iOS & Android app to be released next year. It has all the features we need, and although there are some performance and feature differences between Babylon.js and Babylon Native, it generally works as advertised, and we’ve been able to tailor our code to Babylon Native when necessary.

The thing we’re most looking forward to is support for React Native New Architecture and Concurrency as performance can be a challenge with UI and 3D sharing the same single thread.
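To illustrate the single-thread pressure, one common mitigation (a general workaround we sketch here, not a Babylon Native or React Native API) is to slice heavy batch processing into small time-budgeted chunks so rendering and UI callbacks can interleave with it. This is a minimal, hypothetical sketch; the function names and the 4 ms budget are assumptions, not anything from Babylon Native:

```typescript
// Time-budgeted chunk processor: handles `items` in slices so a long batch
// doesn't monopolize the single JS thread shared between UI and 3D.
// `schedule` is injectable (e.g. requestAnimationFrame in an app, or a
// synchronous queue in tests); `budgetMs` caps work per slice.
type Schedule = (cb: () => void) => void;

function processInSlices<T>(
  items: T[],
  handle: (item: T) => void,
  schedule: Schedule,
  budgetMs = 4,
  now: () => number = Date.now
): void {
  let index = 0;
  const slice = () => {
    const start = now();
    // Process items until the time budget for this slice is spent.
    while (index < items.length && now() - start < budgetMs) {
      handle(items[index++]);
    }
    if (index < items.length) {
      schedule(slice); // yield back to the event loop between slices
    }
  };
  schedule(slice);
}
```

In an app you would pass `requestAnimationFrame` as the scheduler; the injectable parameter just keeps the sketch self-contained.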



Hi @inteja, thanks for the reply! That sounds like a neat use case. I'm guessing it's more of an app and less of a game? I think the existing features are good enough, with the exception of the missing audio functionality.

Yes, our project is an app, not a game. We did require audio and would have liked to use Babylon Native for that as well, but since that feature isn't available yet, we use react-native-sound-player instead.


For those who want audio support, please drop a comment in this issue. The more people who ask, the more likely we are to implement it. :slight_smile:

Using Babylon Native for a game might be challenging on iOS (due to lack of JIT) depending on the game. See this for how to choose.


Hi @inteja

Do you mind sharing what scale of app you have in production? Like # of vertices and faces in the scene, # of draw calls, and maybe a sense of the scale of computation you run in JavaScript.

@slin it's difficult to provide specifics, but our application is for controlling commercial drones/robots, receiving and visualising real-time point cloud data from a LiDAR scanner attached to the payload, as well as a real-time first-person video feed. There are a lot of telemetry messages and data being received and processed while a mission is running, so the load on the device is substantial, especially in a single-threaded React Native environment.

It depends on the device, but we can display a minimum of 1 million colorised points with LODs and post-processing effects. We generally try to keep draw calls in the low 100s. We have some meshes with faces (like the drone model, grid, icons, paths etc.), but most of the overhead is the number of points in the point clouds, which can rise quickly. On recent devices like the M2 iPads, we can handle a lot more without frame-rate drops.
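To make the LOD idea concrete, here is a hypothetical sketch of distance-based LOD selection for point-cloud tiles: farther tiles get a sparser level so the total rendered point count stays bounded. The thresholds, level counts, and function names are purely illustrative assumptions, not our production values or any Babylon API:

```typescript
// Hypothetical distance-based LOD pick for point-cloud tiles.
interface LodLevel {
  maxDistance: number;  // tiles at or below this camera distance use this level
  keepEveryNth: number; // decimation factor: keep 1 of every N points
}

const LOD_LEVELS: LodLevel[] = [
  { maxDistance: 10, keepEveryNth: 1 },        // full density up close
  { maxDistance: 50, keepEveryNth: 4 },        // quarter density mid-range
  { maxDistance: Infinity, keepEveryNth: 16 }, // sparse in the distance
];

function pickLod(distance: number, levels: LodLevel[] = LOD_LEVELS): LodLevel {
  for (const level of levels) {
    if (distance <= level.maxDistance) return level;
  }
  return levels[levels.length - 1];
}

// Decimate a flat [x, y, z, x, y, z, ...] buffer to the chosen level.
function decimate(xyz: Float32Array, keepEveryNth: number): Float32Array {
  const pointCount = Math.floor(xyz.length / 3);
  const kept = Math.ceil(pointCount / keepEveryNth);
  const out = new Float32Array(kept * 3);
  for (let i = 0, j = 0; i < pointCount; i += keepEveryNth, j++) {
    out[j * 3] = xyz[i * 3];
    out[j * 3 + 1] = xyz[i * 3 + 1];
    out[j * 3 + 2] = xyz[i * 3 + 2];
  }
  return out;
}
```

The decimated buffer would then be handed to the point-cloud mesh; the point of the structure is that density, and therefore frame cost, degrades gracefully with distance.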