What are the options for live remote scene synchronization, either between two browsers or between a browser and a Node.js server running a NullEngine instance?
After some research I found only answers suggesting a third-party game networking library like Real-time Multiplayer with Colyseus | Babylon.js Documentation, or plain WebSockets. I assume either would require implementing every possible change that can happen to scene objects and their properties as a separate command/function to be transferred, which would be very time consuming and prone to breaking on engine API changes.
I've seen some integrations of Babylon.js and Blender (e.g. A blender addon for babylonjs scene viewing - #5 by Heaust-ops) that either use the above approach or store, transfer, and restore the scene in glTF format, which is a performance nightmare.
I am exploring whether Reflector could be used for this, but so far I haven't managed to get it working.
If it's one-way, one browser can broadcast and another browser can watch it (a bit of a trick where only the rendered output is transmitted as media over WebRTC).
If it's two-way, you need a NullEngine instance behind a socket acting as the authority: it receives the list of changes or event types over the socket, pushes the resulting state to each connected user's browser to be rendered individually, and when a user performs an operation, it is sent through the socket, processed there, and applied for everyone.
As labris said, you need to create an observer for each action, and possibly put constraints on the actions beyond that.
Alternatively, it seems possible to connect directly in a P2P fashion and transfer each value to copy it, but the problem is that timing errors seem to occur more often than with sockets.
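To avoid writing one observer per property, one generic option is to wrap mutable objects in a `Proxy` that records every property write as a serializable command. This is a minimal sketch in plain JavaScript; `makeSyncProxy` and the command shape `{ id, prop, value }` are my own hypothetical names, not a Babylon.js API, and a plain object stands in for a mesh:

```javascript
// Sketch: intercept property writes generically instead of hand-writing
// a handler for every possible change. Hypothetical helper, not Babylon.js API.
function makeSyncProxy(target, objectId, outbox) {
  return new Proxy(target, {
    set(obj, prop, value) {
      obj[prop] = value;
      // Record a serializable "command" describing the change.
      outbox.push({ id: objectId, prop: String(prop), value });
      return true;
    },
  });
}

// Usage with a plain object standing in for a mesh:
const outbox = [];
const box = makeSyncProxy({ x: 0 }, "box1", outbox);
box.x = 3;
// outbox now holds one command: { id: "box1", prop: "x", value: 3 }
```

The outbox can then be drained and sent over any transport (WebSocket, WebRTC data channel). The limitation is that only direct property assignments are caught; method calls that mutate internal state would still need explicit observers.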
Thank you both for your replies. The suggested approach based on observers with WebRTC transfer of payloads is certainly aligned with what I'd like to achieve. On the other hand, my biggest gap is whether this can be implemented in a generic way, so I don't need a separate handler for every possible change. E.g. is there already a way to take an observer callback payload, transfer it to the other end (e.g. via WebRTC), and then on the other side just let the engine interpret it without manual handling? Is there a way to subscribe to "everything" with a single line? Furthermore, is there already a way to batch payloads from dozens of observers triggered during a certain period of time? Or do I need to implement all of this myself? In other words, the transfer itself is not a problem; the problem is the interaction with the engine on both ends, before and after the serialization of the "commands".
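For the batching part of the question, a small coalescing queue that flushes on an interval is one common pattern. This is only a sketch under my own assumptions: `CommandBatcher` is a hypothetical name, the transport is abstracted as a `send` callback, and commands are assumed to be `{ id, prop, value }` objects:

```javascript
// Sketch: batch many observer payloads into one message per tick.
// Hypothetical helper; the transport (WebRTC data channel, WebSocket)
// is represented by a send callback.
class CommandBatcher {
  constructor(send, intervalMs = 50) {
    this.send = send;
    this.intervalMs = intervalMs;
    this.queue = [];
  }
  push(cmd) {
    // Coalesce: keep only the latest value per (id, prop) pair,
    // so rapid-fire observer callbacks don't flood the channel.
    this.queue = this.queue.filter(
      (c) => !(c.id === cmd.id && c.prop === cmd.prop)
    );
    this.queue.push(cmd);
  }
  start() {
    this.timer = setInterval(() => this.flush(), this.intervalMs);
  }
  stop() {
    clearInterval(this.timer);
  }
  flush() {
    if (this.queue.length === 0) return;
    this.send(JSON.stringify(this.queue));
    this.queue = [];
  }
}

// Usage: two writes to the same property coalesce into one command.
const sent = [];
const batcher = new CommandBatcher((msg) => sent.push(msg));
batcher.push({ id: "box1", prop: "x", value: 1 });
batcher.push({ id: "box1", prop: "x", value: 2 });
batcher.push({ id: "box1", prop: "y", value: 5 });
batcher.flush();
```

Coalescing by `(id, prop)` means intermediate values are dropped, which is usually acceptable for continuous properties like position but not for discrete events; those would need a separate, non-coalescing queue.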
As far as I know, there is no such "single line". The requirements for such a "subscribe to everything" hook are ambiguous (the format varies depending on how the commands are used).
For example, if you create an engine on a site called A and allow the code itself to be modified (close to live coding), you can actually synchronize it by periodically saving the entire code and having other users periodically load it.
Unlike case 1, if there is a specific transformation or event on site A (say it's a car-painting configurator), then you need to issue a "command" to synchronize each event.
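For the command-per-event case, the receiving side needs an interpreter that looks the target up by id and applies the change. A minimal sketch, assuming commands shaped like `{ id, prop, value }`; with Babylon.js you might resolve targets via something like `scene.getMeshById`, but here a plain `Map` stands in for the scene:

```javascript
// Sketch: the receiving side interprets serialized commands generically.
// Hypothetical helper; a Map stands in for the scene's object registry.
function applyCommands(registry, json) {
  for (const cmd of JSON.parse(json)) {
    const target = registry.get(cmd.id);
    if (!target) continue; // object not (yet) present on this side
    target[cmd.prop] = cmd.value;
  }
}

// Usage: apply a remote change; unknown ids are skipped safely.
const registry = new Map([["box1", { x: 0 }]]);
applyCommands(
  registry,
  JSON.stringify([
    { id: "box1", prop: "x", value: 7 },
    { id: "ghost", prop: "y", value: 1 },
  ])
);
```

This keeps the wire format engine-agnostic: only the lookup and assignment step touches the engine, so engine API changes affect one function instead of every handler.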
If you want to complement case 1, consider also syncing camera movement and mouse movement with observers. Figma's web app comes to mind as an example.
Also, if you manipulate something through the Inspector, it will be difficult to track.
Thanks. Yes, live coding, namely live vibe coding through an MCP server running NullEngine, is my intended use case. Syncing the entire code for every change would have performance implications I want to avoid, so it should be incremental.
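A naive way to make the full-code sync from case 1 incremental is to diff consecutive snapshots and send only changed lines. This is just a sketch of the idea (a real implementation would likely use an existing diff library or CRDT); `diffLines` and the patch shape are hypothetical:

```javascript
// Sketch: naive incremental sync of a code buffer, sending only the
// lines that changed since the last snapshot. Hypothetical helper,
// not an MCP or Babylon.js API; text === null marks a deleted line.
function diffLines(prev, next) {
  const a = prev.split("\n");
  const b = next.split("\n");
  const patch = [];
  const max = Math.max(a.length, b.length);
  for (let i = 0; i < max; i++) {
    if (a[i] !== b[i]) patch.push({ line: i, text: b[i] ?? null });
  }
  return patch;
}

// Usage: only line 1 changed, so only line 1 is transferred.
const patch = diffLines("const a = 1;\nconst b = 2;", "const a = 1;\nconst b = 3;");
```

A line-indexed diff breaks down when lines are inserted or removed in the middle (everything below shifts), which is why production tools use anchored diffs or operation logs instead; the sketch only shows why the payload can be much smaller than the full source.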
Maybe it's acceptable to start a screen capture to transfer the bitmap from PC 1 to PC 2, capture mouse and keyboard events on PC 2, and send them back to PC 1 to simulate them there.
(Or just use VNC)
Yes, I am going to use this approach to embed a full-featured, vibe-coding-capable IDE running on a PC inside a VR session on the headset. Nevertheless, the question is rather about communication and scene state synchronization between the MCP server running on that PC and the Babylon.js engine running on the headset, rendering everything the user sees.