Running on server

Hi Team,

I have just completed a game which is a bit resource-hungry due to multiple texture atlases being set. I have optimised as much as possible according to the guidelines available in the docs. I was wondering if it’s possible to render the game on a server and stream the complete scene to the user, and to make it interactive so the user can click around while the real thing runs on the server.

Is that possible?

I have read about NullEngine, but I am not sure if I am going in the right direction. Could someone please help? Thanks

It is possible, but not ideal:
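NullEngine runs your scene logic headlessly on the server, but it does not rasterize anything, so on its own it can’t produce frames to stream. A minimal server-side sketch, assuming the `babylonjs` npm package:

```js
// Node.js: run a Babylon scene with no GPU or canvas attached
const BABYLON = require("babylonjs");

const engine = new BABYLON.NullEngine();
const scene = new BABYLON.Scene(engine);

// a camera is still required for scene.render() to run
const camera = new BABYLON.ArcRotateCamera(
  "cam", 0, 0.8, 10, BABYLON.Vector3.Zero(), scene
);
const sphere = BABYLON.MeshBuilder.CreateSphere("sphere", { diameter: 2 }, scene);

// scene logic (animations, physics, observables) ticks normally,
// but no pixels are produced anywhere
engine.runRenderLoop(() => scene.render());
```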


I think he means pixel streaming, like sending the rendered frames to a video or map element and forwarding user events to the server.

This thread has some good info for a downlink setup.

Ideas on beefy offline render? - #6 by brunobg

For uplink, you will have to forward events from the client to the server.

Getting this working would be awesome, because you could use Babylon without the arbitrary limits imposed by Chromium. Uber is doing something like this with their GIS analytics.

This could be worth checking out, as a kind of replacement for Puppeteer: https://github.com/node-3d/glfw-raub

To complement my previous answer, which was to a slightly different question: pixel streaming is possible these days. WebRTC is “fast enough” for some definitions of enough. There are a bunch of third-party services that handle this for you: you upload your app and they handle the streaming (video down, key/mouse events up). It’s a great solution for running something sophisticated or intensive when you have no control over your client, but it is of course quite expensive. You could handle it yourself, but unless you are running either a single client at a time or something like a million of them, it’s not worth doing all the devops and scaling on your own.

WebTransport shipped, w00t

This site is useful

Any open source? I would like to run a desktop version. As in, my PC to my browser.

I’m not aware of any open source packages. If you want to do the PC → browser thing though, it’s not too difficult (but quite annoying). You essentially need to write a small bit of JS that captures the screen (getDisplayMedia), and then you can broadcast it to your target browser. Take a look at Peer-to-peer communications with WebRTC - Developer guides | MDN. Handling the key/mouse events is harder. As far as I know there’s no easy way to do it. You’ll need a server to send the events to, and that server will need some OS calls to forward the events to the application. It’s not worth the hassle for personal use; just use some VNC-like app instead.
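The capture side really is only a few lines. A minimal sketch (the peer-connection wiring is an assumption, and you still need some signalling channel of your own to exchange the offer/answer):

```js
// inside an async function: prompt the user to pick a window or the desktop
const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });

// feed the captured tracks into a WebRTC peer connection
// (offer/answer signalling not shown)
const pc = new RTCPeerConnection();
stream.getTracks().forEach((track) => pc.addTrack(track, stream));
```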

I mean browser on the same device. Like running a local only stadia background service

If you’re on the same device, getDisplayMedia() will work and get you a stream of any window or the desktop. Very easy to implement. But it won’t send keystrokes or mouse events.

I’m doing such a bad job at describing things, sorry. I’m thinking of something like Node.js + Babylon, where Babylon’s engine context uses some GUI-less, non-Chromium context like headless-gl or GLFW. Then read from that context into ffmpeg and send it to the browser into a video element. The purpose is to get around Chromium’s resource limits without having to recompile Chromium. So, not a web app; desktop only. Maybe a GLFW window with JS bindings and CEF embedded would be better, i.e. put Chromium in as a child instead of the parent. Hmm.
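For the “read from that context into ffmpeg” leg, the Node side might look something like this (a sketch; resolution, frame rate, codec settings, and the UDP target are all assumptions):

```js
const { spawn } = require("child_process");

// encode raw RGBA frames from stdin into low-latency H.264 over MPEG-TS/UDP
const ffmpeg = spawn("ffmpeg", [
  "-f", "rawvideo", "-pix_fmt", "rgba", "-s", "1280x720", "-r", "30",
  "-i", "-",                               // frames arrive on stdin
  "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
  "-pix_fmt", "yuv420p",
  "-f", "mpegts", "udp://127.0.0.1:5000",
]);

// after each render, write the frame's raw RGBA bytes
// (e.g. the buffer returned by gl.readPixels)
function sendFrame(frameBuffer) {
  ffmpeg.stdin.write(frameBuffer);
}
```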

I see what you want. Well, WebRTC would work for you if you can get it into your render process. The setup would be something like this:

  1. The headless rendering process starts: it begins rendering and brings up WebRTC.
  2. Your client browser connects through WebRTC to the rendering process. Since you’re on the same computer, just connect to localhost; you don’t need STUN/TURN, and a trivial offer/answer exchange (even over a localhost WebSocket) is enough to establish the P2P connection.
  3. Since you have control of both ends, you can capture key/mouse/etc. events in the browser and send them to the rendering process through a WebRTC data channel (sketched below). I never tested how fast this would be, but a quick search suggests that, apart from a few glitches, it should be pretty fast.
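The browser side of step 3 could look roughly like this (a sketch; `videoEl` is assumed to be the video element showing the incoming stream, and the event format is made up):

```js
// browser: forward input events to the render process over a data channel
const pc = new RTCPeerConnection(); // no iceServers needed on localhost
const input = pc.createDataChannel("input", { ordered: true });

function send(event) {
  if (input.readyState === "open") input.send(JSON.stringify(event));
}

window.addEventListener("keydown", (e) => send({ type: "keydown", key: e.key }));

// videoEl is the <video> element playing the stream (assumed to exist)
videoEl.addEventListener("mousemove", (e) =>
  send({ type: "mousemove", x: e.offsetX, y: e.offsetY })
);
```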

But I don’t really get the advantage of running a headless browser-like app in the background and transmitting it back to a browser on localhost.

Just dreaming really, but it could let you use the full power of your device instead of being constrained to 4 GB of RAM, sanitized GPU I/O, yielding async time to browser extensions, etc. Godot is kind of architected like this, by the way. I don’t know if it matters, but Chrome just shipped WebTransport in v97. It’s like WebRTC without STUN, and it works in workers. I was doing some research on how localhost/loopback works on Windows, and it seems to be very fast; even TCP gets handled directly in the OS. Maybe WebRTC/WebTransport wouldn’t have any benefit on localhost. That would be nice.
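For reference, the browser side of WebTransport is tiny (a sketch; the URL is an assumption, and it needs an HTTP/3 server that speaks WebTransport on the other end):

```js
// connect to a local WebTransport endpoint
const transport = new WebTransport("https://127.0.0.1:4433/input");
await transport.ready;

// datagrams are unordered and unreliable, which suits high-rate input events
const writer = transport.datagrams.writable.getWriter();
await writer.write(
  new TextEncoder().encode(JSON.stringify({ type: "mousemove", x: 10, y: 20 }))
);
```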

I read about WebTransport being released. Looks great, even though it’s Chrome-only right now. I don’t know how easy it’d be to integrate into a non-web application.

But again, if you are rendering on the same computer and you can run native, I don’t see the point of sending the stream to a browser. Can you point me to the Godot docs about this? I know it supports WebRTC, which makes it easier to do the sort of pixel streaming we are discussing.

Pixel streaming is useful, particularly when you need to render big scenes and for some reason you don’t want to do that locally (you don’t want to distribute the code or even the binaries, you want to see results even when you’re not using a powerful client, or your data is so huge it’s not reasonable to transfer it around). If it’s all on the same machine though, I fail to see why it’d be useful (but I would love to understand why).

That said, I wish browsers were a better platform for 3D. JS threading models are almost useless for 3D, and XR is arriving so slowly. At least WebGPU is on the way.

Introduction to Godot development — Godot Engine (stable) documentation in English

I read a blog post from the creator some time ago about it. The server layer was designed to be the abstraction layer.

if you are rendering in the same computer and you can run native, I don’t see the point of sending the stream to a browser

Well, when I say browser I really mean a web view. I’m assuming sending the GPU output to a video element has very little overhead, so it’s kind of a best-bang-for-your-buck solution. Another option would be to recompile Chromium with some added API surface and remove some of the constraints, but that is on another level of unapproachable. I don’t even know if you can legally do that, because of the licenses for underlying third-party stuff like Cisco’s network stack.