Has anyone successfully used web workers for a game world, model, physics, networking, AI, etc?
I know there is the BJS web worker implementation for collisions, but what I was looking at was offloading everything but the rendering (visual representation) of the game to web workers.
I know it’s different from traditional threading, too: data is copied between threads, so messaging can be an expensive operation. So I’m asking for any community wisdom.
Has anyone done it?
What patterns would you think to use to break up the workers and copy minimal data between them?
I don’t think any modern devices are single-core these days, so it makes sense to use at least one extra core, right? Maybe just two workers would be optimal: one for rendering, one for everything else?
I’ve used web workers for terrain generation with limited success. The processing needs, at least when I tried it (2 years ago), have to be pretty large before the worker is worthwhile. Likewise, if the worker produces too much data that needs to return to the main thread, it’ll choke on the way back. Perhaps if one could keep that data on the worker, or bring it back in tiny pieces, one could fully avoid any hitches. JavaScript has improved a lot over the last 2 years though, so perhaps I should revisit web workers…
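For what it’s worth, transferable ArrayBuffers avoid the copy on the way back. A minimal sketch, assuming the worker builds the heightmap into a typed array (file names and the noise stand-in are mine):

```js
// terrain-worker.js — build the heightmap into a typed array,
// then transfer the buffer back instead of cloning it.
self.onmessage = (e) => {
  const { size } = e.data;
  const heights = new Float32Array(size * size);
  for (let i = 0; i < heights.length; i++) {
    heights[i] = Math.random(); // stand-in for real noise generation
  }
  // The transfer list moves ownership of the buffer (zero-copy);
  // the worker loses access to it after this call.
  self.postMessage(heights.buffer, [heights.buffer]);
};

// main.js
const worker = new Worker('terrain-worker.js');
worker.onmessage = (e) => {
  const heights = new Float32Array(e.data); // wraps the moved buffer, no copy
  // ...build or update the terrain mesh from heights...
};
worker.postMessage({ size: 256 });
```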
I wouldn’t recommend it for networking. I say this just because a nicely networked websocket game using binary is very cheap on the CPU.
Also, the web worker, regardless of task, introduces the need for the event loop to tick at least once before data can be used, which can be difficult to work around. For example, if we send off a task on frame #4 of our game, it doesn’t matter if the web worker finishes the task in a nanosecond; we won’t hear back from it until the JavaScript engine ticks again, which puts us into frame #5 (or later) before we have the data. This ever-present delay of one frame pretty much relegated web workers to non-realtime work for me. I write quite a bit of performance code, and I’ve always found an algo or a data structure that can get me whatever I need within a single frame.
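To make that concrete, a rough sketch of the round trip in a requestAnimationFrame loop. Even if the worker answers instantly, `onmessage` can only run after the current update returns, so the data is first usable on a later frame (the worker file is hypothetical):

```js
let frame = 0;
const worker = new Worker('task-worker.js'); // hypothetical worker

worker.onmessage = (e) => {
  // Runs on a later event-loop tick, after the update that
  // posted the task has already returned.
  console.log(`task from frame ${e.data.frame} usable from frame ${frame + 1}`);
};

function update() {
  frame++;
  if (frame === 4) worker.postMessage({ frame }); // sent during frame #4
  // ...game logic; the reply cannot be handled inside this call,
  // because the JS engine is single-threaded per event loop...
  requestAnimationFrame(update);
}
requestAnimationFrame(update);
```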
Maybe someone else has had more success and will chime in. For me I found them plausibly viable for large delayed processing tasks, and fairly abysmal for everything else.
I was wondering how it would go synchronising back to the render thread.
Sounds like it will be a lot of trouble to try and get it all just right.
And that frame delay between any worker messages would be a killer! I hadn’t considered that. I just assumed it would fire the messages between workers as soon as possible, and there would be multiple “ticks” between frames rendered.
Well it is true that there are multiple ticks of the underlying event loop between the ticks of the game loop. Like if we’re using requestAnimationFrame or setInterval and running our game logic at 60 fps, these ticks are ~16 ms apart. Meanwhile things like mouse/keyboard input, websocket messages, and these worker thread replies are occurring in the idle time in between game updates. It is just that they are async, so the data isn’t available during the same macro game tick that spawned the work.
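One pattern that follows from this (a sketch of my own, not something from the posts above): let the async replies land on those idle ticks and consume whatever has arrived at the top of the next game update.

```js
// Replies land on idle event-loop ticks between game updates;
// the next update picks up whatever has arrived so far.
const worker = new Worker('sim-worker.js'); // hypothetical
let latestWorkerState = null;

worker.onmessage = (e) => {
  latestWorkerState = e.data; // keep only the newest state
};

function gameTick() {
  if (latestWorkerState) {
    applyState(latestWorkerState); // hypothetical helper
    latestWorkerState = null;
  }
  // ...input, logic, render...
  requestAnimationFrame(gameTick);
}
requestAnimationFrame(gameTick);
```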
So I wonder if you could split the game loop into several asynchronous tasks. Really depends on the work being done if it would be of any benefit I suppose.
I’m not really familiar enough with how multi threaded game engines usually are architected.
We had a version computing collisions on workers for a while. Unfortunately, the communication and synchronization often bring more pain than good at the moment.
I wish we could have all of that in the browsers in an easy to use and consistent way.
I’m using a single worker for native AmmoJS physics. I chose to do so to separate the render loop from the physics loop, in case I need a more dynamic world down the line, where I’ll need all the performance possible. A worker is indeed nice for not blocking the render thread, but it does come with some drawbacks. One is of course the added complexity of the code, since you need your physics to run independently of the rest of your world, and therefore have to keep track of both the visible meshes and their corresponding physics bodies.
Another, and possibly the biggest, issue is that the communication between the main thread and the worker thread isn’t quite real-time. I messaged BitOfGold about this issue on the old forum, since I read that he had implemented worker physics as well. Whenever I try to measure message delivery time, it seems to work at a steady 16 ms; however, updating the meshes with new information still seems jittery. His conclusion was that some kind of buffering happens to the messages. I’m simply using setTimeout or setInterval on the worker thread to step the physics and then send the updated state to the main thread.
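The worker side of that setup can be as simple as this sketch. `stepSimulation` is the real Bullet/Ammo.js API; `world` is assumed to be a btDiscreteDynamicsWorld already set up in the worker, and the packing helper is mine:

```js
// physics-worker.js — step the simulation at ~60 Hz and post
// the resulting transforms back to the main thread.
const STEP_MS = 1000 / 60;

setInterval(() => {
  // stepSimulation(deltaTime, maxSubSteps); assumes `world` was
  // created earlier in this worker during Ammo initialization.
  world.stepSimulation(STEP_MS / 1000, 1);
  const state = packBodyTransforms(); // hypothetical: positions + quaternions
  postMessage(state); // structured clone unless you transfer a buffer
}, STEP_MS);
```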
SharedArrayBuffers are slowly coming back (see the support tables on caniuse.com).
This doesn’t prevent the jitter completely, for some reason unknown to me, but it makes it less noticeable. All you can do to smooth it out is interpolate by some amount. This has the disadvantage of making the simulation seem a bit less accurate, depending on the interpolation amount.
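Something like this per render frame, assuming each mesh has a `rotationQuaternion` set and `target` holds the last transform received from the worker; `alpha` trades smoothness against how far the visuals lag the simulation:

```js
// Ease each mesh toward its latest physics transform every render frame.
const alpha = 0.2; // lower = smoother but laggier behind the simulation

function smoothToPhysics(mesh, target) {
  mesh.position = BABYLON.Vector3.Lerp(mesh.position, target.position, alpha);
  mesh.rotationQuaternion = BABYLON.Quaternion.Slerp(
    mesh.rotationQuaternion,
    target.rotation,
    alpha
  );
}
```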
So when you create your worker, check (a rough detection sketch follows the list):
- If SharedArrayBuffer is supported, use it.
- If not, check whether transferable arrays are supported; if they are, use them.
- If not, fall back to postMessage with object cloning.
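Here is one way to detect that fallback chain. The transfer probe is a common trick: a successfully transferred buffer is detached, so its byteLength drops to 0. Note that newer browsers additionally gate SharedArrayBuffer behind cross-origin isolation, so the first check may need tightening:

```js
// Pick the best transport the browser supports (sketch).
function pickTransport() {
  if (typeof SharedArrayBuffer !== 'undefined') {
    return 'shared'; // both threads read/write the same memory
  }
  // Probe transferables: posting with a transfer list should
  // detach the buffer instead of cloning it.
  const probe = new ArrayBuffer(1);
  try {
    new MessageChannel().port1.postMessage(probe, [probe]);
  } catch (err) {
    /* transfer lists unsupported */
  }
  return probe.byteLength === 0 ? 'transfer' : 'clone';
}
```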
Did you measure that message delivery time in a Babylon.js project or as a standalone test? It’s just so odd that it matches the 60 fps frame rate exactly. I was imagining that to get a good sync, the main game loop would need to be split into several asynchronous phases, to prevent it being locked for each render step and having to wait until the next step. I’m not sure if that would actually improve the responsiveness though.
I’m sure there has gotta be a winning formula though :).
Or, we just wait until WebAssembly can access the WebGL context directly and can be threaded, and all of this becomes redundant.