I was planning to post this once my code was done, but with the great Golden Paths for Babylon.js! thread that was posted today, I decided to post a little early, since this might become one of the golden paths.
So I wrote a SCUBA dive simulator using Babylon. The simulator is ALMOST finished; there's still some work to do adding fish schools and fixing a couple of issues, and I'll post it to the projects category once it's done. It's a pet project meant to bring visibility to a beautiful and historically relevant marine park that deserves to be better known and visited; you can read about it on this website. Babylon is used all through the site. Pretty much every page has a BJS component, from the landing page ocean effect to 3D models of fish.
There's a whole post on that site about the code and how everything was implemented, for those curious about it. It includes a lot of Babylon material, and could perhaps become part of one of the golden paths mentioned here.
But I wanted to write a post-mortem specifically about the experience of using Babylon for that project: what worked and what didn't. Fair warning: this post is basically a list of IMHOs (in my humble opinion, or not so humble). It's a personal tale and a comparison with my experience of building 3D stuff over many years.
This project started a while ago, and the pandemic paused it – which is why I made a few posts and contributions last year, then disappeared for a while.
First of all, let me start with the reason I picked BabylonJS over ThreeJS: animations didn't work in ThreeJS when I was doing post-processing multi-pass rendering for caustics. Perhaps this bug has been fixed after all this time, but essentially the second pass, with a different material, wouldn't apply bones. Back then, after writing some ThreeJS code for the project and deciding the bug wasn't trivial to fix (details are fuzzy; I think I filed an issue in their GH at the time), I decided to give BabylonJS a try, and things worked well. I had previous experience with BabylonJS, but from loooong ago.
Well, it turned out that multi-pass rendering was not so easy to do in BJS either, but I got a lot of help in the forum and any issues I hit were immediately fixed. This is one thing I want to say, and in bold: the community here is awesome. I've contributed to several OS projects over the years and this is by far the most welcoming one. Bugs are immediately taken care of, maintainers apparently never sleep and are always answering questions here in the forum, and code contributions are welcome and carefully reviewed, with all the help you need to fix the problems and then some. Thank you, particularly to @Evgeni_Popov (who is not only my Personal Babylon Support Person, but also a constant, amazing help with my PRs), and also to @sebavan and @deltakosh. You all rock. Thanks for the patience and help.
For someone who has used a ton of 3D engines over the years, Babylon is a breeze. The documentation is excellent, and the playground makes things easy for a newcomer to test. There are plenty of practical examples, and the forum works very well as a dynamic form of documentation. Technically, Babylon also does something that I really value in frameworks: it mostly stays out of your way, and if you need to go around it to do something unexpected or low-level, you can almost always find a way to do it. I never needed to drop down to raw WebGL code, and I got around almost everything I needed – with one exception that I'll mention below.
The first shock I had with Babylon, however, was its default performance. The same scene from ThreeJS, naively converted to BabylonJS, was much slower. I know the docs cover basic optimizations pretty well, but following them ends in a lot of freeze calls all over the place. I missed having a "by default, everything is frozen" setting. I understand the choice, but it certainly results in people thinking "how slow" and never reading the docs to understand why. There are lots of internal tools to get around the main performance issues (thin instances, SPS, etc.) that were very helpful, but it takes some diving to find them (like realizing, oh, there are thin instances and not only instances). Somehow the optimizer never did a significant job for me, so sprinkling the code with freezes and merges was part of the job. But what I really miss is a higher-level API, something like mesh.completelyStatic(), which would freeze everything for me: mesh, material, normals, matrices.
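To give an idea, here is roughly what such a helper could look like. The individual calls are real Babylon.js APIs; the helper itself (and its name) is my own bundling, not something the engine provides:

```javascript
// Hypothetical helper: freeze everything that can be frozen on a fully
// static mesh. Each call below is an existing Babylon.js API; bundling
// them like this is the convenience I'm wishing for.
function makeCompletelyStatic(mesh) {
  mesh.freezeWorldMatrix();          // never recompute the world matrix again
  mesh.freezeNormals();              // skip normal recomputation
  mesh.doNotSyncBoundingInfo = true; // stop resyncing bounding info
  mesh.isPickable = false;           // skip picking checks for this mesh
  if (mesh.material) {
    mesh.material.freeze();          // compile material defines once and keep them
  }
}
```

Called once per static mesh after setup, something like this would cover most of the freeze sprinkling I ended up doing by hand.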
Once I got past that, I was back to the usual "JS has a crappy multithreading model and this is the root of all slowness" problem I was used to. I'm glad that WebGPU will apparently help with this, and that it's finally coming into production browsers soon. But I kept bumping into "can't do that on the CPU because it'll be too slow", and adding code that suddenly dropped my frame rate, all through the project, always wondering if it was a missed freeze call or a more serious problem. This meant finding some creative ways to handle things and constantly running checks (and cursing Firefox, which slows down after several page reloads, leaking who knows what). This is very boring: finding out why there are too many draw calls and what could have been frozen but wasn't. The new performance profiler seems like a giant step forward, particularly if it can help pinpoint bottlenecks at a higher level so they're easy to fix. I'm looking forward to it, because profiling and optimization are normally boring, and BJS doesn't help much today. My go-to way to find bottlenecks is commenting out large chunks of code and seeing what slows things down the most. And my phone has very different bottlenecks from my desktop, which makes this painful. A better model for handling platforms is also something I miss: my phone is GPU-bound, my desktop CPU-bound. As I optimize, I'm starting to consider LOD tied to GPU performance, for example – so annoying to implement.
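For what it's worth, the core of a performance-driven LOD policy can be as simple as a smoothed frame-time average plus some hysteresis so the level doesn't flip-flop. This is a hand-rolled sketch (not a Babylon.js API), with made-up thresholds:

```javascript
// Sketch: choose an LOD level from measured frame times instead of distance.
// Level 0 = full detail, levels-1 = cheapest. Thresholds are illustrative.
class FrameTimeLod {
  constructor(targetMs = 16.7, levels = 3) {
    this.targetMs = targetMs;
    this.levels = levels;
    this.level = 0;
    this.avgMs = targetMs; // exponentially smoothed frame time
  }
  // Call once per frame with the last frame's duration in milliseconds.
  update(frameMs) {
    this.avgMs = 0.9 * this.avgMs + 0.1 * frameMs;
    if (this.avgMs > this.targetMs * 1.25 && this.level < this.levels - 1) {
      this.level++;               // consistently too slow: drop detail
      this.avgMs = this.targetMs; // reset so we don't skip several levels at once
    } else if (this.avgMs < this.targetMs * 0.6 && this.level > 0) {
      this.level--;               // plenty of headroom: restore detail
      this.avgMs = this.targetMs;
    }
    return this.level;
  }
}
```

Feeding this the engine's per-frame delta each frame, and swapping meshes (or thin-instance counts) when the level changes, is the kind of thing I ended up wiring by hand.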
Still on the performance front, I was happy to contribute the Vertex Animation Texture code, though it was only possible thanks to the PoC @raggar originally wrote and the EXTENSIVE code review from the maintainers, particularly @Evgeni_Popov, who was way more relevant to that PR than I was. I can barely take any credit for it. Thank you so much, again!
The limitation I mentioned above was "how to patch shaders". You have a material that you want to change slightly, but BJS didn't provide a way to modify its shader code. I ended up doing a multi-pass RTT (and bothering everyone here so much that I felt obligated to write the doc page about it), but I mentioned this issue in the VAT discussion and it was very well received. It led to the material plugin manager code that is coming soon. I love that PR, but again the credit should go to the maintainers who wrote the core of that change. I'm just happy that my little dive project resulted in some improvements to BJS that I think are relevant, but they only happened because the maintainers are so awesome. I'm pretty sure they just bang their heads on the wall whenever they see another one of my posts or PRs.
This is of course another difference between using a web 3D engine and something like Unreal or Unity: the technology lags a bit – and of course, the constraints of size, money, hardware, browsers, number of developers and architecture are huge. Comparing these projects is apples to oranges. And it's not an issue if you're doing something simple instead of a AAA game; the convenience of running on a web page, in my opinion, trumps the difficulties. But there's a middle ground where a smaller project could benefit from being web-based and yet things are much harder here.
One thing I'd very much like to have more of is plugins and assets. Finding code for the ocean was hard, for example. The old ocean post-process code was removed from BabylonJS for license issues, but it was pretty much the only realistic code around. Even so, the original shader only worked above the water. I ended up changing it considerably and making it work underwater too. (BTW, that new ocean for WebGPU looks awesome!) Anyway, it's all part of the job, but it took me a few extra days to make the ocean work properly, implement the underwater changes, etc., particularly because the original shader code is quite cryptic. Between fixing small issues and implementing the underwater part, I was almost writing it from scratch – and it still doesn't work properly if you are very close to the water line, which requires handling the ray marching better than the original implementation does. There's no reference implementation of a ray marching shader for BJS, for example, which would have been helpful.
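As a tiny illustration of what such a reference would cover: the core of a ray marcher is just sphere tracing against a signed distance function. Here it is in JS form (my own sketch, not from any BJS sample; the GLSL version is essentially the same loop):

```javascript
// Signed distance from point p to a sphere: negative inside, zero on the surface.
function sdfSphere(p, center, radius) {
  const dx = p[0] - center[0], dy = p[1] - center[1], dz = p[2] - center[2];
  return Math.hypot(dx, dy, dz) - radius;
}

// Sphere tracing: advance along the ray by the distance to the nearest
// surface, which is always a safe step. Returns the hit distance, or -1.
function rayMarch(origin, dir, sdf, maxSteps = 64, maxDist = 100, eps = 1e-4) {
  let t = 0;
  for (let i = 0; i < maxSteps; i++) {
    const p = [origin[0] + dir[0] * t,
               origin[1] + dir[1] * t,
               origin[2] + dir[2] * t];
    const d = sdf(p);
    if (d < eps) return t; // close enough to the surface: hit
    t += d;                // step forward by the SDF distance
    if (t > maxDist) break;
  }
  return -1; // miss
}
```

For example, marching from the origin down +z against a unit sphere centered at (0, 0, 5) hits at t = 4, the distance to the near surface.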
So here's one suggestion: I'd like to recommend that this project create a standard way to write plugins, perhaps with a template plugin repo that already handles all the nasty JS packaging and makes it easy to test with the PG, plus a listing of plugins somewhere on the main site. Perhaps the same for materials: the NME is nice, but there's no catalog of interesting materials.
I can make a good comparison because a friend of mine is building a project similar in complexity to mine, but using Unity. Besides the asset store helping a lot, another difference is that he relied heavily on the GUI to build his scene. I'm very comfortable with the IDE and have been writing CG code for years now, so I don't miss a GUI: the inspector is plenty for debugging and making minor real-time changes. The editor looks great, but I found it after I was already coding, and it was not clear how to use my custom code that handled all the performance gotchas with it. The inspector was very helpful, but I ended up doing some work in Blender. It's been ages since I last did any 3D modeling, and I never used Blender extensively even then, so any task that involved positioning assets or checking textures meant opening Blender and scratching my head to figure out how the hell to do something trivial. I'm sure that for a lot of people an integrated graphical editor is essential.
That "almost" I mentioned above is one important problem in BabylonJS that is getting fixed: standard and PBR materials are not easy to change, because their shaders are fixed. This means that if you need a simple change to a shader, you are out of luck. I couldn't add caustics to the basic materials, so I had to do a separate pass for them and a post-process to merge the result. Patchable shaders would have let me avoid the caustics multi-pass, and would have made the VAT implementation much easier. I'm so happy that this got picked up and will be released soon.
Another relevant issue was debugging shaders. Man, it's 2021 and it is still awful. I know the whole CPU/GPU/communication blah, and this isn't a BJS-specific problem, but boy do I miss a print() or a step debugger sometimes. Painting pixels to debug things is horrible. I may have missed some interesting tools here, but here's what I'd like: 1) a print()-like helper, perhaps one that saves data to a texture. I'd be happy to select a single pixel to get data out of. I don't think that's very hard to do; my approach would be to allocate a texture and just dump characters there. I know there's no string in GLSL, but I'd be perfectly happy to print just the basic GLSL types and get them back, in order, on the console. A print(vec4) would have been SO USEFUL, and I was too lazy to set up output textures to do it myself. 2) a step-by-step GLSL simulator. glsl-simulator was written and then abandoned, which is a shame, because it's pretty much what I'd like to have.
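The usual workaround for getting a value out of a shader is the fract-based packing trick: encode a float in [0, 1) into the four 8-bit channels of an output pixel, then decode it after readPixels. Here's the packing math in JS form (the GLSL side is the same arithmetic with fract and dot; the helper names are mine):

```javascript
// Pack a float in [0, 1) into four bytes (RGBA8). GLSL equivalent:
//   vec4 enc = fract(v * vec4(1., 255., 65025., 16581375.));
//   enc.xyz -= enc.yzw / 255.;
function packFloat(v) {
  const enc = [1, 255, 65025, 16581375].map(s => (v * s) % 1);
  for (let i = 0; i < 3; i++) enc[i] -= enc[i + 1] / 255;
  return enc.map(x => Math.round(x * 255)); // bytes, as read back from the texture
}

// Decode on the CPU after readPixels. GLSL equivalent of the weights:
//   dot(enc, vec4(1., 1./255., 1./65025., 1./16581375.))
function unpackFloat(rgba) {
  const scale = [1, 1 / 255, 1 / 65025, 1 / 16581375];
  return rgba.reduce((sum, byte, i) => sum + (byte / 255) * scale[i], 0);
}
```

With this, a single debug pixel gives you roughly 24 bits of precision per channel set; a vec4 "print" is just four such pixels. Still nowhere near a real print(), which is my point.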
Performance optimization was also painful, as I wrote before. My dev machine is pretty good, so 60fps on it could very well translate to 15fps on my phone, which is not too shabby either. I wrote the code in a way that made it easy to comment out parts and quickly see what was slowing the app down. It's painful. Little things were always causing issues (add thin instances on mobile, sudden drop to 30fps – why? GPU-bound? Or the CPU animation code? Let's sit down for yet another profiling session). I am looking forward to the performance profiler coming with 5.0.
Still, finding out what is slowing down the GPU is awful. I'm thankful that mobile debugging is such a breeze these days with remote debugging, because otherwise it'd be on par with debugging microcontrollers using a single LED. The GPU is still a black box.
SpectorJS is usually too low-level to help with scene optimization, so my go-to technique was really "comment/test/loop". Also, I was surprised that the GPU turned out to be the bottleneck in this application – something that happened on mobile, which is not my usual development target. I was expecting to perhaps hit a memory limit, but to be CPU-bound on pretty much any platform with a 3D chip. Is it me doing something silly? Am I pushing too many polygons, or is it the textures?
Most of the development time in this project was spent on things that were not trivial to do (like rendering caustics with a multi-pass) and on performance optimization. But a lot was also spent on small annoying things, from webpack issues to animating boids (had to write my own code for that, too). It's a fairly simple project, and when I began it I didn't expect to spend so much time optimizing. I'm not blaming BJS for this; JS and WebGL are much to blame. I wouldn't have had half the performance issues if I were developing a native application.
But I want to finish by saying that, all in all, my experience with Babylon was very smooth. I'd definitely use it for future projects. I'm particularly looking forward to WebGPU coming to production browsers (even though basic performance won't be better, from what I read) and to WebXR. Hopefully I'll get a chance to work on new projects using them.
That’s it. Thanks to those who read this long novel!