I’m a frontend dev starting my dream gamedev project with Babylon.js, and I’d really like to get my physics setup right from the start - but I’m confused about the best approach.
My goal:
Use a custom GLB mesh as terrain - no heightmaps or subdivisions, to keep map creation manageable.
Have working collisions so the player can walk on uneven terrain and objects can fall and collide naturally.
The vibe I’m aiming for is Baldur’s Gate 3 style (but low-poly and for the web).
The camera will be top-down, and I plan to stream map chunks from the server as the player reaches the map edges, so the world can expand seamlessly.
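The streaming part mostly comes down to bookkeeping: map the player's world position to chunk coordinates, compute which chunks should be resident, and diff that against what is already loaded. A minimal sketch in plain JavaScript (chunk size and load radius are made-up values, and the engine-agnostic logic would sit alongside whatever fetch/dispose code you write):

```javascript
// Map a world position to chunk coordinates and compute the set of
// chunks that should be resident for a given load radius (in chunks).
const CHUNK_SIZE = 64; // world units per chunk edge (hypothetical)

function chunkOf(x, z, chunkSize = CHUNK_SIZE) {
  return { cx: Math.floor(x / chunkSize), cz: Math.floor(z / chunkSize) };
}

function requiredChunks(playerX, playerZ, radius = 1, chunkSize = CHUNK_SIZE) {
  const { cx, cz } = chunkOf(playerX, playerZ, chunkSize);
  const keys = new Set();
  for (let dx = -radius; dx <= radius; dx++) {
    for (let dz = -radius; dz <= radius; dz++) {
      keys.add(`${cx + dx},${cz + dz}`);
    }
  }
  return keys;
}

// Diff against what is currently loaded to decide what to fetch/dispose.
function diffChunks(loaded, required) {
  return {
    toLoad: [...required].filter((k) => !loaded.has(k)),
    toDispose: [...loaded].filter((k) => !required.has(k)),
  };
}
```

Running this diff whenever the player crosses a chunk boundary (rather than every frame) keeps the overhead negligible.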
What I found:
I went through the forum and found some partial examples. The closest is this Playground - but I noticed it doesn’t use any physics plugins (like Havok or Ammo).
However, I keep seeing people recommend using those plugins for proper physics.
My question 1:
What limitations or drawbacks should I expect if I stick with the built-in collisions instead of using a physics plugin?
For terrain + falling objects + simple player movement, is the internal system enough, or will I hit performance or accuracy issues later?
Did I maybe miss a better example that shows a recommended way for GLB terrain with physics plugins?
My current pipeline:
Right now, I use Blender for level design, terrain, animations, and all models.
For the engine side, I use only Babylon.js code — I like having full control and understanding what’s under the hood.
I also tested different tools:
Babylon Editor — helpful but I’m not sure if it’s optimal for a project like this.
Unity to Babylon Exporter (free) — interesting, but the scene ends up as a binary blob, which I don’t like. Also, I don’t have budget for paid Unity plugins, and I prefer to write most of the game logic myself instead of relying on closed scene builds.
Question 2:
Are there any recommended tools, workflows, or plugins for this kind of project — where you want to keep level editing in Blender, but handle physics, collisions, and scene streaming in code with Babylon?
Maybe there’s a better way to organize assets or export workflows that I missed?
I appreciate any pointers - I know Babylon’s community is amazing at filling in the gaps where docs are still catching up.
Currently creating a map in Blender made of default shapes (boxes, spheres, cylinders), then exporting as .glb, then importing into Babylon with Havok’s Mesh Shape. Before exporting, I “Apply All Transforms” to all meshes in Blender (so that the local translations/rotations are zero and scales are 1).
So far, this seems to be working well. Have you seen any issues?
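For reference, the import side of that pipeline with Physics V2 looks roughly like the sketch below. It is not a drop-in implementation: it assumes the HavokPlugin is already enabled on the scene, and the URL/file names are placeholders.

```javascript
// Sketch: load a GLB map chunk and give every mesh a static Havok
// mesh-shape collider. Assumes scene.enablePhysics(gravity, havokPlugin)
// has already run; paths and names are hypothetical.
import { SceneLoader, PhysicsAggregate, PhysicsShapeType } from "@babylonjs/core";
import "@babylonjs/loaders/glTF";

async function loadChunkWithPhysics(scene, url, file) {
  const result = await SceneLoader.ImportMeshAsync("", url, file, scene);
  for (const mesh of result.meshes) {
    if (mesh.getTotalVertices() === 0) continue; // skip __root__ and empty nodes
    // mass 0 => static body; MESH uses the exact triangle data of the mesh
    new PhysicsAggregate(mesh, PhysicsShapeType.MESH, { mass: 0 }, scene);
  }
  return result;
}
```

Keeping a reference to each PhysicsAggregate makes it easy to dispose the physics bodies together with the meshes when a chunk is unloaded.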
For 1, moveWithCollisions is old and pretty limited in both performance and features. Physics V2 will always be a better choice. With tiles that get created/disposed on the fly, I don’t think there will be a big difference between mesh and heightmap physics shapes. You’ll find a PG on this page for creating terrain with mesh and heightmap shapes: Babylon.js docs
And you can mix it with the character controller : Babylon.js docs
For 2, I think @PatrickRyan will have much more valuable answers than me
Thanks so much for the links and explanation - really appreciate your time!
Just to clarify my approach:
I don’t want to manually create or maintain a separate heightmap texture for terrain - that’s an extra step that takes more time and is harder to keep in sync.
Instead, my idea is to fully model the terrain in Blender, export it as a GLB mesh, and then automatically generate a heightfield collider from that mesh data.
This way, the mesh is the single source of truth - both for visuals and physics - and the heightfield shape can be generated once during development (or even at runtime or server-side) with the exact same detail.
So basically:
No manually painted heightmap.
One mesh → automatic heightfield → accurate physics.
Ideally automated as part of my pipeline.
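The mesh-to-heightfield step itself is mostly sampling: project each grid cell's center onto the terrain triangles and interpolate the height. Here is a minimal, engine-agnostic sketch over raw vertex data (positions is a flat [x, y, z, ...] array, as you would get from mesh.getVerticesData; all names and the brute-force O(triangles × cells) loop are illustrative and meant for offline/dev use, not a hot path):

```javascript
// Build a (w x h) grid of heights from a triangle soup by testing each
// grid cell center against every triangle's XZ projection.
function meshToHeightfield(positions, indices, minX, minZ, sizeX, sizeZ, w, h) {
  const heights = new Float32Array(w * h).fill(-Infinity);
  for (let t = 0; t < indices.length; t += 3) {
    const [a, b, c] = [indices[t], indices[t + 1], indices[t + 2]];
    const ax = positions[a * 3], ay = positions[a * 3 + 1], az = positions[a * 3 + 2];
    const bx = positions[b * 3], by = positions[b * 3 + 1], bz = positions[b * 3 + 2];
    const cx = positions[c * 3], cy = positions[c * 3 + 1], cz = positions[c * 3 + 2];
    for (let j = 0; j < h; j++) {
      for (let i = 0; i < w; i++) {
        const px = minX + (i / (w - 1)) * sizeX;
        const pz = minZ + (j / (h - 1)) * sizeZ;
        // Barycentric coordinates of (px, pz) in the triangle's XZ projection.
        const d = (bz - cz) * (ax - cx) + (cx - bx) * (az - cz);
        if (Math.abs(d) < 1e-9) continue; // triangle is degenerate seen from above
        const u = ((bz - cz) * (px - cx) + (cx - bx) * (pz - cz)) / d;
        const v = ((cz - az) * (px - cx) + (ax - cx) * (pz - cz)) / d;
        const wgt = 1 - u - v;
        if (u < 0 || v < 0 || wgt < 0) continue; // sample is outside this triangle
        const y = u * ay + v * by + wgt * cy;
        // Keep the highest surface where triangles overlap in XZ.
        heights[j * w + i] = Math.max(heights[j * w + i], y);
      }
    }
  }
  return heights;
}
```

Note that a heightfield can only represent one height per (x, z) cell, so overhangs, caves, or bridges in the modeled terrain would need separate mesh colliders.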
Given this, do you think using Havok/Ammo to create a heightfield collider from my custom GLB mesh is the best way?
Or is there a more practical approach for this type of workflow?
Again, thank you for helping me think this through!
@galionix2, for your second question, I would point to the Preparing Assets for Babylon.js section of our docs which has a lot of good information for planning how you are building your assets. In terms of managing the pipeline between Blender and Babylon, generating your ground meshes in Blender and then adding physics in Babylon is a great workflow. Leave each tool to do what it is best at and then create a bridge between them for the most control over your scene.
In terms of managing everything, I rely heavily on the AssetsManager to help load all my assets and easily move them in and out of the scene. Having one common callback once all assets - meshes, textures, shaders, particle systems, node geometry, etc. - are done loading is super helpful.
One thing to consider when making your tiles is to be as modular as possible with the meshes. This way you can reuse common meshes instead of having to load them in multiple times. For example, if you have a crate in one of your tiles, and other tiles also have that crate, consider not building the individual tiles each with the crate as you are duplicating the mesh and texture set to load multiple times. Instead, use a placeholder for the crate position like an empty transform with the correct position/rotation/scale that you can instantiate your crate mesh on. Then you only load the crate and texture set once but you can use instances of the crate across your scene.
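In Babylon terms, that placeholder pattern is roughly: load the crate once, then walk each tile's transform nodes and drop an instance onto every placeholder. A sketch, where the "crate_slot" naming convention is made up for illustration:

```javascript
// Sketch: reuse one crate mesh across tiles by instancing it onto
// placeholder transforms exported from Blender ("crate_slot_*" is a
// hypothetical naming convention).
function placeCrates(tileRoot, crateMesh) {
  for (const node of tileRoot.getChildTransformNodes()) {
    if (!node.name.startsWith("crate_slot")) continue;
    const instance = crateMesh.createInstance(node.name + "_crate");
    instance.parent = node; // inherit the placeholder's TRS from Blender
  }
}
```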
Additionally, you can also build out the set dressing on your tiles as a node geometry mesh. Here are a couple of examples of using node geometry to instantiate mesh parts to create a scene. In this case, we create a new single mesh containing all triangles and multi-materials instead of managing instances:
In terms of adding some variety to your scene, you can leverage custom shaders with node materials to add some variations to iterations of meshes you repeat in the scene.
Mostly, planning how you can reuse your assets and add flexibility to your scene goes a long way toward maximizing the time you spend creating assets. You can also use placeholder null transforms, or simple collider meshes positioned in Blender, to simplify colliders over things like a pile of crates while still leveraging reuse of meshes and textures.
If you have more specific questions about workflow, I am happy to answer but it is a very broad topic with so many solutions.
Thank you for this awesome idea, @PatrickRyan ! Would you know if it’s possible in Blender to export a glTF scene including placeholder transforms? When I export to glTF via Blender, it removes everything besides meshes. Also, the resulting meshes are “baked” into vertex data after exporting, and there seems to be no way to query their world rotation/position.
@regna, you can absolutely export empty transforms into a glTF through Blender. I would suggest enabling Selected Objects under the Include section of the glTF export parameters and make sure you select all of your meshes and transforms. This will ensure that you get all of your nodes saved to glTF.
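Once the empties survive export, they arrive in Babylon as TransformNodes whose TRS you can query after import. A sketch (the node name is whatever you named the empty in Blender, and the paths are placeholders):

```javascript
// Sketch: read back the world-space position of an empty exported
// from Blender. Names and paths are hypothetical.
const result = await SceneLoader.ImportMeshAsync("", "/maps/", "tile.glb", scene);
const slot = scene.getTransformNodeByName("Empty");
slot.computeWorldMatrix(true); // make sure the world matrix is up to date
console.log(slot.getAbsolutePosition()); // world-space position of the empty
```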
You can see I placed a couple of empty transforms in Blender and they are translating to a glTF I have loaded into the sandbox. You see the names of both empties in the scene explorer and I have one of them selected with the move gizmo enabled so you can see the position of the transform in scene.
The export should also keep the TRS data for all nodes on export so long as you aren’t baking them in Blender. For example, I added a few extra cubes with translation and rotation applied to the meshes and you can see that the TRS data remains in the glTF loaded into the sandbox:
There is definitely transformational data saved in the mesh as you can see in the inspector and on the gizmo. If the translation and rotation data were baked, the gizmo would be sitting at world zero, axis-aligned.
Again, check your export parameters to make sure you aren’t baking any of this data. These are my export parameters:
I didn’t have any modifiers in the scene, so I didn’t need to apply them on export. This might be one place to look: check whether applying your modifiers on export is freezing the transformational data and zeroing everything out.
@regna, I just reread this thread and you mentioned above that you are running an “Apply All Transforms” to your meshes in Blender. This is why you have your meshes baked to the scene and can’t query their position. You won’t want to bake your transforms if you want to query their position or rotation. I could see where you may want to apply transforms on some of your meshes, but leave others unapplied so that you can get the transform’s parameters for specific meshes. Anything static may not need to retain transform data and in that case, applying it makes a lot of sense.