Generating a heightmap from an array of meshes

I’m currently looking into different ways to support physics at a large scale. One of the best options I’ve tried so far involves using a heightmap in Ammo.js for the terrain / ground.

Most people, including myself, might land on something like this:

This works, but is also tedious to do many times during development.

I’ve written up some code that demos my first attempt at programmatically generating the heightmap. This approach uses raycasting in Babylon.js, which seems to be incredibly slow but still works. Right now it takes ~3-4 minutes to generate a heightmap with 100 x 100 subdivisions using this technique. I’m looking for faster solutions, but in the meantime here’s my code:
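In outline, the approach is roughly this (a sketch, not the actual playground code; the helper names and the `ground`/`maxY` parameters are illustrative):

```javascript
// Pure helper: world-space (x, z) sample points for an N x N heightmap
// covering a square region of side `size` centered on the origin.
function gridSamplePoints(subdivisions, size) {
  const points = [];
  const step = size / (subdivisions - 1);
  for (let row = 0; row < subdivisions; row++) {
    for (let col = 0; col < subdivisions; col++) {
      points.push({ x: -size / 2 + col * step, z: -size / 2 + row * step });
    }
  }
  return points;
}

// One downward ray per grid cell; this is the slow part (minutes at 100 x 100).
// Assumes Babylon.js is loaded as BABYLON, `ground` is the terrain mesh, and
// `maxY` is a height known to be above the whole terrain.
function buildHeightmap(scene, ground, subdivisions, size, maxY) {
  const heights = new Float32Array(subdivisions * subdivisions);
  gridSamplePoints(subdivisions, size).forEach((p, i) => {
    const ray = new BABYLON.Ray(
      new BABYLON.Vector3(p.x, maxY, p.z), // start above the terrain
      new BABYLON.Vector3(0, -1, 0),       // cast straight down
      maxY * 2
    );
    const hit = scene.pickWithRay(ray, (m) => m === ground);
    heights[i] = hit && hit.hit ? hit.pickedPoint.y : 0;
  });
  return heights;
}
```

The doubly nested loop is where the cost comes from: one full raycast per grid cell.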

Would love feedback from anyone with suggestions.


cc @Cedric

Do you have to generate the heightmap from the meshes? I mean, can you generate the mesh from a heightmap image and stream it as well?
If you have to stick with meshes, you can read the geometry and keep the Y value from the vertex positions instead of doing a raycast. This will work if you don’t have tunnels.
Also, why not use the mesh directly as a trimesh impostor?


Do you have to generate the heightmap from the meshes? I mean, can you generate the mesh from a heightmap image and stream it as well?

Yes, I need to generate a heightmap from the original model’s mesh.

If you have to stick with meshes, you can read the geometry and keep the Y value from the vertices position, instead of doing a raycast

The reason I cannot use the y values from the geometry directly is because the vertices are not regularly spaced and therefore do not form a regular grid. That’s why I need to use a double nested for loop to traverse the grid space and sample a y-height at each location within the grid.

Also, why not using the mesh directly as a trimesh impostor?

I cannot do this because if I try to add the mesh directly to Ammo.js as either a convex hull impostor or a mesh impostor, it gives me an OOM error well before the 200k+ calls to

```javascript
btTriangleMesh.addTriangle(triPoints[0], triPoints[1], triPoints[2]);
```

or

```javascript
btConvexHullShape.addPoint(triPoints[0], true);
btConvexHullShape.addPoint(triPoints[1], true);
btConvexHullShape.addPoint(triPoints[2], true);
```

ever finish.

It’s really annoying because at the same time I can load in a heightmap with 1480 x 1480 subdivisions and Ammo.js handles that just fine, because I’m passing in the data via Ammo._malloc. This is what’s driving me to find a way to turn any mesh or meshes into a heightmap.
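That Ammo._malloc path can be sketched like this (hedged: `createHeightfieldShape` and `copyHeightsToHeap` are my names, and the constructor arguments follow Bullet’s `btHeightfieldTerrainShape`; verify against your Ammo build):

```javascript
// Pure part: copy float samples into a HEAPF32-style view at a byte pointer.
function copyHeightsToHeap(heapF32, bytePtr, heights) {
  const base = bytePtr >> 2; // byte offset -> 4-byte float index
  for (let i = 0; i < heights.length; i++) {
    heapF32[base + i] = heights[i];
  }
  return base;
}

// Ammo-specific part: allocate once, copy once, build the shape.
function createHeightfieldShape(Ammo, heights, subdivisions, minHeight, maxHeight) {
  const ptr = Ammo._malloc(4 * heights.length);
  copyHeightsToHeap(Ammo.HEAPF32, ptr, heights);
  return new Ammo.btHeightfieldTerrainShape(
    subdivisions, subdivisions, // sample counts along width / depth
    ptr,                        // pointer to the height data in the Ammo heap
    1,                          // heightScale (ignored for float data)
    minHeight, maxHeight,
    1,                          // upAxis = Y
    "PHY_FLOAT",                // PHY_ScalarType: float samples
    false                       // flipQuadEdges
  );
}
```

The whole buffer crosses the JS/wasm boundary in one copy, which is why this shape scales where the per-triangle API does not.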

It might be a little tricky, but you could render the mesh to a float render target with the Y value as the output color, then read it back to the CPU and create the impostor.
It’s basically a raycast… done on the GPU 🙂

rendering the mesh to a float rendertarget

Wut?? Can you elaborate a little more? If this really can be done on the GPU this would solve the problem!

Btw, does anyone at Babylon know enough C++ to expose the btTriangleIndexVertexArray constructor?

This would give an amazing performance boost when loading a mesh’s vertices into Ammo.js, since they would no longer need to be passed individually and could instead be passed all at once by reference to a buffer.
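Hypothetically, the JS side could then look something like this (nothing here exists in the current ammo.js build; the constructor signature mirrors Bullet’s C++ btTriangleIndexVertexArray, and the helper names are mine):

```javascript
// Pure part: the layout Bullet expects for indexed triangle data.
function triMeshLayout(positions, indices) {
  return {
    numTriangles: indices.length / 3,
    triangleIndexStride: 3 * 4, // 3 int32 indices per triangle, in bytes
    numVertices: positions.length / 3,
    vertexStride: 3 * 4,        // 3 float32 components per vertex, in bytes
  };
}

function createTriangleMeshShape(Ammo, positions, indices) {
  // Copy both buffers into the Ammo heap once, instead of 200k+ addTriangle calls.
  const vertPtr = Ammo._malloc(4 * positions.length);
  Ammo.HEAPF32.set(positions, vertPtr >> 2);
  const idxPtr = Ammo._malloc(4 * indices.length);
  Ammo.HEAP32.set(indices, idxPtr >> 2);

  const l = triMeshLayout(positions, indices);
  const meshInterface = new Ammo.btTriangleIndexVertexArray(
    l.numTriangles, idxPtr, l.triangleIndexStride,
    l.numVertices, vertPtr, l.vertexStride
  );
  return new Ammo.btBvhTriangleMeshShape(meshInterface, /* useQuantizedAabbCompression */ true);
}
```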

It’s more or less what’s being done:

Vertices need to be merged/converted, but creation is a one-time call.

```typescript
if (this.onCreateCustomMeshImpostor) {
    returnValue = this.onCreateCustomMeshImpostor(impostor);
} else {
    const tetraMesh = new this.bjsAMMO.btTriangleMesh();
    const triangeCount = this._addMeshVerts(tetraMesh, object, object);
    if (triangeCount == 0) {
        returnValue = new this.bjsAMMO.btCompoundShape();
    } else {
        returnValue = new this.bjsAMMO.btBvhTriangleMeshShape(tetraMesh);
    }
}
```
The line:
const triangeCount = this._addMeshVerts(tetraMesh, object, object);
is going to reach this point in the code:

```typescript
for (let i = 0; i < faceCount; i++) {
    const triPoints = [];
    for (let point = 0; point < 3; point++) {
        let v = new Vector3(
            vertexPositions[indices[i * 3 + point] * 3 + 0],
            vertexPositions[indices[i * 3 + point] * 3 + 1],
            vertexPositions[indices[i * 3 + point] * 3 + 2]
        );

        v = Vector3.TransformCoordinates(v, localMatrix);

        let vec: any;
        if (point == 0) {
            vec = this._tmpAmmoVectorA;
        } else if (point == 1) {
            vec = this._tmpAmmoVectorB;
        } else {
            vec = this._tmpAmmoVectorC;
        }
        vec.setValue(v.x, v.y, v.z);
        triPoints.push(vec);
    }

    btTriangleMesh.addTriangle(triPoints[0], triPoints[1], triPoints[2]);
}
```

This is the bottleneck and also the source of the OOM error.

It might be possible to improve things a bit.

Pre-allocate vertices/indices and use the same code for adding faces. No reallocation, so maybe no OOM.
Thing is, it’s not available in ammo.js/ammo.idl at main · kripken/ammo.js · GitHub.
So you have to do a local build of ammo, test it, and if it’s fine, open a PR on the ammo repo, push the build to Babylon.js, then open a PR in Babylon.
It’s not that hard. I can help you.

Couldn’t you just place the camera above the meshes, then grab the depth texture?


It’s not that hard. I can help you.

Yes, please - I would love to do this and get this moving forward. I started by reading about IDLs yesterday.

I’m not sure I understand how pre-allocating the memory for the number of verts would fix the OOM error, but I’m willing to try. In order to expose something from the IDL, all we have to do is provide the correct interface?
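If so, a hypothetical ammo.idl addition might look like the following (unverified; the signature mirrors Bullet’s C++ btTriangleIndexVertexArray constructor, and VoidPtr is how ammo.idl already passes raw buffers to btHeightfieldTerrainShape, so check both before building):

```webidl
interface btTriangleIndexVertexArray {
  void btTriangleIndexVertexArray(long numTriangles, VoidPtr triangleIndexBase,
                                  long triangleIndexStride,
                                  long numVertices, VoidPtr vertexBase,
                                  long vertexStride);
};
btTriangleIndexVertexArray implements btStridingMeshInterface;
```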

Btw, check this issue out:

Someone is asking why we can’t pass the geometry directly into Ammo.js, and this appears to be because there’s no method that can take a buffer of vertices, or rather a pointer to one.

So you’re saying I can do this:

```javascript
camera.position.y = max_height_in_scene
camera.position.x = center_of_scene_x
camera.position.z = center_of_scene_z

var renderer = scene.enableDepthRenderer();
var mat = new BABYLON.StandardMaterial("mat01", scene);
mat.emissiveTexture = renderer.getDepthMap();
```

And simply read all the resulting depth values from the material, mat?

I mean, you will have to convert the values afterwards, but yeah, why not?

Basically, make sure the depth values get converted to the range you need and it should work.

Not sure what you are using the heightmap for (I did not read that much into it), but this is how I would generate a height buffer from a bunch of meshes quickly.

I added the quick conversion for you just now on an updated PG; I think this is correct, but I might be wrong.
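If the camera is orthographic and pointing straight down, that conversion is linear. A sketch of what it might look like (assuming the depth map stores values in [0, 1] between the camera’s minZ and maxZ, which is worth verifying for your setup):

```javascript
// Depth sample -> world-space height, for a top-down orthographic camera.
function depthToHeight(depth, cameraY, minZ, maxZ) {
  // Distance from the camera along -Y, then back to a world-space Y.
  const distance = minZ + depth * (maxZ - minZ);
  return cameraY - distance;
}
```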


Okay, gonna try this out.

The reason I’m doing this is so that I can generate a heightmap for use in Ammo.js. In its present state, Ammo.js can’t be used for physics in large scenes, with the exception of the heightmap shape, which is the only shape that can load in vertices all at once from a buffer.

Also, I’m not sure if you guys are aware, but Unity seems to be the only tool that can readily create a heightmap from a scene. Even Blender requires some involved setup (there are YouTube tutorials for it). If this were a first-class feature in Babylon.js, that would be huge!


Okay, newb question: is this how I actually read the depth values of the height buffer?

```javascript
mat.emissiveTexture.readPixels().then((data) => {
  window.tData = data;
  for (let i = 0; i < data.length; i++) {
    console.log('height at pixel ', i, ' : ', data[i]);
  }
});
```

The z values are divided by w; in other terms, z values are not in linear space, so you’ll have to convert them back. Transform x, y, z, w by the inverse of the projection matrix, if my math is correct. x, y are clip space [-1…1], z is depth, w is 1.
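The transform described above, written out without any library dependency (Babylon stores matrices as 16 floats in row-major order and multiplies row vectors on the left; `invProj` would be the inverse of the camera’s projection matrix):

```javascript
// Transform an NDC point [x, y, depth] by a 4x4 matrix and divide by w.
function unproject(ndc, invProj /* 16 floats, row-major */) {
  const [x, y, z] = ndc;
  const m = invProj;
  const ox = x * m[0] + y * m[4] + z * m[8]  + m[12];
  const oy = x * m[1] + y * m[5] + z * m[9]  + m[13];
  const oz = x * m[2] + y * m[6] + z * m[10] + m[14];
  const w  = x * m[3] + y * m[7] + z * m[11] + m[15];
  return [ox / w, oy / w, oz / w]; // the perspective divide
}
```

With Babylon loaded, `BABYLON.Vector3.TransformCoordinates(v, invProj)` does the same multiply-and-divide in one call.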

Right, it’s a grid, and I have to account for that to calculate what the actual z index would be

Uh, give me a second to digest your edit.

I’m so new to using the texture class that I wasn’t sure if what’s returned by .readPixels() was what I’m after or not

I forgot: use an orthographic camera. And yes, readPixels is the way to go.


When you mention transforming the x,y,z,w values by the inverse of the projection matrix, you are referring to camera._projectionMatrix, right?

Also I set the camera mode like so

```javascript
mat.emissiveTexture.readPixels().then((data) => window.tData = data)

tData.length // → 1048576
tData.length ** 0.5 // → 1024
```

So my clip space is a 1024 x 1024 plane centered on the camera, looking straight down?
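If those assumptions hold (orthographic camera, square render target), mapping a flat pixel index from that buffer back to a world-space (x, z) might look like this (the pixel-center convention and parameter names are my assumptions):

```javascript
// Flat pixel index -> world-space (x, z) under a top-down orthographic
// camera centered at (cx, cz), whose view rectangle is orthoWidth wide
// and orthoHeight deep.
function pixelToWorldXZ(i, size, cx, cz, orthoWidth, orthoHeight) {
  const col = i % size;
  const row = Math.floor(i / size);
  // Map pixel centers onto the ortho view rectangle.
  const x = cx + ((col + 0.5) / size - 0.5) * orthoWidth;
  const z = cz + ((row + 0.5) / size - 0.5) * orthoHeight;
  return { x, z };
}
```

Depending on the row order readPixels returns, the z axis may come out flipped and need negating.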