ComputeNormals doesn't seem correct

The mesh using normals from VertexData.ComputeNormals looks weird. Is anything wrong with it?


From my point of view it looks nice (probably because I don't know how it should look).
Could you kindly provide a reference for the desired look?

I think the issue here is with the quality of the triangles. The normal of a vertex is computed from the triangles that contain that vertex, and having triangles that are too “stretched” leads to bad quality normals:

The same mesh shape can be achieved in a variety of ways combining vertices, edges and faces, and those different ways are the possible “topologies”. Some topologies are better than others, like this example in Blender:

Why Do We Need Topology in 3D Modeling


When triangular facets share vertices by having the same index, the normal for the vertex is the average of the facet normals; see Vertex Normals | Babylon.js Documentation
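The averaging described above can be sketched in plain JavaScript (a hypothetical illustration of the idea behind VertexData.ComputeNormals, not Babylon's actual source): accumulate each facet's cross-product normal into its three vertices, then normalize.

```javascript
// Sketch of averaged vertex normals (illustrative, not Babylon's
// real implementation): positions and indices are flat arrays, as
// in Babylon vertex data.
function computeAveragedNormals(positions, indices) {
  const normals = new Array(positions.length).fill(0);
  for (let i = 0; i < indices.length; i += 3) {
    const a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
    // Edge vectors of the facet
    const e1 = [positions[b] - positions[a], positions[b + 1] - positions[a + 1], positions[b + 2] - positions[a + 2]];
    const e2 = [positions[c] - positions[a], positions[c + 1] - positions[a + 1], positions[c + 2] - positions[a + 2]];
    // Facet normal = cross product of the two edges
    const n = [
      e1[1] * e2[2] - e1[2] * e2[1],
      e1[2] * e2[0] - e1[0] * e2[2],
      e1[0] * e2[1] - e1[1] * e2[0],
    ];
    // Accumulate the facet normal into each shared vertex
    for (const v of [a, b, c]) {
      normals[v] += n[0];
      normals[v + 1] += n[1];
      normals[v + 2] += n[2];
    }
  }
  // Normalize each accumulated vertex normal
  for (let v = 0; v < normals.length; v += 3) {
    const len = Math.hypot(normals[v], normals[v + 1], normals[v + 2]) || 1;
    normals[v] /= len;
    normals[v + 1] /= len;
    normals[v + 2] /= len;
  }
  return normals;
}
```

This also shows why "stretched" triangles hurt: an unnormalized cross product is proportional to the facet's area, so a few badly shaped facets can dominate the average at a shared vertex.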

Converting to a flat shaded mesh gives a different look to the mesh, which may be what you need.
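The flat-shading idea can be sketched in plain JavaScript (a hypothetical helper, not Babylon's actual convertToFlatShadedMesh): give every facet its own copies of its three vertices, so no normal can be averaged across facets.

```javascript
// Hypothetical flat-shading sketch (not Babylon's real
// convertToFlatShadedMesh): duplicate every facet corner as a
// brand-new, unshared vertex.
function toFlatShaded(positions, indices) {
  const flatPositions = [];
  const flatIndices = [];
  for (let i = 0; i < indices.length; i++) {
    const p = indices[i] * 3;
    // Copy this facet corner; it is no longer shared with any
    // other facet, so it keeps its own facet's normal.
    flatPositions.push(positions[p], positions[p + 1], positions[p + 2]);
    flatIndices.push(i);
  }
  return { positions: flatPositions, indices: flatIndices };
}
```

Note the cost mentioned later in the thread: the vertex count grows to three per facet.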


Maybe this old explanation can help you understand:


BTW, it’s impressive that 6-year-old PG examples from the former forum still work with the current version of BJS.
That's some really effective backward compatibility! :slightly_smiling_face:


Hi all,

To make it clearer, I changed the light direction to zero to compare the looks.

This is what I need:

And this doesn't look good:

The reason I have this issue is that I'd like to merge all my meshes into an SPS, and the SPS must be provided with the normals to build the mesh (if I remove line 29 in the PG below, errors happen; is it a bug?).

@jerome @labris


Why SPS? SPS is for repeating one mesh many times, not for one mesh.

Not a bug; the SPS calculates its normals from the supplied mesh normals.

My scene has huge meshes, so I'm trying to use an SPS to improve performance. When I add all the meshes to the SPS, it looks very different from the original; that's why I think the normals cause this issue.

Yes man!! :smiley:

If the problem is that the meshes are too big, I don’t think adding them all to an SPS will help because that will create an even bigger mesh…

It might help, though, to go the opposite direction and break each mesh into smaller meshes; that way the parts that are offscreen aren't rendered as much.

Check this out for optimizing your scene: Optimizing Your Scene | Babylon.js Documentation


The documentation on Optimizing Your Scene has a section on how to optimize a “Scene with large number of meshes”, but it would be good to also have a section on how to optimize a “Scene with large meshes”. IDK enough about it to volunteer to add it myself though… :slightly_smiling_face:


One needs to define more exactly what a ‘large mesh’ actually is.
Is it one inseparable mesh, a group of meshes, or just something too big for the camera to fit in view?
There could be different approaches for different kinds of ‘large meshes’.
But in all cases the first question is “What should the user experience at any given moment, and how do we achieve this with minimal losses?” :slight_smile:

Hi all,

Sorry, I didn't describe it clearly: I actually have a huge number of meshes, and I'm trying an SPS to reduce the draw calls.

I’d like to come back to the normals:
In this PG, I removed the normals of the vertexData at line 29, but it looks fine.
So I wonder: where does BABYLON compute the normals in this case?
A mesh without normals should be black, right? @Deltakosh @jerome


If no normals data are provided, they are computed by default in the shader for you, like this:

Hi @sebavan ,
You hit my original concern:
The normals computed in the shader (PG4) and by VertexData.ComputeNormals (PG5) look so different.
That's why I suspected the normals were not correct.


@Freeman when you call

BABYLON.VertexData.ComputeNormals(positions, indices, normals);

and set the mesh normals using

vertexData.normals = normals;

the function calculates the normals based on the algorithm described in earlier posts, and the shader uses these values to render the mesh.

Should you bypass these steps, the shader uses its own (different) algorithm to calculate the normals.

The ComputeNormals function has access to all facet triangles that share a particular vertex index, and so it calculates the average of the shared facet normals and applies this to the vertex.

A shader can only access one facet triangle at a time, so a shared vertex normal will end up being that of the last facet accessed, and these normals used by the renderer are not stored by the mesh on the CPU side.
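The "last facet wins" effect described above can be illustrated in plain JavaScript (an illustration of the idea only, not actual shader code): if each facet simply writes its normal to its vertices, a shared vertex keeps only the last facet's normal instead of an average.

```javascript
// Plain-JS illustration (not shader code) of overwriting without
// averaging: each facet assigns its normal to its three vertices,
// so a shared vertex ends up with whichever facet came last.
function lastFacetNormals(faceNormals, indices) {
  const normals = {};
  for (let f = 0; f < indices.length / 3; f++) {
    for (let k = 0; k < 3; k++) {
      // Overwrite: no accumulation, no averaging
      normals[indices[f * 3 + k]] = faceNormals[f];
    }
  }
  return normals;
}
```

Compare this with the averaged-normal sketch earlier in the thread: for a planar mesh both give the same result, which is why the mismatch only shows up when adjacent facets have different normals.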

Now, because of the way the SPS is built, the algorithm used expects to find the normals stored in the passed mesh.

It finds non-existent normals at the step where it needs them, and so it produces an error long before it gets to calling the shader.

One way to solve your problem is, as I said, to use convertToFlatShadedMesh. However, as this increases the number of indices and maybe the number of vertices, there is another way, which is to correct your mesh data - coming in the next post.



When you look at the data in your positions array, some z values are 12.5 and some 12.3. The difference, though minor, is enough to produce different average normals at different vertices, and the difference between these averaged normals is enough to produce the different rendering of different facet triangles.

By making all the z values 12.5, your mesh now lies completely in the XY plane, and so all normals for all facets are along the Z axis, and so are their averages.
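That fix can be applied with a one-off snippet (hypothetical helper for this specific mesh, not a Babylon API): snap every z coordinate in the flat positions array to 12.5 so the mesh is exactly planar.

```javascript
// Hypothetical cleanup for this specific case: force every z
// coordinate (every 3rd value in the flat positions array) to the
// same value so all facets lie in the same XY plane.
function snapZ(positions, z) {
  const fixed = positions.slice(); // don't mutate the original
  for (let i = 2; i < fixed.length; i += 3) {
    fixed[i] = z;
  }
  return fixed;
}
```

After this, ComputeNormals gives identical averaged normals at every vertex, so the shading artifacts disappear.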
