Very small gaps between edges

Hi,

I have some trouble with a model imported from a glTF file. Originally it is one simple model with 3 different materials assigned to specific faces. After import into Babylon it is decomposed into 3 different meshes, which results in very small gaps between them where the background color of the scene is visible. At first I thought it was a problem with the normals, the UV maps, or the textures assigned to the materials, because it looks like a texture seam, but the only way I found to fix this is to extrude the faces of the parts of the model that share the same material so they overlap each other and hide those gaps. Is there a way to keep the model as one single mesh after import, or is it a glTF-specific limitation? Merging the meshes is not a solution because the gaps are already produced.

It is not specific to glTF, but in general different materials might result in different draw calls. This should nevertheless not create any gaps between them. Could you share a repro we could have a look at?

I was afraid you’d say that :slight_smile:

https://playground.babylonjs.com/#LF4ZNT

I hope there are no problems loading the external asset.

When you look around, you will see very small but noticeable gaps through which the background shines. Originally the whole thing was a single mesh with different materials.

This means the gaps are probably in the source model as well, and the way you sliced it is likely the issue.

@PatrickRyan can help with that :slight_smile: but what tools did you use?

@eltitano, both glTF and Babylon have a limitation of one material per mesh. Even though DCC tools allow you to assign materials per face within a mesh, this is a very resource-heavy requirement for WebGL, and for mobile rendering in particular it falls outside the target performance. So the best practice is to use only one material per mesh and split up your meshes before exporting to make sure they will render the way you want. In some cases, like yours, you may end up with some rendering errors if you are trying to snap coplanar meshes side by side so they feel like one continuous mesh. In that case there are a couple of other things you can do.

One is to ask yourself why you are splitting materials. Is it just to change color/roughness/metallic, which could be handled with textures, or is it to change blending modes (one is glass and the other opaque)? You could also have a limitation on texture size, where to stay within the size limit on a large asset you need to break the textures up into several for one large mesh to retain your target texel density.

The only place where you absolutely need multiple materials is to change the blending mode, which is cheaper and more accurate than making the whole mesh alpha blended. The rest can be solved with a custom node material. You can pass multiple texture sets to one mesh and use parameters like UV position or vertex color to decide which texture you sample. It sounds like this would be the approach for you in this case. The mesh remains one contiguous mesh and you are able to apply different material parameters based on the attributes of each vertex.
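To make that a bit more concrete, here is a minimal sketch of such a node material built in code: two base color textures blended on one contiguous mesh, with the red channel of the vertex color deciding which one shows. The texture paths, block names, and the choice of vertex color as the mask are illustrative assumptions, not the exact setup for this asset, and it assumes an existing `scene` and a `mesh` that carries vertex colors.

```typescript
const nodeMat = new BABYLON.NodeMaterial("maskedBlend", scene);

// Vertex stage: standard position * worldViewProjection transform.
const position = new BABYLON.InputBlock("position");
position.setAsAttribute("position");
const wvp = new BABYLON.InputBlock("worldViewProjection");
wvp.setAsSystemValue(BABYLON.NodeMaterialSystemValues.WorldViewProjection);
const transform = new BABYLON.TransformBlock("transform");
position.output.connectTo(transform.vector);
wvp.output.connectTo(transform.transform);
const vertexOutput = new BABYLON.VertexOutputBlock("vertexOutput");
transform.output.connectTo(vertexOutput.vector);

// Fragment stage: sample two base color textures and lerp between them
// using the red channel of the vertex color as a mask.
const uv = new BABYLON.InputBlock("uv");
uv.setAsAttribute("uv");
const vertexColor = new BABYLON.InputBlock("color");
vertexColor.setAsAttribute("color");

const texA = new BABYLON.TextureBlock("texA");
texA.texture = new BABYLON.Texture("textures/materialA_baseColor.png", scene); // placeholder path
uv.output.connectTo(texA.uv);
const texB = new BABYLON.TextureBlock("texB");
texB.texture = new BABYLON.Texture("textures/materialB_baseColor.png", scene); // placeholder path
uv.output.connectTo(texB.uv);

const splitColor = new BABYLON.ColorSplitterBlock("splitColor");
vertexColor.output.connectTo(splitColor.rgba);
const mix = new BABYLON.LerpBlock("mix");
texA.rgb.connectTo(mix.left);
texB.rgb.connectTo(mix.right);
splitColor.r.connectTo(mix.gradient);

// This sketch is unlit; a lit version would route the blended values into a
// PBR block instead of straight to the fragment output.
const fragOutput = new BABYLON.FragmentOutputBlock("fragOutput");
mix.output.connectTo(fragOutput.rgb);

nodeMat.addOutputNode(vertexOutput);
nodeMat.addOutputNode(fragOutput);
nodeMat.build();
mesh.material = nodeMat;
```

The same pattern extends to roughness/metallic or normal maps by adding more TextureBlock/LerpBlock pairs driven by the same mask.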


Instead of using multiple materials, I now use multiple uv maps and it works perfectly. Thanks for helping me to do it the right way.

Sidenote:
I just had to be careful to move the faces that shouldn't be affected outside of the 0-1 range (for example, to the top right; placing them just next to the range is not enough). This is the only way the clamp address mode works in Babylon. Another small difference between Babylon and Blender; possibly Blender simply ignores all points as soon as one of the coordinates lies outside of 0 and 1.

If you have another second for me: what I don't quite understand is why it is too resource-heavy to use several materials per mesh. In the end, I don't do anything different than before: I give certain faces a texture. I also don't get how the Float32Array works as a UV map. Is one entry assigned to one vertex? Or is one half of the array interpreted as U and the other as V?

@eltitano, I'm glad you found a workaround for your issue, though I don't quite understand what you mean about clamp mode for Babylon. Basically this works the same in any DCC tool: any UV coordinate outside 0-1 space will retrieve its pixel based on the wrap mode of the texture. If the texture is set to wrap, it will get the same pixel as if it were in 0-1 space, and if it is set to clamp in U or V it will receive the closest edge pixel from 0-1 space based on where it is located. I'm not quite following what was different in Blender here.
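For reference, this is roughly what the two address modes look like on the Babylon side (texture path is a placeholder):

```typescript
const tex = new BABYLON.Texture("textures/baseColor.png", scene); // placeholder path

// Clamp: any UV outside 0-1 samples the closest edge pixel of the texture.
tex.wrapU = BABYLON.Texture.CLAMP_ADDRESSMODE;
tex.wrapV = BABYLON.Texture.CLAMP_ADDRESSMODE;

// Wrap (the default): UVs outside 0-1 repeat the texture instead.
// tex.wrapU = BABYLON.Texture.WRAP_ADDRESSMODE;
// tex.wrapV = BABYLON.Texture.WRAP_ADDRESSMODE;
```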

For the question about the assignment per face: from the artist's standpoint it does not look any different, but from the engine's perspective it is very different. We also have to take into account what is doing the rendering for the scene. DCC tools can carry very heavy shaders because, when rendering with any type of ray tracer, the longer render time can absorb the heavier shader. With a real-time renderer, we need to stay at 60 fps for mobile/desktop experiences and higher for HMDs (90 fps for most).

So let's say we want to enable materials by face: what does that mean in terms of file storage and requirements in the shader? Here are the first hurdles we will encounter:

  • We would have to carry another vertex array in the JSON file that denotes which material is assigned to each vertex.
  • The glTF file size could grow quite large, as you could assign a different material per face in a mesh, whereas right now we have the limitation that the size of the materials array will never be larger than the number of meshes in the file. Granted, you can blow this up by saving many meshes into a file, but that is a poorly optimized asset. With unlimited materials per mesh, your mesh data could be very optimized, but we would still need to carry all of the material definitions in the JSON file.
  • The shader would also have to calculate the result from each texture, for each channel in the lighting path, for each material, before we decide which value to pass along to the lighting calculation for every pixel we render.

That last one can be a little confusing, so let me share an experiment we did when deciding whether UDIM texturing was something we wanted to support, which is very similar to material-per-face assignment. This is the playground and shader:

UDIM prototype

What is happening here is that I am testing where the UVs are (0-1, 1-2, 2-3) and then assigning an image based on that location. We still have to read all three textures (for base color, ORM, and normal) and then decide which texture is passed to the lighting node for every pixel we render. So this operation becomes expensive because we have to make this determination for all three texture sets for every visible pixel, 60 times per second. When we talk about the ability to do a per-face material assignment, the shader would have to handle N texture sets, which means we would quickly see an impact on the performance of that shader as more materials are added. And this capability would have to be added to every shader in the engine, since you may want to assign a PBR shader, a Blinn/Phong shader, or an unlit shader to a mesh like this and we have no way to know what the user will want.
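To give a feel for that per-pixel decision, here is a heavily simplified sketch (not the actual prototype shader, base color only, texture paths and names are placeholders) of the kind of branching the fragment shader has to do for every visible pixel:

```typescript
BABYLON.Effect.ShadersStore["udimSketchVertexShader"] = `
    precision highp float;
    attribute vec3 position;
    attribute vec2 uv;
    uniform mat4 worldViewProjection;
    varying vec2 vUV;
    void main(void) {
        vUV = uv;
        gl_Position = worldViewProjection * vec4(position, 1.0);
    }
`;

BABYLON.Effect.ShadersStore["udimSketchFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D tile0; // faces with UVs in 0-1
    uniform sampler2D tile1; // faces with UVs in 1-2
    uniform sampler2D tile2; // faces with UVs in 2-3
    void main(void) {
        // This test runs for every visible pixel, every frame.
        vec4 color;
        if (vUV.x < 1.0) {
            color = texture2D(tile0, vUV);
        } else if (vUV.x < 2.0) {
            color = texture2D(tile1, vUV - vec2(1.0, 0.0));
        } else {
            color = texture2D(tile2, vUV - vec2(2.0, 0.0));
        }
        gl_FragColor = color;
    }
`;

const udimSketch = new BABYLON.ShaderMaterial("udimSketch", scene, "udimSketch", {
    attributes: ["position", "uv"],
    uniforms: ["worldViewProjection"],
    samplers: ["tile0", "tile1", "tile2"],
});
udimSketch.setTexture("tile0", new BABYLON.Texture("textures/tile0.png", scene)); // placeholder
udimSketch.setTexture("tile1", new BABYLON.Texture("textures/tile1.png", scene)); // placeholder
udimSketch.setTexture("tile2", new BABYLON.Texture("textures/tile2.png", scene)); // placeholder
mesh.material = udimSketch;
```

Now imagine the same branching repeated for the ORM and normal texture sets, inside a full lighting calculation, and for N materials instead of three.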

An additional wrinkle here would be if the assignment needed to be a hard cut per face. When rendering, if we know one vertex is assigned to one material and its neighbor is assigned to another, how do we render the pixels in between? With vertex colors, any values between vertices are blended, but if a material needed a hard cut at a face edge, we would also have to test every pixel to know which face it resides in so we can step between materials. All of these extra calculations per pixel per frame add up quickly.

And for the array question: all UVs for a mesh are stored as single-precision (32-bit) floating point numbers, laid out as coordinate pairs in consecutive indices of the array. For example, the first vertex in the mesh's vertex list maps to the first two indices in the UV array. We iterate through the UV array reading pairs of coordinates as floating point values, and that is how we know which coordinates on the texture to return as color in the shader. So each time we read the next UV pair, we don't move to the next index; we move forward two indices to start the next pair.
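In code, that interleaved layout looks like this (assuming an existing `mesh`):

```typescript
// [u0, v0, u1, v1, u2, v2, ...] — two floats per vertex.
const uvs = mesh.getVerticesData(BABYLON.VertexBuffer.UVKind);
if (uvs) {
    for (let i = 0; i < uvs.length; i += 2) {
        const vertexIndex = i / 2;
        const u = uvs[i];
        const v = uvs[i + 1];
        console.log(`vertex ${vertexIndex}: u = ${u}, v = ${v}`);
    }
}
```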

I hope this all helps answer your questions, but please let me know if more questions spring to mind about this.

I do not have the time to read this very long thread. I can say that a mesh can have more than one material; that is when sub-meshes are used. Looks like Patrick corrected himself in a later post.

Also, when I encounter a Blender mesh with more than one material, I double/triple up all the vertices where there is a material edge. The reason is that sub-meshes MUST have contiguous vertices, so a vertex cannot be in more than one. One consequence of duplicating the vertices shared across materials is that I have never seen any gaps.

If I am way off base, please disregard.

@JCPalmer in this thread, when I am speaking of a mesh, I am referring to a single mesh primitive in this sense:

[image: glTF 2.0 specification definition of a mesh primitive]

This definition is taken from the glTF 2.0 specification since this is the path the asset is being loaded in.

In the case of the SubMesh and MultiMaterial classes in Babylon, you can certainly convert these in code, but there is no way to tag vertices when passing through glTF to break up the mesh into SubMeshes and assign MultiMaterial IDs. This would be a manual code step, and a bit of trial and error, as SubMeshes need to be split by vertex ID, which is why in this example you see an uneven break in the materials. Further, MultiMaterial only works with standard materials, so your PBRMaterials that were imported by the glTF loader would need to be converted to standard and you would lose any image-based lighting from the scene.

So while you can certainly create SubMeshes from an imported mesh, you can't use the glTF format to retain any breaks from your DCC tool, and requiring your file to be separated into SubMeshes by code limits your use of that file as a glTF file, which is supposed to be renderer-agnostic.
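For anyone who wants to try that manual step anyway, it looks roughly like this; the index ranges below are arbitrary placeholders, and in a real asset you would need to know where each material break falls in the index buffer:

```typescript
const multi = new BABYLON.MultiMaterial("multi", scene);
multi.subMaterials.push(new BABYLON.StandardMaterial("matA", scene));
multi.subMaterials.push(new BABYLON.StandardMaterial("matB", scene));

const totalVertices = mesh.getTotalVertices();
const totalIndices = mesh.getTotalIndices();

// Rebuild the SubMesh list: SubMesh(materialIndex, verticesStart, verticesCount,
// indexStart, indexCount, mesh). Splitting the index buffer in half here is
// purely illustrative.
mesh.subMeshes = [];
new BABYLON.SubMesh(0, 0, totalVertices, 0, totalIndices / 2, mesh);
new BABYLON.SubMesh(1, 0, totalVertices, totalIndices / 2, totalIndices / 2, mesh);

mesh.material = multi;
```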

Ok. One thing: you can use submeshes and PBR. I do it all the time.

Also, now that I think about it, at least with the Blender glTF exporter, I am pretty sure they are doubling up their overlapping vertices across materials. They break the materials of a mesh into separate meshes, but I know the Blender API really well, and it would be difficult to keep track of when a vertex was previously used in a prior material/mesh and exclude it from this mesh.