@eltitano, I’m glad you found a workaround for your issue, though I don’t quite understand what you mean about clamp mode for Babylon. Basically this works the same in any DCC tool: any UV coordinate assigned outside 0-1 space will retrieve its pixel based on the wrap mode of the texture. If the texture is set to wrap, it will get the same pixel as if it were in 0-1 space, and if it is set to clamp in U or V, it will receive the closest edge pixel from 0-1 space based on where it is located. I’m not quite following what was different in Blender here.
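To put that in code, here is a minimal TypeScript sketch of the two addressing modes; the function names are illustrative, not any engine’s API:

```ts
// WRAP/REPEAT: only the fractional part matters, so a UV of 2.3
// samples the same texel as a UV of 0.3.
function wrapCoordinate(u: number): number {
  return u - Math.floor(u);
}

// CLAMP: anything outside 0-1 sticks to the nearest edge texel.
function clampCoordinate(u: number): number {
  return Math.min(Math.max(u, 0), 1);
}

console.log(wrapCoordinate(2.3));  // ~0.3, same texel as a UV of 0.3
console.log(clampCoordinate(2.3)); // 1, the edge texel of 0-1 space
```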
For the question about the assignment per face, from the artist’s standpoint it does not look any different, but from the engine’s perspective it is very different. We also have to take into account what is doing the rendering for the scene. DCC tools can carry very heavy shaders when rendering with any type of ray tracer, because a longer render time can absorb the cost of the heavier shader. With a real-time renderer, we need to stay at 60 fps for mobile/desktop experiences (about 16.7 ms per frame) and higher for HMDs (90 fps, or about 11.1 ms, for most).
So let’s say we want to enable materials by face. What does that mean in terms of file storage and shader requirements? Here are the first hurdles we will encounter:
- We would have to carry another vertex array in the JSON file that denotes which material is assigned per vertex (see the sketch after this list)
- The glTF file size could grow quite large, since you could assign a different material per face in a mesh, whereas right now we have the guarantee that the number of materials will never be larger than the number of meshes in the file. Granted, you can blow this up by saving many meshes into one file, but that is a poorly optimized asset. With the unlimited-materials-per-mesh method, your asset could be very optimized, but we would still need to carry all of the material definitions in the JSON file.
- The shader would also have to calculate the result from each texture, for each channel in the lighting path, for each material, before deciding which value to pass along to the lighting calculation, and do this for every pixel we render
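To illustrate the storage side of those hurdles, here is a hypothetical sketch of the extra data such a file would need to carry per mesh. This layout is invented for illustration; it is not part of the glTF spec or any engine format:

```ts
// Hypothetical mesh data with a per-vertex material index attribute.
const meshData = {
  positions: [/* x, y, z per vertex */],
  uvs: [/* u, v per vertex */],
  // One extra index per vertex, pointing into the materials array below.
  // Every face can introduce another material, so the materials array
  // (and the file) can grow with the face count instead of the mesh count.
  materialIndices: [0, 0, 0, 1, 1, 1, 2, 2, 2 /* ... */],
  materials: [
    { name: "paint" /* full PBR definition */ },
    { name: "chrome" /* full PBR definition */ },
    { name: "rubber" /* full PBR definition */ },
  ],
};
```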
That last one can be a little confusing, so let me share with you an experiment we did when deciding whether UDIM texturing was something we wanted to support, which is very similar to material-per-face assignment. This is the playground and shader:
UDIM prototype
What is happening here is that I am testing where the UVs are (0-1, 1-2, 2-3) and then assigning an image based on location. We still have to read all three textures (for base color, ORM, and normal) and then decide which texture is the one passed to the lighting node for every pixel we render. So this operation becomes expensive because we have to make this determination for all three texture sets for every visible pixel, 60 times per second. When we talk about the ability to do a per-face material assignment, the shader would have to handle N texture sets, which means we would quickly see an impact on the performance of that shader as more materials are added. And this capability would have to be added to every shader in the engine, since you may want to assign a PBR shader, a Blinn/Phong shader, or an unlit shader to a mesh like this and we have no way to know what the user will want.
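For reference, here is a minimal sketch of that selection pattern as a Babylon.js ShaderMaterial. It is not the actual playground shader linked above (which does this for all three texture sets); the shader name and the tile0/tile1/tile2 samplers are made up for illustration, and it assumes the usual playground scene variable:

```ts
BABYLON.Effect.ShadersStore["udimVertexShader"] = `
  precision highp float;
  attribute vec3 position;
  attribute vec2 uv;
  uniform mat4 worldViewProjection;
  varying vec2 vUV;
  void main() {
    vUV = uv;
    gl_Position = worldViewProjection * vec4(position, 1.0);
  }`;

BABYLON.Effect.ShadersStore["udimFragmentShader"] = `
  precision highp float;
  varying vec2 vUV;
  uniform sampler2D tile0;
  uniform sampler2D tile1;
  uniform sampler2D tile2;
  void main() {
    // Every tile is sampled for every pixel; only afterwards do we pick one.
    vec4 c0 = texture2D(tile0, fract(vUV));
    vec4 c1 = texture2D(tile1, fract(vUV));
    vec4 c2 = texture2D(tile2, fract(vUV));
    float tile = floor(vUV.x); // which range (0-1, 1-2, 2-3) the UV falls in
    gl_FragColor = tile < 0.5 ? c0 : (tile < 1.5 ? c1 : c2);
  }`;

const udimMaterial = new BABYLON.ShaderMaterial("udim", scene,
  { vertex: "udim", fragment: "udim" },
  {
    attributes: ["position", "uv"],
    uniforms: ["worldViewProjection"],
    samplers: ["tile0", "tile1", "tile2"],
  });
```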
An additional wrinkle here would be if the assignment needed to be a hard cut per face. When rendering, if we knew one vertex was assigned to one material and its neighbor was assigned to another, how do we render the pixels in between? With vertex colors, any values between vertices are blended, but if a material needed a hard cut at a face edge, we would also have to test every pixel to know which face it resides in so we can step between materials. All of these extra calculations per pixel per frame add up quickly.
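Here is a small sketch of why interpolation breaks a per-vertex material index; the function and values are invented for illustration:

```ts
// The GPU linearly interpolates vertex attributes across a triangle's surface,
// so a pixel between a vertex assigned material 0 and a vertex assigned
// material 2 receives a fractional index that names no material at all.
function interpolate(a: number, b: number, t: number): number {
  return a + (b - a) * t; // standard attribute interpolation across a face
}

const pixelIndex = interpolate(0, 2, 0.4); // 0.8, not a valid material index
// Getting a hard cut means rounding or testing this value per pixel, plus
// knowing which face the pixel belongs to: exactly the extra per-pixel,
// per-frame work described above.
const snapped = Math.round(pixelIndex); // 1, a material neither vertex asked for
```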
And for the array question, all UVs in a mesh are stored as single-precision (32-bit) floating point numbers. They are stored as coordinate pairs in consecutive indices of the array. For example, if you look at the vertex list in your mesh, the first vertex will have a corresponding position in the UV array, in this case the first two indices. We iterate through the UV array reading pairs of coordinates as floating point values, and that is how we know what coordinates on the texture to return as color in the shader. So each time we read the next UV pair, we don’t advance by one index, we advance by two to reach the start of the next pair.
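As a quick sketch (the array contents are made up for a three-vertex example), reading pairs out of the flat UV array looks like this:

```ts
// Flat Float32Array of UVs: [u0, v0, u1, v1, u2, v2] for three vertices.
// In Babylon.js you would get this array from
// mesh.getVerticesData(BABYLON.VertexBuffer.UVKind).
const uvs = new Float32Array([0.0, 0.0,  1.0, 0.0,  0.5, 1.0]);

for (let i = 0; i < uvs.length; i += 2) { // advance two indices per pair
  const u = uvs[i];
  const v = uvs[i + 1];
  console.log(`vertex ${i / 2}: (${u}, ${v})`);
}
```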
I hope this all helps answer your questions, but please let me know if more questions spring to mind about this.