How to achieve layered occlusion rendering of models and simulate a dressing effect?

If multiple model files (each consisting of several meshes) are loaded into the scene, can they be assigned different levels? For example, model A is level 1 and model B is level 2, and model B wraps model A, so B acts as the outer layer of A. Model A should only be rendered inside model B; any part that pokes through should not be rendered. My original plan was to traverse the vertices of the inner model, cast a ray outward from each vertex to find its intersection with the outer model, and adjust the vertex position accordingly. But the models have far too many vertices, so this is very time-consuming.

Hello and welcome to the Babylon community!
Can you describe what you want to do in more detailed steps?

For example, I load a shirt model and then a vest model. The vest is slightly smaller, so the part of the shirt wrapped by the vest needs to be occluded, or its vertices need to be automatically pushed inward, so that no part of the shirt shows through the vest.




Wow. You understand this goes against the rules, yes? Like in real life, the shirt is smaller and sits under the vest, yes? Still, you could try to work around it using things like ‘layerMask’ and/or ‘renderingGroupId’. There are other techniques too, with bounding boxes, displacing vertices, shaders on materials, and probably others… but they all have an impact on performance and handling. So I guess my best advice would be to make it right from the start == edit the clothing in your 3D editor so that the shirt is actually under the vest (as it should be).
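To illustrate the ‘renderingGroupId’ idea: Babylon.js renders meshes group by group in ascending id and, by default, clears the depth buffer between groups, so a higher group draws on top wherever it has pixels. Here is a minimal plain-JavaScript mock of that ordering (a conceptual sketch, not the real engine code):

```javascript
// Conceptual mock of rendering groups: meshes are drawn in ascending
// renderingGroupId, with a depth clear between groups, so a later group
// always wins the depth test where it has coverage on screen.
function renderByGroups(meshes) {
  const order = [];
  const groups = [...new Set(meshes.map(m => m.renderingGroupId))].sort((a, b) => a - b);
  for (const g of groups) {
    order.push(`clearDepth(group ${g})`);
    for (const m of meshes.filter(m => m.renderingGroupId === g)) {
      order.push(`draw ${m.name}`);
    }
  }
  return order;
}

const meshes = [
  { name: "vest",  renderingGroupId: 1 }, // outer garment: higher group
  { name: "shirt", renderingGroupId: 0 }, // inner garment: lower group
];

console.log(renderByGroups(meshes));
// ["clearDepth(group 0)", "draw shirt", "clearDepth(group 1)", "draw vest"]
```

The catch, as the rest of the thread discusses, is that the vest only wins where it actually covers pixels on screen: shirt geometry poking outside the vest’s silhouette is still visible.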

Because I want the outer model to act as the boundary of the inner model, so that I can layer clothing freely.

When I just want to see the shirt, it should not look compressed as if wrapped by the vest, but natural and full. That way, in the future, I can add a suit jacket or other clothes, and I can also switch between outfits. If the models had to be redone for each combination of sizes, far too many models would need to be made.

I understand. Makes sense. To be honest, I don’t know how to do it properly, so that the shirt still shows where there is no vest above it. Might be quite complex to implement. Let me call in some people who might know the best solution for this: @carolhmj @PolygonalSun @Cedric. Could you guys have a look at the above :pleading_face: and propose your automagical :magic_wand: solution for it? :wink: Thanks in advance, appreciated :smiley:

I would try to tweak zOffset.

It’s part of the material, so you can change it based on your model selection.
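For context on what zOffset does: it is a depth bias (similar to glPolygonOffset), which shifts a fragment’s depth before the depth test rather than clipping geometry. A plain-JavaScript sketch of the idea; the bias scale here is illustrative, not Babylon’s actual formula:

```javascript
// Conceptual sketch of a material zOffset: it biases a fragment's depth
// before the depth test, so one surface can be pushed "behind" another
// even when their raw depths are nearly equal (z-fighting territory).
// Plain JavaScript for illustration, not the Babylon.js API.
function depthTestWinner(fragA, fragB) {
  // Smaller biased depth wins (closer to the camera); the 1e-4 bias
  // scale is an assumption chosen for this demo.
  const depthA = fragA.depth + fragA.zOffset * 1e-4;
  const depthB = fragB.depth + fragB.zOffset * 1e-4;
  return depthA <= depthB ? fragA.name : fragB.name;
}

const shirt = { name: "shirt", depth: 0.50001, zOffset: 0 };
const vest  = { name: "vest",  depth: 0.50002, zOffset: 0 };

console.log(depthTestWinner(shirt, vest)); // "shirt" – marginally closer

// Pushing the shirt back with a positive zOffset lets the vest win:
shirt.zOffset = 2;
console.log(depthTestWinner(shirt, vest)); // "vest"
```

Note that a depth bias only changes who wins at overlapping pixels; it cannot hide geometry that lies outside the vest’s silhouette on screen, which is the problem reported in the next reply.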


The attempt still fails, because protruding vertices are still rendered normally. Viewed from the front, the shirt model at the waist is larger than the vest, so it still pokes through.

Yes, which is why I happily handed the issue over to others, and also why I said it’s “against the rules”. It’s not just a blending problem: part of the vest really is under the shirt. You may need to find some alternative way to fix it. Maybe simply play with the limits of the human eye: when importing a vest, just scale your shirt model to 0.99 or 0.98 or something. If the inner arm openings of the vest are large enough, it might just work.
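The 0.98 trick can also be applied per vertex rather than via mesh.scaling, shrinking the shirt toward its own centroid so the garment’s center stays put. A sketch in plain JavaScript on a flat position array like the one mesh.getVerticesData(BABYLON.VertexBuffer.PositionKind) returns; the factor and data are illustrative:

```javascript
// Shrink every vertex toward the mesh's centroid by a given factor,
// so the shirt sits just inside the vest. Works on a flat
// [x0, y0, z0, x1, y1, z1, ...] positions array.
function shrinkTowardCenter(positions, factor) {
  // 1. Compute the centroid of all vertices.
  let cx = 0, cy = 0, cz = 0;
  const count = positions.length / 3;
  for (let i = 0; i < positions.length; i += 3) {
    cx += positions[i]; cy += positions[i + 1]; cz += positions[i + 2];
  }
  cx /= count; cy /= count; cz /= count;

  // 2. Move each vertex toward the centroid by (1 - factor).
  const out = [];
  for (let i = 0; i < positions.length; i += 3) {
    out.push(cx + (positions[i]     - cx) * factor,
             cy + (positions[i + 1] - cy) * factor,
             cz + (positions[i + 2] - cz) * factor);
  }
  return out;
}

// Two vertices on a unit line around the origin:
console.log(shrinkTowardCenter([1, 0, 0, -1, 0, 0], 0.98));
// [0.98, 0, 0, -0.98, 0, 0]
```

For a uniform garment, simply setting mesh.scaling to 0.98 achieves the same visual effect more cheaply; the per-vertex version only matters if you later want non-uniform indentation.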

I wouldn’t discard the ray-casting technique too quickly… but you could modify it so that it doesn’t cast a ray from every vertex: only cast when a vertex is far enough away from an already-tested one. For every other vertex, LERP between the values of the nearest casts. It’ll be much faster and, depending on the distances between cast points you choose, will probably meet your needs. Only testing will tell. In addition to skipping vertices that are “too close” to an already-tested vertex, you could also compare normals and cast again where they are angled differently.
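The sampling-plus-LERP idea above might look like the sketch below. Here castRay is a stand-in for a real scene query (e.g. scene.pickWithRay in Babylon.js), and interpolating along vertex index order is a simplification of what a real mesh topology would need:

```javascript
// Sparse ray casting: cast only from "anchor" vertices at least
// minSpacing apart, then LERP the measured offsets for the vertices
// in between. castRay(vertex) -> number is an assumed stand-in for a
// real scene ray cast returning the distance to the outer garment.
function computeOffsets(vertices, castRay, minSpacing) {
  const dist = (a, b) => Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);

  // 1. Pick anchors: the first vertex, then any vertex far enough
  //    from the previous anchor.
  const anchors = [];
  let lastAnchorPos = null;
  vertices.forEach((v, i) => {
    if (lastAnchorPos === null || dist(v, lastAnchorPos) >= minSpacing) {
      anchors.push(i);
      lastAnchorPos = v;
    }
  });

  // 2. Cast rays only at the anchors.
  const anchorOffsets = new Map(anchors.map(i => [i, castRay(vertices[i])]));

  // 3. LERP every non-anchor vertex from its surrounding anchors.
  const offsets = new Array(vertices.length);
  for (let i = 0; i < vertices.length; i++) {
    if (anchorOffsets.has(i)) { offsets[i] = anchorOffsets.get(i); continue; }
    const prev = anchors.filter(a => a < i).pop();
    const next = anchors.find(a => a > i);
    if (next === undefined) { offsets[i] = anchorOffsets.get(prev); continue; }
    const t = (i - prev) / (next - prev);
    offsets[i] = anchorOffsets.get(prev) * (1 - t) + anchorOffsets.get(next) * t;
  }
  return { offsets, raysCast: anchors.length };
}

// Five vertices along a line, with a fake cast for illustration:
const verts = [[0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0], [0.4, 0, 0]];
const fakeCast = v => v[0] * 10; // pretend the vest gets farther as x grows
const { offsets, raysCast } = computeOffsets(verts, fakeCast, 0.3);
console.log(raysCast); // 2 rays instead of 5
console.log(offsets);  // interpolated values for the skipped vertices
```

A production version would also cast again where vertex normals diverge, as suggested above, since interpolation is only safe across smoothly varying surface regions.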

The other method would be to try weighted skeletons.