I plan to build an open-world game set in a large park nearby. I am going to model the park using available height map data.
Much of the environment will be generated or use simple textures, but there are a few physical objects I have modelled with video capture that I will incorporate to add realism and familiarity, like large rocks or tree stumps. The models have ragged edges that I would like to blend into the surrounding meshes somehow, to make the transition from hyper-real to generated less jarring and more natural.
Is there a standard way to approach this? I don’t want to use fog, as that is too drastic an effect.
Hello and welcome to the Babylon community! That is a pretty interesting question.
I think I’d look at post-processing for this: Using the Default Rendering Pipeline | Babylon.js Documentation (babylonjs.com). Maybe mixing in a tiny bit of blur, bloom, and grain to blend everything together to the viewer’s eye.
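As a starting point, something like this minimal sketch might work (the values are just guesses to tune by eye, and `scene` and `camera` are assumed to already exist):

```js
// Sketch: a DefaultRenderingPipeline with a touch of bloom, grain, and a low
// depth-of-field blur to visually tie scanned and generated content together.
// All values here are starting points to adjust by eye.
const pipeline = new BABYLON.DefaultRenderingPipeline("blend", true, scene, [camera]);

pipeline.bloomEnabled = true;
pipeline.bloomThreshold = 0.9;   // only the brightest pixels bloom
pipeline.bloomWeight = 0.15;     // keep it subtle

pipeline.grainEnabled = true;
pipeline.grain.intensity = 8;
pipeline.grain.animated = true;  // animated grain reads like film noise

pipeline.depthOfFieldEnabled = true;
pipeline.depthOfFieldBlurLevel = BABYLON.DepthOfFieldEffectBlurLevel.Low;
pipeline.depthOfField.focusDistance = 10000; // in millimeters, i.e. focus at ~10 m
```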
@PatrickRyan might have good ideas too.
Thanks for the ideas! It occurred to me that maybe there is a way to make the outer edges of the mesh 50% transparent?
That’s an interesting approach; I think you could try that too.
@drittich, if you want to chase down blending the meshes by making the edges slightly transparent, you can set up a fresnel in NME that multiplies a very small fresnel mask by whatever value you want for your transparency (0.5 in your example) and passes that into the alpha of the FragmentOutput block. However, this will really impact the draw calls of your scene: every mesh with this shader needs two draw calls, one to render the pixels behind the areas with alpha and a second to blend the pixels with alpha over those background pixels. If several layers of transparency stack on top of one another, that adds up to a lot of extra draws.
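As a rough sketch of the same idea outside of NME, StandardMaterial’s built-in fresnel parameters can do something similar; this is not the node graph described above, and the left/right color split and power value are illustrative and may need flipping or tuning for your assets:

```js
// Rough sketch using StandardMaterial's built-in fresnel support: fade alpha
// toward ~0.5 at grazing angles (the silhouette) while staying opaque where the
// surface faces the camera. Values are illustrative; `scanMesh` is assumed to be
// the imported scan.
const mat = new BABYLON.StandardMaterial("scanEdgeFade", scene);

mat.opacityFresnelParameters = new BABYLON.FresnelParameters();
mat.opacityFresnelParameters.leftColor = new BABYLON.Color3(0.5, 0.5, 0.5); // ~50% alpha toward the edges
mat.opacityFresnelParameters.rightColor = BABYLON.Color3.White();           // fully opaque facing the camera
mat.opacityFresnelParameters.power = 4;                                     // higher power keeps the fade to a thin rim

scanMesh.material = mat; // fresnel opacity forces alpha blending, so the draw-call cost above still applies
```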
The best way to approach mixing scanned assets with modeled assets is to take care of the jagged edges of the scans through remeshing. This can be a more painstaking process, but it will always produce the best results, ones that don’t depend on a specific placement or post-process. You also often end up with a much more manageable mesh in the end.
I spent quite a bit of time working on 3D scanning for Microsoft, and the best process was always to mesh a very large scan (several million triangles) and then take it into an application like ZBrush to delete anything unwanted, since with that many triangles you end up with a relatively clean edge. Then you can duplicate and remesh to a game-res model and create UVs so that the vertex color can be baked to a texture on the low-res mesh. For example, this is a scan of a life-size statue that used to be in the building I worked in:
You can see that I cleaned up the base so it sits flat on any ground plane. But I was not able to get all the way around the statue, so I just removed the faces that weren’t needed:
This was a very quick and dirty scan with only a single 180-degree rotation pass at one height. If we had passed a lot more poses into the solve, we could have gotten a better scan without nearly as much noise. So I would suggest putting more effort into the scans to get them to resolve cleanly without the blending and post-process tricks, as that opens up more scenarios for moving them around or reusing them in other scenes. The effort you put in up front will pay dividends later.
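Once a scan is remeshed and baked down like this, reusing it in Babylon is the easy part; here is a minimal sketch, assuming the cleaned asset was exported to glTF (the file name, path, and transforms below are made up):

```js
// Minimal sketch of placing a cleaned, baked-down scan in the scene; the path and
// file name are hypothetical and the transforms are arbitrary.
BABYLON.SceneLoader.ImportMeshAsync("", "assets/", "stump_lowres.glb", scene).then((result) => {
    const stump = result.meshes[0]; // root node of the imported scan
    stump.position = new BABYLON.Vector3(12, 0, -4);
    // Rotate via quaternion to avoid conflicting with any rotationQuaternion the loader sets on the root.
    stump.rotationQuaternion = BABYLON.Quaternion.RotationAxis(BABYLON.Vector3.Up(), Math.PI / 3);
});
```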
Thanks @PatrickRyan - that’s amazing input, and very similar to what I’m trying to do. I can definitely get cleaner models, so I’ll go back and take some more video and try the workflow you suggested with ZBrush.