Hello Babylon.js community,
I recently came across an interesting GitHub repository called FaceCap (GitHub - imerso/facecap: Babylon.js + Mediapipe face capture) that captures a user’s face with MediaPipe and generates a facemesh representation. Using FaceCap, I captured a user’s face and obtained the resulting facemesh.
Now I’m struggling to apply this facemesh, which represents the user’s face, onto the face of a human GLB model in Babylon.js. I tried replacing the model’s face mesh with the captured one, but two problems appeared: there are visible gaps between the head and the face mesh, and the facemesh’s size varies with the user’s distance from the camera.
I’m seeking guidance from the community on how to seamlessly integrate the facemesh onto the GLB model’s face mesh. Specifically, I need to close the gaps between the head and the face mesh, and keep the scale consistent regardless of how far the user is from the camera.
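To make the scaling problem concrete, here is a minimal sketch of the kind of normalization I have in mind: rescale all captured vertices so that the distance between two stable landmarks matches a fixed target, which should make the facemesh size independent of camera distance. The eye-corner indices 33 and 263 (from MediaPipe’s 468-landmark face topology) and the function names are assumptions for illustration, not FaceCap’s actual API.

```javascript
// Euclidean distance between two landmarks in a flat
// [x0, y0, z0, x1, y1, z1, ...] position array.
function landmarkDistance(positions, i, j) {
  const dx = positions[i * 3] - positions[j * 3];
  const dy = positions[i * 3 + 1] - positions[j * 3 + 1];
  const dz = positions[i * 3 + 2] - positions[j * 3 + 2];
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Uniformly rescale the facemesh so the distance between two
// reference landmarks (e.g. outer eye corners 33 and 263 in
// MediaPipe's topology -- an assumption) equals targetDist.
// Returns a new flat position array; the input is not mutated.
function normalizeFaceScale(positions, refA, refB, targetDist) {
  const current = landmarkDistance(positions, refA, refB);
  const scale = targetDist / current;
  return positions.map((v) => v * scale);
}
```

The idea would be to feed the normalized array back into the Babylon.js mesh via `mesh.setVerticesData(BABYLON.VertexBuffer.PositionKind, scaled)` each frame, and choose `targetDist` to match the GLB model’s own eye-corner distance so the facemesh lands at the right size. Is this a reasonable direction, or is there a better built-in way?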
Any insights, suggestions, or techniques to achieve a realistic and accurate application of the captured user’s face onto the human GLB model would be highly appreciated.
Thank you in advance for your support and expertise!