Creating uv map from vertices in camera viewport

Hello everyone,

I have some ideas for which I want to use dynamic textures. To achieve what I want I need to create a uv map from the (visible) vertices on the screen. The geometry drawn on the plane gets close to the object in the viewport, but the uv map doesn’t wrap the object correctly.

If you rotate the object in this pg, the uv map gets updated automatically. As you can see, the dots are not at the correct positions. I want them to sit exactly on every vertex. The perspective of the “cube” on the plane is also a little bit different from the cube in the viewport.

Please give me a hint what’s wrong here.


I have no idea what you are trying to achieve here :frowning: Could you provide a bit more info about the algo or process you are trying? It might help others troubleshoot it as well.

My goal is to place a texture on the object that behaves like an overlay regardless of the initial UV map and is displayed at the correct position. The texture is generated by several pickedPoints of the object, so it is not just an image. The problem is that with a static UV map the texture is not displayed correctly at the edges.

Since the object will always be displayed as a 2D graphic in the end, I thought I would take the coordinates of the vertices in the canvas depending on the camera perspective and use them for the UV map. Non-visible faces are not relevant for my texture and should actually be neglected and not displayed on the dynamicTexture.
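One way to neglect the back faces (a hypothetical sketch, assuming counter-clockwise winding for front faces): after projecting a face’s three vertices to screen space, the sign of the triangle’s signed area tells you whether it faces the camera, so back-facing triangles can simply be skipped when drawing onto the dynamicTexture. Note this does not handle front faces occluded by other front faces.

```javascript
// Hypothetical helper: signed area of a triangle after projection to 2D
// screen coordinates. With counter-clockwise front faces, back-facing
// triangles come out with a negative area and can be skipped.
function signedArea(a, b, c) {
    return 0.5 * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

function isFrontFacing(a, b, c) {
    return signedArea(a, b, c) > 0;
}
```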

The canvas is only used here to demonstrate the UV map. Actually all points on the cube should be exactly on the corners/vertices and not distort. But I realized that with this approach the distortion will always occur.

I am open for other approaches to place a 2D dynamicTexture exactly on the model without distortion.

I am running out of ideas here as without unwrapping I am not sure how this could be achieved.

Maybe @PatrickRyan would have an idea with a custom shader/postprocess ?

I would go with a dynamic unwrapping function. I tried that by setting the uv coordinates to the canvas layerX and layerY of each vertex. If my function can be tweaked so that the faces are flat on the uv map, the distortion in the 3d view should be gone. Is my idea possible or am I completely wrong?

Can’t you create the uv coords in Blender and import the uv-mapped cube in Babylon?

No, this should be done without prior unwrapping. And static unwrapping doesn’t work for me. The main issue is if an area of the 3D surface is split in the uv map, the discontinuity becomes visible when drawing across the seam.

This problem should be solved if I uv map the visible faces of the object according to their pixel positions on the screen. If the distortion could be handled the discontinuity won’t be a problem anymore if the uv map gets updated on every camera movement.

I added a texture checker to the cube. You can clearly see the distortion.

I found a pg you created, @sebavan. It shows the result I am looking for. It should look like this just without a plane and adjusted alpha but dynamically recalculating the uv coordinates.

@samevision, when you are talking about projecting textures that ignore the Mesh UV space, one of the techniques that comes to mind is triplanar projection. I had another user ask about this, so I put together this playground with a custom node material shader to do just that. This will get you a step closer, but you will need to drill into how you are creating your dynamic texture. If nothing else, this will give you some ideas about being able to project a texture while ignoring the UV space.
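The playground does this per-pixel in a node material, but the core idea is simple enough to sketch in plain JS (names here are illustrative, not taken from the playground): the absolute world normal decides how strongly each axis-aligned projection contributes, and each projection uses the other two world-position components as uv coordinates.

```javascript
// Illustrative sketch of triplanar blending. Blend weights come from the
// absolute world normal, normalized so they sum to 1.
function triplanarWeights(normal) {
    const ax = Math.abs(normal.x);
    const ay = Math.abs(normal.y);
    const az = Math.abs(normal.z);
    const sum = ax + ay + az;
    return { x: ax / sum, y: ay / sum, z: az / sum };
}

// Each axis projection ignores that axis and uses the other two
// position components as uv coordinates.
function triplanarUvs(position) {
    return {
        x: [position.y, position.z], // projection along X
        y: [position.x, position.z], // projection along Y
        z: [position.x, position.y], // projection along Z
    };
}
```

A face pointing straight up gets all its weight from the Y projection, so no UV unwrap of the mesh is ever consulted.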

Thanks, @PatrickRyan! Good to see that there are different solutions. I will play around with it, let's see how far I can go. But I think it would be much more complicated to create the overlay with a shader like this. Actually I'm fine with using uv maps; I only want to be independent of preparing a uv map with third-party software.

I am still convinced that it should be possible to update the uv map according to the viewport and camera perspective. I found a code snippet from a three.js example which attempts the same thing. It is not perfect, but it gets closer to the overlay than my approach:

for ( var i = 0; i < convex.geometry.faceVertexUvs[0].length; i++ ) {
    // project each face vertex into clip space, remap to [0, 1], write as uv
    v0.copy( geo.vertices[ geo.faces[ i ].a ] );
    v0.applyMatrix4( convex.matrixWorld ).project( camera );
    v0.divideScalar( 2 ).addScalar( 0.5 );
    geo.faceVertexUvs[0][i][0].set( v0.x, v0.y );

    v0.copy( geo.vertices[ geo.faces[ i ].b ] );
    v0.applyMatrix4( convex.matrixWorld ).project( camera );
    v0.divideScalar( 2 ).addScalar( 0.5 );
    geo.faceVertexUvs[0][i][1].set( v0.x, v0.y );

    v0.copy( geo.vertices[ geo.faces[ i ].c ] );
    v0.applyMatrix4( convex.matrixWorld ).project( camera );
    v0.divideScalar( 2 ).addScalar( 0.5 );
    geo.faceVertexUvs[0][i][2].set( v0.x, v0.y );
}
geo.uvsNeedUpdate = true;
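The `divideScalar( 2 ).addScalar( 0.5 )` step is what remaps the projected coordinates into uv range. As a plain-JS sketch of just that mapping:

```javascript
// After .project(camera), x and y are in normalized device coordinates
// (NDC), ranging over [-1, 1]. UV coordinates range over [0, 1], so each
// component is remapped with x / 2 + 0.5.
function ndcToUv(x, y) {
    return { u: x / 2 + 0.5, v: y / 2 + 0.5 };
}
```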

The technique is likewise to project each vertex of the object and update the uvs on every frame or camera movement. I am not sure how to adapt this to get rid of the distortion:

function updateUV(mesh, scene, camera, engine) {
    const vertices = getVertices(mesh)

    // collect the screen-space position of every vertex
    let xs = []
    let ys = []
    vertices.forEach((v, i) => {
        const uv = getScreenCoords(v, scene, camera, engine)
        if (i === 0) console.log(uv)
        xs = [...xs, uv.x]
        ys = [...ys, uv.y]

    // shift so the smallest coordinate sits at 0
    const xMin = Math.min(...xs)
    const yMin = Math.min(...ys)
    xs = => x - xMin)
    ys = => y - yMin)

    // interleave into [u0, v0, u1, v1, ...] and scale into [0, 1]
    let uvs = []
    for (let i = 0; i < xs.length; i++) uvs = [...uvs, xs[i], ys[i]]

    const max = Math.max(...uvs)
    uvs = => uv / max)

    mesh.setVerticesData(BABYLON.VertexBuffer.UVKind, uvs)

    return uvs

function getScreenCoords(vector, scene, camera, engine) {
    // note: Matrix.Identity() as the world matrix assumes the mesh is not
    // transformed; use mesh.getWorldMatrix() otherwise
    return BABYLON.Vector3.Project(
        BABYLON.Matrix.Identity(),
        camera.viewport.toGlobal(engine.getRenderWidth(true), engine.getRenderHeight(true))