Is it possible to bake / distort texture from uv to uv2?

I have a mesh with two different UV maps. I load a texture to uv and would like to bake it to uv2, then save the newly distorted texture to a file.

Are there any existing methods of doing this within the framework? Thank you

A more precise description would help.

I load a texture to uv and would like to bake it to uv2, then save the newly distorted texture to a file.

How can you load a texture to a uv? Do you mean use the mesh’s UV set 1, then switch to UV set 2 and save? Save what, and in what format?


I have reworded my question; I hope it is clearer now.

I have a mesh with two different UV maps. The first UV map (UV1) is designed to take a seamless texture, meaning it maps the texture coordinates in a way that the texture can tile seamlessly across the surface of the mesh without visible seams. The second UV map (UV2) is created using a method similar to Blender 3D’s Smart UV Project. This technique automatically unwraps the mesh to generate a UV map that minimizes distortion and overlapping, but it doesn’t necessarily produce a seamless mapping.

My goal is to load a seamless texture into the first UV map (UV1), then bake this texture onto the second UV map (UV2). After baking, the texture will be distorted to fit UV2’s layout. Finally, I want to save this newly distorted texture to a bitmap.

How could any algorithm know which particular part of the texture you want to map to a given face?

That’s why Blender has the UV Editor. You can use Smart unwrap (there are better approaches) to unwrap the surface of the mesh. Add the texture to the UV Editor and edit the UV islands, transforming their position, size, and rotation so they cover the correct part of the texture.

This is technically possible, but requires a little knowledge of shaders and render targets (a code sketch follows these steps):

  1. Use a render target texture, set its camera to orthographic, and set it to render only once.
  2. Set the camera’s ortho bounds to cover -1 to 1 (orthoLeft, orthoRight, orthoTop, orthoBottom).
  3. Create a shader that samples the texture with uv1 and outputs uv2, remapped from [0, 1] to [-1, 1], as the vertex position.
  4. Wait for the scene to be ready, then refresh the render target’s render count.
  5. Wait for the render target’s onAfterRender.
  6. The render target now contains what you need; you can download it or use it as a texture in a material.
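
Here is a minimal sketch of those steps in Babylon.js. It assumes an existing `scene` with an active camera, a `mesh` carrying both UV sets, and a `seamlessTexture` already loaded for UV1; the `uvBake` shader name and the 1024 size are arbitrary choices, and `readPixels` / `Tools.DumpData` require a reasonably recent Babylon.js version:

```js
// Vertex shader: flatten the mesh by placing each vertex at its UV2
// position, remapped from [0, 1] to clip space [-1, 1]. Because we write
// clip-space coordinates directly, the orthographic camera from steps 1-2
// is effectively bypassed here.
BABYLON.Effect.ShadersStore["uvBakeVertexShader"] = `
    precision highp float;
    attribute vec2 uv;   // UV1: where the source texture is mapped
    attribute vec2 uv2;  // UV2: the layout we bake into
    varying vec2 vUV;
    void main(void) {
        vUV = uv;
        gl_Position = vec4(uv2 * 2.0 - 1.0, 0.0, 1.0);
    }`;

// Fragment shader: sample the source texture with UV1.
BABYLON.Effect.ShadersStore["uvBakeFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D textureSampler;
    void main(void) {
        gl_FragColor = texture2D(textureSampler, vUV);
    }`;

const bakeMaterial = new BABYLON.ShaderMaterial("uvBake", scene, "uvBake", {
    attributes: ["uv", "uv2"],
    uniforms: [],
    samplers: ["textureSampler"],
});
bakeMaterial.setTexture("textureSampler", seamlessTexture);
bakeMaterial.backFaceCulling = false;

// Render the mesh once into a 1024x1024 target with the bake material.
const rtt = new BABYLON.RenderTargetTexture("uvBakeRTT", 1024, scene, false);
rtt.renderList = [mesh];
rtt.setMaterialForRendering(mesh, bakeMaterial);
rtt.refreshRate = BABYLON.RenderTargetTexture.REFRESHRATE_RENDER_ONCE;
scene.customRenderTargets.push(rtt); // renders once on the next frame, after the scene is ready

// After the target has rendered, read the pixels back and download them as a PNG.
rtt.onAfterRenderObservable.addOnce(() => {
    rtt.readPixels()?.then((pixels) => {
        BABYLON.Tools.DumpData(1024, 1024, pixels, undefined, "image/png", "baked.png", true);
    });
});
```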


I also know technically how to accomplish it. The question is how this will create a meaningful result.

I’m not a native English speaker, so my explanation may be a bit unclear.
Using uv2 as the vertex coordinates effectively flattens the model onto a square. The camera then “bakes” the rendered result, which produces a meaningful image.


In this case uv1 must be properly mapped to the texture. If that’s the case, why bother with creating the texture for uv2 in Babylon.js? That’s why I think uv1 is not mapped correctly.

@babbleon can you give us a more abstract description of what you want to achieve? Thanks!

You’re right, there’s usually no need to bake a uv2 if uv1 is already working properly. I think he’s probably building some kind of editor, like baking AO or lightmaps.


A simple demo, showing the effect of this uv-expansion. The rest of the details are for you to explore.


[Images: original texture, UV1 layout, UV2 layout, and the baked texture from Babylon.js]

Needs some tuning though :wink:

Yes! This is precisely what I was looking for. Thank you very much for your help!


One reason to do this kind of remapping is to remove a dependency on the KHR_texture_transform extension (for repeating a texture across the model) by baking the repeat into a non-repeating “atlas” UV layout (all UVs within the 0-1 space, with no overlaps).
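
If UV1 repeats the texture by running outside the 0-1 range, the source texture needs a repeating wrap mode during the bake so the tiling survives into the atlas. A one-line sketch, reusing the `seamlessTexture` name assumed in the earlier snippet:

```js
// Repeat the source texture when UV1 samples outside 0-1
// (WRAP_ADDRESSMODE is Babylon.js's repeat mode and the default for most textures).
seamlessTexture.wrapU = BABYLON.Texture.WRAP_ADDRESSMODE;
seamlessTexture.wrapV = BABYLON.Texture.WRAP_ADDRESSMODE;
```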

I don’t know if this is why the OP is using this method, but I’ve done so (with other applications) for this specific reason.


This makes a lot of sense. Thanks for sharing!