Anyone understand how this works in Blender?
How does he generate a texture to wrap the mesh exactly? Think we can do something similar in Babylon?
Hmmm, I didn't know this project. I read the dream-textures/TEXTURE_PROJECTION.md at main · carson-katri/dream-textures (github.com) docs and watched this video: BSLIVE Blender Dream Texture Update - Depth To Image Projection - YouTube to see what it's doing, and it's basically this:
1 - Generate the Stable Diffusion image from the prompt and extract depth information from it (the docs mention there are a few depth models to choose from); see the sketch right after this list.
2 - Generate a UV projection of the selected geometry that matches the perspective of the generated image, based on that depth information. You can see the generated UVs here and how they try to match the perspective of the building.
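For step 1, here's a minimal sketch of the depth-extraction part, assuming the Hugging Face transformers "depth-estimation" pipeline. Intel/dpt-large is just one common depth model, not necessarily the one dream-textures uses, and the file name is a placeholder:

```python
from transformers import pipeline
from PIL import Image

# Monocular depth estimation from a single generated image.
# "depth-estimation" is a standard transformers pipeline task.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("generated_texture.png")  # hypothetical SD output
result = depth_estimator(image)

# result["depth"] is a PIL image of per-pixel depth, ready to save or reuse.
result["depth"].save("depth_map.png")
```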
That's very much doable in Babylon, with a lot of ~mathz~ (sketched below). It's an interesting idea; @PirateJC might be interested, as he's been thinking about AI and Babylon these past weeks.
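For step 2, the core math is a standard perspective projection: push every world-space vertex through the generating camera's view-projection matrix and reuse the normalized screen coordinates as UVs, so the texture lines up exactly from that viewpoint. A minimal numpy sketch of that idea (in Babylon, a per-vertex Vector3.Project call would do the same job):

```python
import numpy as np

def project_uvs(vertices: np.ndarray, view_proj: np.ndarray) -> np.ndarray:
    """Map world-space vertices (N, 3) to UVs (N, 2) by projecting them
    through the 4x4 view-projection matrix of the camera that "took"
    the generated image."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous (N, 4)
    clip = homo @ view_proj.T                                  # to clip space
    ndc = clip[:, :3] / clip[:, 3:4]                           # perspective divide -> [-1, 1]
    uvs = (ndc[:, :2] + 1.0) / 2.0                             # remap to [0, 1]
    uvs[:, 1] = 1.0 - uvs[:, 1]  # flip V if the texture's origin is top-left
    return uvs
```

Vertices behind the camera or outside the frustum land outside [0, 1] and would need clamping or a separate material, but for geometry framed in the view this is the whole trick.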
Hm, what I see in this video you posted is more how I'd expected it to work… like a random projection of a generated image onto some geometry. But in these demos by the creator, somehow he's getting Stable Diffusion to create an image that matches the geometry of the model. I'm assuming he's using the model itself to prompt SD? Need to look into that code when I get a moment.
But see the awesome demos here and on his Twitter feed:
I thought this YT vid is a decent explanation: Dream Textures - New Blender A.I Tool For All! - YouTube
Ah, here's the diffusers pipeline. Got to study this! Carson says:
It's very similar to the inpainting model, except instead of concatenating the latents, mask, and masked image latents, you concatenate the latents and a depth map.
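If I'm reading that right, it's the same scheme diffusers ships as StableDiffusionDepth2ImgPipeline: the depth model's UNet takes 5 input channels (4 latent channels + 1 depth channel) instead of the usual 4. A minimal usage sketch, assuming the stabilityai/stable-diffusion-2-depth weights; the prompt and file names are placeholders:

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

# Depth-conditioned Stable Diffusion: latents + depth map are concatenated
# as the UNet input, as Carson describes above.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("viewport_render.png")  # hypothetical render of the scene
out = pipe(
    prompt="old brick building, large windows, photorealistic",
    image=init,      # the scene render as the initial image
    strength=0.8,    # how far to move away from the init image
).images[0]
out.save("projected_texture.png")
```

Note the pipeline also accepts an explicit depth_map argument; if you leave it out, it estimates depth from the init image itself.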
But still, I wonder why it works great in Carson's demo but doesn't seem to match the geometry in other demos. I've got to imagine it's more than a depth map, b/c a depth map wouldn't be able to specify "render this specific part as a window and this other part as a wall", right?
Ah, some more explanation: "It works by using the scene as an initial image, then projects generated images from several views back onto the object, assigning a separate material for each angle."