Dream Textures: Stable Diffusion texture wraps in Babylon?

Anyone understand how this works in Blender?

How does he generate a texture that wraps the mesh exactly? Think we can do something similar in Babylon?

Hmmm I didn't know this project :open_mouth: I read the dream-textures/TEXTURE_PROJECTION.md at main · carson-katri/dream-textures (github.com) docs, and saw this video: BSLIVE Blender Dream Texture Update - Depth To Image Projection - YouTube to see what it's doing, and it's basically this:

1 - Generate the Stable Diffusion image from the prompt and extract depth information from it (the docs mention that there are a few depth models)
2 - Generate a UV projection of the selected geometry that matches the perspective of the generated image, based on that depth information. You can see the generated UVs here and how they try to match the perspective of the building:
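Step 2 is basically camera-space projection mapping: run each vertex through the generating camera's view-projection matrix and use the resulting screen position as its UV, so the texture lines up from that viewpoint. A minimal numpy sketch of the idea (the function name and the toy projection matrix are my own, not the add-on's code):

```python
import numpy as np

def camera_projected_uvs(vertices, view_proj):
    """Project world-space vertices through a 4x4 view-projection
    matrix and remap clip-space x/y from [-1, 1] to [0, 1] UVs, so a
    texture generated from that camera lines up with the mesh."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4)
    clip = homo @ view_proj.T                                  # (N, 4)
    ndc = clip[:, :2] / clip[:, 3:4]                           # perspective divide
    return (ndc + 1.0) / 2.0                                   # NDC -> UV space

# Toy perspective matrix (90° fov, camera looking down -Z), purely illustrative.
near, far = 0.1, 100.0
proj = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],
    [0.0, 0.0, -1.0, 0.0],
])

verts = np.array([[0.0, 0.0, -2.0],   # on the camera axis -> UV (0.5, 0.5)
                  [1.0, 1.0, -2.0]])
print(camera_projected_uvs(verts, proj))
```

In Babylon the same thing could be done with the scene camera's view and projection matrices instead of hand-building one.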

That's very much doable in Babylon, with a lot of ~mathz~ :rofl: it's an interesting idea :slight_smile: @PirateJC might be interested as he was thinking about AI and Babylon these weeks


Hm, what I see in this video you posted is more how I'd expected it to work… like a random projection of a generated image onto some geometry. But in these demos by the creator, somehow he's getting Stable Diffusion to create an image that matches the geometry of the model. I'm assuming he's using the model itself to prompt SD? Need to look into that code when I get a moment.

But see the awesome demos here and on his Twitter feed:

I thought this YT vid was a decent explanation: Dream Textures - New Blender A.I Tool For All! - YouTube

Ah, here's the diffusers pipeline. Got to study this! Carson says:
"It's very similar to the inpainting model, except instead of concatenating the latents, mask, and masked image latents, you concatenate the latents and a depth map."
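That difference is just in the channel layout fed to the UNet. A toy numpy sketch of the two concatenations (random tensors, but the channel counts match Stable Diffusion's usual 4-channel latents plus a 1-channel mask/depth map, if I've got that right):

```python
import numpy as np

# Toy tensors in (channels, height, width) layout; values are random,
# only the channel counts matter for the comparison.
h, w = 64, 64
latents = np.random.randn(4, h, w)              # noisy image latents
mask = np.random.randn(1, h, w)                 # inpainting mask
masked_image_latents = np.random.randn(4, h, w) # latents of the masked image
depth_map = np.random.randn(1, h, w)            # per-pixel depth

# Inpainting UNet input: latents + mask + masked-image latents = 9 channels.
inpaint_input = np.concatenate([latents, mask, masked_image_latents], axis=0)

# Depth-to-image UNet input: latents + depth map = 5 channels.
depth_input = np.concatenate([latents, depth_map], axis=0)

print(inpaint_input.shape, depth_input.shape)  # (9, 64, 64) (5, 64, 64)
```

So the depth map is conditioning the denoiser at every step, which is why the generated image follows the scene's geometry instead of being a random projection.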
But still, I wonder why it works great in Carson's demo but doesn't seem to be matching the geometry in other demos. I've got to imagine it's more than a depth map, because a depth map wouldn't be able to specify "render this specific part as a window and this other part as a wall", right?

Ah, some more explanation: "It works by using the scene as an initial image, then projects generated images from several views back onto the object, assigning a separate material for each angle."
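One simplified way to do that per-angle material assignment is to give each face to the camera that sees it most head-on, by comparing face normals against view directions. A hypothetical numpy sketch (real projection painting would also need occlusion/visibility checks):

```python
import numpy as np

def assign_view_per_face(face_normals, camera_dirs):
    """For each face, pick the index of the camera whose viewing
    direction is most opposed to the face normal, i.e. the view that
    sees the face most head-on; that face then uses the material /
    generated image from that view. Simplified: ignores occlusion."""
    n = face_normals / np.linalg.norm(face_normals, axis=1, keepdims=True)
    d = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    # Facing score: -(normal · view_dir); higher = camera faces the surface.
    scores = -n @ d.T             # (num_faces, num_cameras)
    return scores.argmax(axis=1)  # best camera index per face

normals = np.array([[0.0, 0.0, 1.0],   # front-facing face
                    [1.0, 0.0, 0.0],   # right-facing face
                    [0.0, 0.0, -1.0]]) # back-facing face
cams = np.array([[0.0, 0.0, -1.0],     # front camera looking along -Z
                 [-1.0, 0.0, 0.0]])    # side camera looking along -X
print(assign_view_per_face(normals, cams))  # [0 1 1]
```

In Babylon terms, each group of faces would become a submesh with its own material whose texture is the image generated from that view, projected with that view's camera matrix.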
