Dream Textures: Stable Diffusion wraps in Babylon?

Anyone understand how this works in Blender?

How does he generate a texture that wraps the mesh exactly? Think we can do something similar in Babylon?

Hmmm, I didn't know this project :open_mouth: I read the dream-textures/TEXTURE_PROJECTION.md at main · carson-katri/dream-textures (github.com) docs and watched this video: BSLIVE Blender Dream Texture Update - Depth To Image Projection - YouTube to see what it's doing, and it's basically this:

1 - Generate the Stable Diffusion image from the prompt and extract depth information from it (the docs mention that there are a few depth models to choose from)
2 - Generate a UV projection of the selected geometry that matches the perspective of the generated image, based on that depth information. The docs show the generated UVs and how they try to match the perspective of the building; see the sketch after this list for the core projection math.

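The heart of step 2 is just running every vertex through the camera's view-projection matrix and remapping the result from normalized device coordinates to UV space. Here's a minimal sketch of that math in Python/numpy (the function and variable names are mine, and it assumes a column-vector 4x4 matrix convention; Babylon's matrices are row-major, so you'd adapt accordingly):

```python
import numpy as np

def project_uvs(vertices: np.ndarray, view_proj: np.ndarray) -> np.ndarray:
    """Project world-space vertices (N, 3) to UVs in [0, 1] through a
    4x4 view-projection matrix (column-vector convention assumed)."""
    # Promote to homogeneous coordinates: (N, 4)
    homo = np.hstack([vertices, np.ones((vertices.shape[0], 1))])
    clip = homo @ view_proj.T            # to clip space
    ndc = clip[:, :2] / clip[:, 3:4]     # perspective divide -> [-1, 1]
    uv = ndc * 0.5 + 0.5                 # remap to [0, 1] texture space
    uv[:, 1] = 1.0 - uv[:, 1]            # flip V if the image origin is top-left
    return uv
```

In Babylon itself, Vector3.Project already does this per-vertex transform, so most of the work would be wiring the result into the mesh's UV buffer.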
That's very much doable in Babylon, with a lot of ~mathz~ :rofl: It's an interesting idea :slight_smile: @PirateJC might be interested, as he's been thinking about AI and Babylon these past weeks


Hm, what I see in the video you posted is more how I'd expected it to work… like a random projection of a generated image onto some geometry. But in these demos by the creator, somehow he's getting Stable Diffusion to create an image that matches the geometry of the model. I'm assuming he's using the model itself to prompt SD? Need to look into that code when I get a moment.

But see the awesome demos here and on his Twitter feed.

I thought this YT vid was a decent explanation: Dream Textures - New Blender A.I Tool For All! - YouTube

Ah, here's the diffusers pipeline. Got to study this! Carson says:

> It's very similar to the inpainting model, except instead of concatenating the latents, mask, and masked image latents, you concatenate the latents and a depth map.

But still, I wonder why it works great in Carson's demo but doesn't seem to match the geometry in other demos. I've got to imagine it's more than a depth map, because a depth map wouldn't be able to specify "Render this specific part as a Window and this other part as a Wall", right?
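For reference, this depth-conditioned variant ships in diffusers as StableDiffusionDepth2ImgPipeline. A minimal sketch of driving it (the prompt, file names, and strength value are placeholders I made up); note the depth map only constrains structure, so the "window here, wall there" semantics still have to come from the prompt and the initial image:

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from PIL import Image

# Load the depth-conditioned Stable Diffusion 2 checkpoint
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

# A viewport render of the scene serves as the initial image; if no
# depth_map argument is passed, the pipeline estimates one internally
# with a MiDaS/DPT model.
init_image = Image.open("viewport_render.png")

result = pipe(
    prompt="weathered brick building facade with windows, photorealistic",
    image=init_image,
    strength=0.8,  # how far the result may drift from the initial image
).images[0]
result.save("projected_texture.png")
```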

Ah, some more explanation: 'It works by using the scene as an initial image, then projects generated images from several views back onto the object, assigning a separate material for each angle.'
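Putting the pieces together, the flow he describes would look roughly like this loop. This is pure pseudocode: render_view, camera_view_proj, and assign_material are hypothetical helpers, with project_uvs and pipe from the sketches above:

```python
# Conceptual sketch only: render_view, camera_view_proj, and
# assign_material are hypothetical placeholders, not real APIs.
for camera in cameras:                        # several views around the object
    init_image = render_view(scene, camera)   # scene render as the init image
    texture = pipe(prompt=prompt, image=init_image, strength=0.8).images[0]
    uvs = project_uvs(mesh_vertices, camera_view_proj(camera))
    assign_material(mesh, texture, uvs)       # one material per view/angle
```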
