Multiple cameras to generate multiple textures at the same time

You are my hero :green_heart:

I successfully managed to finish what I was trying to create.
You can see the result here:

I don’t know if you are into NFTs, but if you give me a Tezos address I would be very happy to send you 1 edition of this work as a thank-you for your help.

Now I’m stuck on another thing that is probably easier than I think, but I can’t figure it out.
How can I get a copy of a dynamic texture’s state and save it as a regular texture? I’ve seen there is the raw texture parameter, but I wasn’t able to get a regular texture out of it.

Thanks a lot :slight_smile:

I don’t know what NFT is, so I’m probably not into it :slight_smile:

I’m happy that you finally achieved what you wanted to do, that’s my reward!

What do you mean by “saving” the texture? Do you want to generate a picture from it? Or do you want to use it as a texture in a material (in which case you can simply plug it into the material’s texture property)?

NFTs are non-fungible tokens; give them a look, it’s an interesting subject. They’ve been revolutionizing the art world lately :slight_smile:

What I need to do is extract a static texture from an animated one.
I want to create a snapshot of a RenderTargetTexture at a given moment and turn it into a regular texture.

Is getInternalTexture() the method I need to use?
I’ve seen there is also a readPixels() method, but it sounds resource-intensive.


You can use readPixels to get data back and create a new texture from that. It could be a bit slow as there’s a roundtrip with the CPU.

You can also create a custom procedural texture that would simply copy the source texture. That would be faster than readPixels as everything would stay on the GPU side.

But I think the easiest way to do it is to use the EffectWrapper / EffectRenderer classes:

After 1s, it will copy the diffuse texture of the sphere material and set the copy as the diffuse texture of the ground material.
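A rough sketch of that EffectWrapper / EffectRenderer approach, assuming an existing `engine`, a `sourceTexture` to copy, and a `destinationTexture` render target (all three names are placeholders, and the passthrough fragment shader is illustrative):

```javascript
// GPU-side copy of a texture using EffectWrapper / EffectRenderer.
const renderer = new BABYLON.EffectRenderer(engine);

const copyWrapper = new BABYLON.EffectWrapper({
    engine: engine,
    name: "copy",
    fragmentShader: `
        varying vec2 vUV;
        uniform sampler2D textureSampler;
        void main(void) {
            gl_FragColor = texture2D(textureSampler, vUV);
        }
    `,
    samplerNames: ["textureSampler"],
});

// Bind the source texture each time the effect is applied.
copyWrapper.onApplyObservable.add(() => {
    copyWrapper.effect.setTexture("textureSampler", sourceTexture);
});

// Render the source into the destination; everything stays on the GPU.
renderer.render(copyWrapper, destinationTexture);
```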

Thanks again, I’ll dig into it :slight_smile:

I think you can use TextureTools.CreateResizedCopy with the same size to copy with GPU. :slight_smile:
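For reference, a minimal sketch of that call, assuming `rtt` is the source RenderTargetTexture:

```javascript
// GPU-side copy at the same size; the result is a regular Texture.
const { width, height } = rtt.getSize();
const copy = BABYLON.TextureTools.CreateResizedCopy(rtt, width, height);
```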

Even better!

Hi bghgary,

thanks a lot for the answer, that sounds like a great way to achieve it. I’ll try to implement it now.

Hi Evgeni,

I’m having another issue. What I’m trying to achieve is to split a texture created with RenderTargetTexture into pieces, as dynamic or regular textures. But I can’t work out how to read the image from the texture generated by RenderTargetTexture, because it doesn’t have the getContext() method that a regular DynamicTexture has.
The TextureTools.CreateResizedCopy suggested by bghgary keeps the original texture type, and your shader-based method is a little complicated for my needs.
The lag coming from readPixels is totally fine with me, but I don’t know how to turn the result into a new texture.

How do I turn a Uint8Array into a texture? I’ve tried many online suggestions, but every time the texture is empty.

Thanks a lot

You can use RawTexture.CreateRGBATexture:
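A sketch of the readPixels → RawTexture round trip, assuming a source texture `tx`, an existing `scene`, and a Babylon.js version where readPixels returns a Promise:

```javascript
const { width, height } = tx.getSize();

tx.readPixels().then((data) => {
    // data is an ArrayBufferView of RGBA values, 4 bytes per pixel.
    const staticCopy = BABYLON.RawTexture.CreateRGBATexture(
        data,
        width,
        height,
        scene,
        false, // generateMipMaps
        false, // invertY
        BABYLON.Texture.NEAREST_SAMPLINGMODE
    );
});
```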

I get an error saying:
tx.readPixels(…).then is not a function

Looks like I can’t use it as a Promise. Is it because I’m using a texture generated with RenderTargetTexture and not a regular texture as in your example?

readPixels() returns an array, but the .then method is not available.

I managed to make it work using await tx.readPixels() instead of the promise style.
Thanks a lot :slight_smile:

Is there a method to know when a texture generated with RenderTargetTexture from a camera is filled?
If I use it straight away, even if the isReady parameter is true, the resulting texture is black. If I use a setTimeout of 50 milliseconds I get the proper image.
I’m not confident using a setTimeout, because on a slower or busier machine it could take longer and still result in a black texture.

If I use tx.onAfterRenderObservable to wait, I get a texture that is grey instead of black, but still without the full rendered image in it.

I think that’s because you are using 4.2: in 4.2, readPixels is synchronous and returns an ArrayBufferView, not a Promise<ArrayBufferView>.
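One way to write code that works on both versions is to wrap the call in `Promise.resolve`, which passes a plain value through unchanged and flattens a Promise. The two texture objects below are hypothetical stand-ins for the 4.2 and 5.0+ behaviors, not real Babylon objects:

```javascript
// readPixels() returns an ArrayBufferView synchronously in 4.2 and a
// Promise<ArrayBufferView> in 5.0+. Promise.resolve() normalizes both,
// so a single code path covers either version.
async function readPixelsCompat(texture) {
    return Promise.resolve(texture.readPixels());
}

// Hypothetical stand-ins for the two API behaviors:
const tex42 = { readPixels: () => new Uint8Array([255, 0, 0, 255]) };
const tex50 = { readPixels: () => Promise.resolve(new Uint8Array([0, 255, 0, 255])) };
```

This is also why `await tx.readPixels()` works on both versions: `await` on a non-Promise value simply resolves to the value itself.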

I think you should be able to use RenderTargetTexture.onAfterUnbind.
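In code that looks something like this, assuming `rtt` is the RenderTargetTexture:

```javascript
// Fires after the RTT has been rendered and unbound for this frame,
// so its content should be complete at this point.
rtt.onAfterUnbindObservable.add(() => {
    // Safe to read the texture back here.
});
```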

Thank you,
onAfterUnbind seemed to work, but after some more testing it appears that about once every 10 times (not a fixed amount, just an observed statistic) the texture is not filled. Any idea whether it should work 100% of the time, or is this a known possibility?

Hum, I think to be on the safe side you should use engine.onEndFrameObservable instead.

Another solution that should work is to use Tools.DumpFramebuffer(width, height, engine, successCallback, mimeType, fileName); in an onAfterRenderObservable observer. DumpFramebuffer reads the data from the currently bound framebuffer, which is still the RTT at this point.
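A sketch of the onEndFrameObservable variant, assuming `engine` and `rtt` already exist (Promise.resolve is used so the readback works whether readPixels is synchronous or asynchronous):

```javascript
// Runs once, after the whole frame (including the RTT pass) has finished.
engine.onEndFrameObservable.addOnce(() => {
    Promise.resolve(rtt.readPixels()).then((data) => {
        // data now contains the fully rendered image; build the copy here.
    });
});
```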

You are amazing Evgeni, it’s so cool to learn this amazing library knowing that I can rely on somebody like you.
engine.onEndFrameObservable seems to be reliable 100% of the time.
Thanks so much :slight_smile:

Hi, I’m back with the question of the day :slight_smile:
Now everything works perfectly but I have a small issue with the Sampling Mode.
When I use BABYLON.Texture.NEAREST_NEAREST with low-res textures to achieve a pixelated effect, I always get an artifact: a thin line in the middle of the textures. I’ve tried all similar sampling modes, but the issue is always there, even if I set the mode in the constructor instead of later with tx.updateSamplingMode.

Any idea where that artifact is coming from?

EDIT: I found a working solution using non-power-of-two sizes for the RenderTargetTexture. It results in slightly less defined textures, but the artifact is gone, I guess thanks to the internal texture resampling. Still, if possible, I would like to understand what’s causing the artifact in the first place.

Thanks as usual


Hard to say without a repro. Have you tried setting the wrap mode to clamp?
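For a 2D texture that usually means something like this (assuming `tx` is the texture showing the artifact):

```javascript
// Stop the sampler from wrapping around at the edges, which can bleed
// opposite-edge texels into the border and produce thin-line artifacts.
tx.wrapU = BABYLON.Texture.CLAMP_ADDRESSMODE;
tx.wrapV = BABYLON.Texture.CLAMP_ADDRESSMODE;
```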

Using tx.wrapR = BABYLON.Texture.CLAMP_ADDRESSMODE totally fixed the issue.
I virtually hug you :slight_smile: