Multiple cameras to generate multiple textures at the same time

Hi everybody, I’m losing my mind trying to create multiple textures from multiple cameras.
I successfully managed to create one dynamic texture from a camera, but I cannot manage to create more than one. I’ve tried using MultiRenderTarget, but it’s too complicated for my small brain and I cannot make it work.

When I try to create two of them, the first one gets lost. Reading online, I’ve understood that I have to add the multiRTT as a custom render target to each sub-camera and not to the whole scene, but I still cannot make it work.

This is the post where I think the info I need is:

But still no success.
In my setup I have one main camera that shows two objects, and I need to use two other cameras as textures, one on each object.

This is what I did so far:

Thanks a lot :slight_smile:

Welcome aboard!

What do you want to achieve exactly? Do you want to render the scene from camera 1 in texture 1, render the scene from camera 2 in texture 2 and use texture 1 as diffuse texture for plane 1 and texture 2 as diffuse texture for plane 2?

If that is so, you don’t need a MultiRenderTarget, only two RenderTargetTextures.

Hi Evgeni,
thanks, both for the welcome and for the answer.
So are you telling me I wasted 2 days? :grinning:
When I tried using two RenderTargetTextures, the first one became black once I created the second one, but now that you say this I guess it was because I made some mistake; maybe I didn’t add both of them to the scene as activeCameras?

Yes, I guess you made a mistake somewhere:

Thanks a lot!! :smiling_face_with_three_hearts:
Do I need to keep two different scenes, or can I use just one and have it still work?

It will work with a single scene but you will need to show the cubes before rendering the RTTs and hide them afterwards if you don’t want to see them in the regular scene:

thanks again.
My scene is set up differently from the example: the capture area is away from the main camera, which is fixed, so I don’t even need to hide them, but thanks for showing me how to do it.
I really appreciate the help you gave me :green_heart:

It works perfectly, thanks a lot!
Do you know why I cannot clone a texture created with RenderTargetTexture? The cloned one is black.
I also tried to push the newly cloned texture into the customRenderTargets array, but with no success.

You will need to push the clone into the customRenderTargets array if you don’t want it to be black.

I think you clone it too soon, before setting the renderList of the RTT being cloned: in that case, the renderList of the clone will be empty. Also, you should set the activeCamera property of the clone as it is not set during the cloning process:

If you look into the Textures entry of the inspector, you will see two rtt1 textures and both have data.

You are my hero :green_heart:

I successfully managed to finish what I was trying to create.
You can see the result here:

I don’t know if you are into NFTs, but if you give me a Tezos address I would be very happy to send you one edition of this work as a thank-you for your help.

Now I’m stuck on another thing that is probably easier than I think, but I cannot achieve it.
How can I get a copy of a dynamic texture’s state and save it as a regular texture? I’ve seen there is the raw texture parameter, but I wasn’t able to get a regular texture out of it.

Thanks a lot :slight_smile:

I don’t know what NFT is, so I’m probably not into it :slight_smile:

I’m happy that you finally achieved what you wanted to do, that’s my reward!

What do you mean by “saving” the texture? Do you want to generate a picture from it? Or do you want to use it as a texture in a material (in which case you can simply plug it into the material’s texture property)?

NFTs are non-fungible tokens; give them a look, it’s an interesting subject. They have been revolutionizing the art world lately :slight_smile:

What I need to do is to extract a static texture from an animated one.
I want to create a snapshot of a RenderTargetTexture at a given moment and turn it into a regular texture.

Is getInternalTexture() the method I need to use?
I’ve seen there is also a readPixels() method, but it sounds resource-intensive to use.


You can use readPixels to get the data back and create a new texture from it. It could be a bit slow, as there’s a round trip to the CPU.

You can also create a custom procedural texture that would simply copy the source texture. That would be faster than readPixels as everything would stay on the GPU side.

But I think the easiest way to do it is to use the EffectWrapper / EffectRenderer classes:

After 1s, it will copy the diffuse texture of the sphere material and set the copy as the diffuse texture of the ground material.

Thanks again, I’ll dig into it :slight_smile:

I think you can use TextureTools.CreateResizedCopy with the same size to copy on the GPU. :slight_smile:

Even better!

Hi bghgary,

thanks a lot for the answer; it sounds like a great way to achieve that. Now I’ll try to implement it.

Hi Evgeni,

I’m having another issue. What I’m trying to achieve is to split a texture created with RenderTargetTexture into pieces, as dynamic or regular textures. But I can’t understand how to read the image data from a texture generated by RenderTargetTexture, because it doesn’t have the getContext() method that a regular DynamicTexture has.
The TextureTools.CreateResizedCopy suggested by bghgary maintains the original texture type, and your method using shaders is a little complicated for my needs.
The lag coming from readPixels is totally fine with me, but I don’t know how to turn the result into a new texture.

How do I turn a Uint8Array into a texture? I tried many online suggestions, but every time the texture is empty.

Thanks a lot

You can use RawTexture.CreateRGBATexture: