MultiRenderTarget with 2D Array Textures

Hello all, and happy New Year!

I have recently been creating fluid simulations in WebGL2 that compute on a uniform grid. To do this, I have been using 2D array textures to represent a 3D field of values. I have been experimenting with using MultiRenderTargets so I can modify more values in a single pass. I have been able to do this successfully with 2D textures.

However, I have been unable to do this with 2D array textures (e.g. by using a layers parameter in the size object, as one might on a RenderTargetTexture). I took a look at the source code, and it appears that the ThinEngine function that MultiRenderTarget uses to create a multiple-render-target instantiates all textures as 2D textures.

Is it possible to use a MultiRenderTarget for 2D array textures?

When setting a number of layers > 1 when creating a RenderTargetTexture, each layer is rendered in turn: the onBeforeRenderObservable observable is notified with each layer number. But rendering is done one layer at a time.
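A rough sketch of that per-layer flow (not Babylon source; `renderLayeredRTT` and its parameters are hypothetical names, and `gl` stands in for a WebGL2 context):

```javascript
// Sketch: a layered render target renders in one pass per layer.
// Observers (the onBeforeRenderObservable analogue) are notified with the
// current layer index, and only that single layer is attached at a time.
function renderLayeredRTT(gl, framebuffer, texture, layerCount, observers, drawScene) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    for (let layer = 0; layer < layerCount; layer++) {
        // notify observers with the layer number before rendering it
        observers.forEach((cb) => cb(layer));
        // attach ONE layer of the 2D array texture as the single color attachment
        gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 0, layer);
        drawScene();
    }
}
```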

We don’t support binding multiple layers of a 2D array texture as render targets.

cc @sebavan, I’m not sure it would be an easy task to add this support to the MultiRenderTarget class (RenderTargetWrapper would also be impacted, and it only works with InternalTexture at the moment)…

As opposed to rendering multiple layers of one gl.TEXTURE_2D_ARRAY simultaneously, would it be possible to render the same layer of multiple gl.TEXTURE_2D_ARRAYs? I imagine that would be more consistent with the onBeforeRenderObservable layer update.
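In raw WebGL2 terms, what this asks for might look something like the sketch below (`bindSameLayerAcrossArrays` is a hypothetical helper, not part of any library):

```javascript
// Sketch: attach the SAME layer of several TEXTURE_2D_ARRAY textures to
// separate color attachments of one framebuffer, then enable them all for
// a single draw via drawBuffers.
function bindSameLayerAcrossArrays(gl, textures, layer) {
    const buffers = [];
    textures.forEach((tex, n) => {
        gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + n, tex, 0, layer);
        buffers.push(gl.COLOR_ATTACHMENT0 + n);
    });
    gl.drawBuffers(buffers);
}
```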

I’m not sure it would be easier to add this support, and it would only be partial support for the feature, with a narrower scope than support for binding any layer from one or more 2D texture arrays.

Agree it could be cool to map texture array to multi RTT. This is not a simple task as @Evgeni_Popov mentioned but we could help if you fancy contributing @Eron_Ristich ???

I’d love to help, but first I’m definitely going to have to study the source code more. I’ll get back to you guys with some ideas once I know more about what is necessary for this to be possible.


Alright, after some thought, this is the rough idea.

  1. 2D Array Textures are created by modifying .createMultipleRenderTarget(...). If the size object has a layers parameter, then it creates a set of 2D Array Textures stored in .textures. By default, the 0th layer of the nth texture is bound to the nth color attachment.

  2. An additional MRT option would be a renderToLayers array, which would specify which layer of each texture should be bound to each color attachment. If renderToLayers is defined, then rendering occurs as it has before, with only a single pass where the renderToLayers[n]th layer of the nth texture is bound to the nth color attachment.

  3. If renderToLayers is not defined, then rendering occurs in as many passes as there are layers. To render in this way, the ith layer of the nth 2D array texture is bound to the nth color attachment, where i loops from 0 to the number of layers and n loops from 0 to the number of textures. This would be done at render time.
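The binding logic in steps 2 and 3 could be sketched as follows (hypothetical function, written against a bare `gl`-like object rather than the actual engine classes):

```javascript
// Sketch of the proposal: if renderToLayers is given, do a single pass
// binding renderToLayers[n] of texture n to attachment n; otherwise do one
// pass per layer, binding layer i of every texture on pass i.
function renderMRTArray(gl, textures, layerCount, renderToLayers, drawScene) {
    const passes = renderToLayers ? 1 : layerCount;
    for (let i = 0; i < passes; i++) {
        textures.forEach((tex, n) => {
            const layer = renderToLayers ? renderToLayers[n] : i;
            gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + n, tex, 0, layer);
        });
        drawScene(i);
    }
}
```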

I think this covers most of the behavior mentioned before.

Done explicitly by using renderToLayers and a single render call.

Done by default when renderToLayers is not defined; my initial idea was that a render call would iterate through every layer in layers as is done in a regular RTT, except each texture’s currently rendering layer would be attached to the next color attachment. Similar to a regular RTT, every layer is looped through, and bound to a color attachment. Unlike a regular RTT, multiple color attachments are bound per layer.

I haven’t done extensive testing, but I don’t think the performance cost of this is significant.

What are you guys’ thoughts?

Sounds OK based on the description here. @Evgeni_Popov can you confirm ???

I think that if we improve the MultiRenderTarget class, we should handle all cases now to avoid a new iteration in the future. That means:

  • we should be able to use any combination of textures (2D texture, 2D array texture, cube texture, cube array texture (is that supported in WebGL2?) and 3D texture) when creating the MRT
  • we should be able to bind multiple layers of the same array / cube / 3D texture

For example, if a MRT is defined with a 2D texture A, a 2D array texture B, and a 3D texture C, we could bind B.layer0 + C.layer3 + A + B.layer1.
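At the WebGL2 level, that mixed binding is expressible today: layers of array/3D textures go through framebufferTextureLayer, while plain 2D textures go through framebufferTexture2D. A sketch (hypothetical helper; A, B, C follow the example above):

```javascript
// Sketch of the mixed binding: attachment 0 -> B.layer0 (2D array),
// attachment 1 -> C.layer3 (3D texture), attachment 2 -> A (plain 2D),
// attachment 3 -> B.layer1 (same array texture, another layer).
function bindMixedTargets(gl, A, B, C) {
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, B, 0, 0);
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + 1, C, 0, 3);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + 2, gl.TEXTURE_2D, A, 0);
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + 3, B, 0, 1);
}
```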

But maybe what I’m suggesting is a bit too general…

I’m afraid that just being able to create a MRT with only 2D array textures and binding a single layer of each texture is a bit too restrictive or too specific to a particular need…

[EDIT] Also, I don’t know if we can do that as a non breaking change of the public API…

Fair enough. We could add additional MRT options for targetType (an array specifying 2D texture, 2D array texture, etc.), and faceIndexOrLayer (an array specifying what component of the texture should be bound at each attachment).

I don’t think so? From WebGLRenderingContext.bindTexture() - Web APIs | MDN:

A gl.INVALID_ENUM error is thrown if target is not gl.TEXTURE_2D, gl.TEXTURE_CUBE_MAP, gl.TEXTURE_3D, or gl.TEXTURE_2D_ARRAY.

If it doesn’t exist in WebGL2, then a faceIndexOrLayer array is sufficient to specify attachments for every other type of texture.

I have put my first attempt into a Gitpod workspace. It should be backwards compatible.

The way this version works is by creating textures of the right type and attaching them at initialization, and modifying the .setInternalTexture and related methods to be able to attach a diverse set of texture target types to a specified color attachment. It also adds the two options I described before. I also added target types to Constants.

I modified /Babylon.js/packages/tools/babylonServer/src/createScene.js to run some tests reproducing the aforementioned scenario. To sum it up:

  1. I create three textures
    A: a 2D Texture
    B: a 2D Array Texture
    C: a Cube Map Texture
  2. I then bind B.layer0 + C.face3 + A + B.layer1
  3. I finally render these textures to a plane
```javascript
let mrt = new BABYLON.MultiRenderTarget(
    "mrt",
    { width: 32, height: 32, layers: 32 },
    4, // number of draw buffers
    scene,
    {
        types: new Array(3).fill(BABYLON.Constants.TEXTURETYPE_FLOAT),
        samplingModes: new Array(3).fill(BABYLON.Constants.TEXTURE_NEAREST_SAMPLINGMODE),
        targetTypes: [BABYLON.Constants.TEXTURE_2D_ARRAY, BABYLON.Constants.TEXTURE_CUBE_MAP, BABYLON.Constants.TEXTURE_2D],
        faceIndexOrLayer: [0, 3, 0, 1]
    }
);

// Sets the attachment at index 3 to the 0th texture in the list
mrt.setInternalTexture(mrt.textures[0].getInternalTexture(), 3);
```

Also, does WebGL2 support rendering to 3D textures? From documentation I would expect to be able to bind a layer of a 3D texture by using gl.framebufferTextureLayer but I wasn’t able to reproduce this.

[EDIT] Maybe we could also consider hard capping the value of count to the value queried by gl.getParameter(gl.MAX_DRAW_BUFFERS)
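That cap could be as simple as the following sketch (hypothetical helper; the WebGL2 query itself is real):

```javascript
// Sketch: clamp the requested number of draw buffers to the
// implementation limit reported by the context.
function clampDrawBufferCount(gl, requested) {
    const max = gl.getParameter(gl.MAX_DRAW_BUFFERS);
    return Math.min(requested, max);
}
```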

@sebavan @Evgeni_Popov

Yes, indeed.

Note that we will also need to be able to update the faceIndexOrLayer for each texture after creation, too.

It doesn’t exist in WebGL2 but it does exist in WebGPU, so we will have to support it.

I can’t access it. When I click on “Open workspaces”:

Anyway, I think it looks promising, so maybe you should create a draft PR on the Babylon repo and we can continue the discussion there?

Yes, it should work according to this post:

Draft pull request created: