I’m trying to create a G-Buffer sort of thing using MultiRenderTarget.
I need a way to encode the per-pixel depth/distance from camera, but I’m struggling to find a performant/working approach.
I think there are two ways to go about this and here’s what I’ve tried so far:
Ideally, I'd re-use the depth attachment of the MRT to avoid creating an additional attachment just for depth values. The issue I've run into is that when generateDepthTexture is set to true, the depth precision is only 16 bits, i.e. Babylon uses DEPTH_COMPONENT16 for the depth texture. That isn't enough precision for my use case, so is there a way to force DEPTH_COMPONENT32F instead? I'm targeting WebGL 2, so I think that's supported, but I could be wrong… The following playground is what I'm using to analyze the formats with Spector.js: babylonjs-playground.com/#L2IU03#11
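For reference, the setup I'm analyzing is roughly the following (playground-style sketch assuming the BABYLON global and an existing scene; the size and attachment count are just placeholders):

```ts
// Playground-style sketch (assumes the BABYLON global and an existing `scene`);
// sizes and attachment count are placeholders.
const mrt = new BABYLON.MultiRenderTarget("gbuffer", { width: 1024, height: 1024 }, 2, scene, {
    generateDepthTexture: true, // the resulting depth attachment is DEPTH_COMPONENT16 today
});
scene.customRenderTargets.push(mrt);

// mrt.depthTexture can be sampled in a later pass, but only with 16-bit precision.
const depthTexture = mrt.depthTexture;
```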
If a depth texture isn't an option because of the above limitation, then I could create an additional attachment in the MRT to encode the depth values in an R32F format. However, the issue here is that I don't see a way to give the MRT's textures different formats; the MultiRenderTarget implementation seems to always assume the RGBA format. This adds a lot of overhead, because instead of an R32F render target I get an RGBA32F (4 × 32-bit) render target, and the additional three channels aren't needed… Is there a way to get MultiRenderTarget to use different formats per attachment? (IMultiRenderTargetOptions only allows specifying texture types, not formats.)
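To illustrate what I mean (illustrative sketch, not a recommendation): per-attachment types can be passed today, but there is no per-attachment formats option, so the float attachment below still ends up RGBA32F.

```ts
// Illustrative sketch: IMultiRenderTargetOptions lets me pick per-attachment *types*,
// but every attachment still gets the RGBA *format*, so the float attachment below
// is allocated as RGBA32F rather than R32F.
const mrt = new BABYLON.MultiRenderTarget("gbuffer", { width: 1024, height: 1024 }, 2, scene, {
    types: [
        BABYLON.Constants.TEXTURETYPE_UNSIGNED_BYTE, // e.g. albedo
        BABYLON.Constants.TEXTURETYPE_FLOAT,         // intended as a depth channel, becomes RGBA32F
    ],
    // There is no `formats` array here, so a single-channel R32F attachment can't be requested.
});
```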
@sebavan, looks like WebGPU already uses Depth32F for its depth attachment by default. There could be more configurability here (presumably a lower-bpp format could be used), but I don't know if it makes sense to lump that into this change.
For the WebGL implementation, I see two ways to go about this, so let me know if there's a preference or another option I'm not seeing.
The first idea is a small change: default to DEPTH32F on WebGL 2 and DEPTH16 on WebGL 1. I think this is a plausible option because that is how thinEngine.ts determines the format it uses for depth textures. The downside is that folks currently using the MRT's depthTexture would have the format change out from under them on WebGL 2. That seems like a potential breaking change, or at the very least could have memory-usage implications. It is, however, a very small/simple code change.
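A minimal sketch of idea 1 (assuming an `engine` reference; the constant names follow the existing Constants naming and may differ from whatever lands):

```ts
// Sketch of idea 1 (not the actual change): derive the depth texture format from the
// WebGL version, the same way thinEngine.ts picks the format for its own depth textures.
const depthFormat = engine.webGLVersion > 1
    ? Constants.TEXTUREFORMAT_DEPTH32_FLOAT // DEPTH_COMPONENT32F on WebGL 2
    : Constants.TEXTUREFORMAT_DEPTH16;      // DEPTH_COMPONENT16 on WebGL 1
```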
The other idea is what I posted above. It's more complicated, but it allows the exact format to be specified (DEPTH16, DEPTH32, or DEPTH24_STENCIL8). On WebGL 1 only DEPTH16 is supported (I think, without checking extensions); on WebGL 2 all of them should be.
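From the caller's side, idea 2 could look something like this (the option name is taken from my PR and could still change):

```ts
// Sketch of idea 2: an explicit option on IMultiRenderTargetOptions (name per my PR,
// could change) that lets the caller request the depth texture format directly.
const mrt = new BABYLON.MultiRenderTarget("gbuffer", { width: 1024, height: 1024 }, 2, scene, {
    generateDepthTexture: true,
    // DEPTH16 | DEPTH24_STENCIL8 | DEPTH32_FLOAT; on WebGL 1 only DEPTH16 would be accepted.
    depthTextureFormat: BABYLON.Constants.TEXTUREFORMAT_DEPTH32_FLOAT,
});
```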
I think it’s better to stay backward compatible and have the options as you did in your PR.
Regarding WebGPU, we should default to the same thing as WebGL (Constants.TEXTUREFORMAT_DEPTH16) and use the new option to select Constants.TEXTUREFORMAT_DEPTH24_STENCIL8 or Constants.TEXTUREFORMAT_DEPTH32_FLOAT.
[EDIT] Actually, if the format is DEPTH16 we should use DEPTH32_FLOAT instead for the time being, as Chrome does not support DEPTH16 yet.
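In code, that default plus the Chrome fallback could look roughly like this (sketch only; `options` and `isWebGPU` are placeholder names, not the actual engine code):

```ts
// Sketch of the WebGPU default/fallback described above
// (`options` and `isWebGPU` are placeholders, not the actual engine code).
let depthFormat = options.depthTextureFormat ?? Constants.TEXTUREFORMAT_DEPTH16;

if (isWebGPU && depthFormat === Constants.TEXTUREFORMAT_DEPTH16) {
    // Chrome's WebGPU implementation does not support depth16unorm yet,
    // so fall back to depth32float for the time being.
    depthFormat = Constants.TEXTUREFORMAT_DEPTH32_FLOAT;
}
```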