I’m trying to create something like a G-Buffer using MultiRenderTarget.
I need a way to encode the per-pixel depth/distance from camera, but I’m struggling to find a performant/working approach.
I think there are two ways to go about this, and here’s what I’ve tried so far:
Ideally, I’d re-use the depth attachment of the MRT, to avoid creating an additional attachment just for depth values. The issue I’ve run into is that when `generateDepthTexture` is set to true, the depth precision is only 16 bits, i.e. Babylon uses DEPTH_COMPONENT16 for the depth texture. That isn’t enough precision for my use case, so is there a way to force DEPTH_COMPONENT32F instead? I’m targeting WebGL 2, so I think that format is supported in core, but I could be wrong… The following playground is what I’m using to inspect the formats with Spector.js: babylonjs-playground.com/#L2IU03#11
If the depth texture isn’t an option because of the 16-bit limitation above, then I could create an additional attachment in the MRT and encode the depth values there, using an R32F format. The issue here is that I don’t see a way to give the MRT’s textures different formats; the MultiRenderTarget implementation seems to always assume the format is RGBA. That adds a lot of overhead, because instead of an R32F render target I get an RGBA32F one, i.e. four 32-bit channels per pixel where only one is needed… Is there a way to get MultiRenderTarget to use different formats per attachment? (IMultiRenderTargetOptions only allows specifying texture types, not formats.)