I have a post process where I would like to calculate a fragment's view position from the fragment's depth and the inverse projection matrix.
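For context, the reconstruction I have in mind is the standard unproject: rebuild NDC from the screen UV and the sampled depth, multiply by the inverse projection matrix, then divide by w. Here is the math sketched in plain JavaScript (not Babylon.js code, just so the math is easy to check; it assumes depth stored in [0, 1] and column-major matrices):

```javascript
// Multiply a column-major 4x4 matrix (flat array of 16) by a vec4.
function transformVec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] =
      m[row] * v[0] +
      m[4 + row] * v[1] +
      m[8 + row] * v[2] +
      m[12 + row] * v[3];
  }
  return out;
}

// uv: screen coordinates in [0, 1]; depth: sampled depth in [0, 1];
// invProjection: inverse of the camera projection matrix.
function viewPositionFromDepth(uv, depth, invProjection) {
  // Back to normalized device coordinates in [-1, 1].
  const ndc = [uv[0] * 2 - 1, uv[1] * 2 - 1, depth * 2 - 1, 1];
  const view = transformVec4(invProjection, ndc);
  // The perspective divide undoes the projection's division by w.
  return [view[0] / view[3], view[1] / view[3], view[2] / view[3]];
}
```

This is what I intend to port to the fragment shader once the depth sampling itself works.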
I would prefer not to enable “enableDepthRenderer”, since that adds a separate render pass just to write depth. RenderTargetTexture has a method called createDepthStencilTexture, which I would assume creates a depth/stencil texture for the render target. That brings me to my first question:
When creating a RenderTargetTexture, there is a parameter that is called: generateDepthBuffer
What would be the difference between having this parameter enabled (which it is by default), from manually calling createDepthStencilTexture?
Are they two entirely different things?
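For reference, this is roughly what I'm doing right now (a simplified, untested sketch; `scene` and the size are from my setup, and part of my question is whether this is even the right way to use the API):

```javascript
// Simplified sketch of my current setup (may well be wrong).
const renderTarget = new BABYLON.RenderTargetTexture("rtt", 1024, scene);

// My assumption: this gives the render target a depth texture that can
// be sampled later, instead of (or in addition to?) the depth
// renderbuffer created by generateDepthBuffer.
renderTarget.createDepthStencilTexture();

// The resulting texture should then be available as:
// renderTarget.depthStencilTexture
```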
What I’ve assumed is that createDepthStencilTexture makes it so the depth buffer can be sampled later; is this correct?
And if so my second question is: how do I actually sample it?
I’ve tried sending it to the post process effect with:
where currentFrameDepth is a texture sampler in the GLSL shader, and I’ve made sure that ‘currentFrameDepth’ is included in the list of samplers passed to the post process.
When I try to sample currentFrameDepth in the GLSL shader, it’s always black in every channel except .w, which is always white.
Is this not how you’re supposed to sample previously written depth values?
No playground for now; I might add one later.
Thanks for reading