Use a depth map to help z-buffering

Let’s say there are some meshes in the scene that we render as the foreground. I want to input a background image together with its depth map. Can I use this depth map to “prefill” the z-buffer before the depth test, and use the background image as the default color, so that the foreground and background are combined?
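To make the idea concrete, here is a minimal sketch in plain JS (no Babylon, names like `renderFragment` are made up for illustration): the z-buffer starts out filled with the background depth map, and foreground fragments are then depth-tested against it.

```javascript
// Hypothetical sketch: a z-buffer prefilled from a background depth map,
// against which foreground fragments are then depth-tested.

const zBuffer = [0.9, 0.3];    // a 2x1 "image", prefilled with background depths
const colorBuffer = [          // prefilled with the background image
  [255, 0, 255, 255],          // magenta
  [0, 255, 255, 255],          // cyan
];

// A foreground fragment passes the depth test only if it is closer
// (smaller depth value) than what is already in the z-buffer.
function renderFragment(x, depth, color) {
  if (depth < zBuffer[x]) {
    zBuffer[x] = depth;
    colorBuffer[x] = color;
  }
}

// Foreground mesh at depth 0.5: in front of the 0.9 pixel, behind the 0.3 one.
renderFragment(0, 0.5, [0, 0, 0, 255]); // passes: 0.5 < 0.9
renderFragment(1, 0.5, [0, 0, 0, 255]); // fails:  0.5 >= 0.3
```

The second pixel keeps its background color because the background is closer there, which is exactly the compositing the prefilled depth test buys you for free.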

Currently I have implemented a post-process that does a similar task, but I think it would be more efficient if there were a way to use the depth map directly during the depth test.

Thank you in advance for your help.

In Babylon we can rely on a depth texture instead of a render buffer attached to the frame buffer, but I have never tried a prefilled version of it. I guess it might be simpler to run a full-screen render that reads the depth values from your texture and fills the buffer in?

Then how do I use a depth texture in Babylon to fill the buffer in?

You can create a pass-through post process so that the scene is rendered into this post process’s texture, and use the postProcess.onActivateObservable event to inject your background color/depth before the scene is rendered into it. You also need to disable the post-process auto-clearing, because this operation happens after onActivateObservable (and you don’t need it anyway, as you pre-fill the texture with the background color).
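As a sketch of the data-preparation side of this (the exact playground code is linked below; the layout here is an assumption: one float depth per pixel, four RGBA bytes per pixel), the background is built as a depth array and a color array sized to the post-process texture, which the onActivateObservable callback then uploads before the scene renders into it:

```javascript
// Hypothetical helper: build the background depth/color data that gets
// injected into the post-process texture each frame.
// Layout assumed: dataDepth = 1 float per pixel, dataColor = RGBA bytes.

function createBackgroundBuffers(width, height) {
  const dataDepth = new Float32Array(width * height);
  const dataColor = new Uint8Array(width * height * 4);
  for (let i = 0; i < width * height; ++i) {
    dataDepth[i] = 0.9;            // background depth in [0, 1], 1 = far plane
    dataColor[i * 4 + 0] = 255;    // R
    dataColor[i * 4 + 1] = 0;      // G
    dataColor[i * 4 + 2] = 255;    // B
    dataColor[i * 4 + 3] = 255;    // A -> opaque magenta
  }
  return { dataDepth, dataColor };
}

const { dataDepth, dataColor } = createBackgroundBuffers(4, 2);
```

If the background comes from an actual image and depth map, you would copy their pixel values into these arrays instead of constants.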

See:


Hello, I tried your implementation and I have some questions.

I changed the backDepth and the backColor as below:

```js
for (let i = 0; i < width * height / 2; ++i) dataDepth[i] = 0.9;
```

TO

```js
for (let i = 0; i < width * height / 2; ++i) dataDepth[i] = 0.9;
for (let i = width * height / 2; i < width * height; ++i) dataDepth[i] = 0.3;
```

```js
for (let i = 0; i < width * height; ++i) {
    dataColor[i * 4 + 0] = 255;
    dataColor[i * 4 + 1] = 0;
    dataColor[i * 4 + 2] = 255;
    dataColor[i * 4 + 3] = 255;
}
```

TO

```js
for (let i = 0; i < width * height / 2; ++i) {
    dataColor[i * 4 + 0] = 255;
    dataColor[i * 4 + 1] = 0;
    dataColor[i * 4 + 2] = 255;
    dataColor[i * 4 + 3] = 255;
}

for (let i = width * height / 2; i < width * height; ++i) {
    dataColor[i * 4 + 0] = 0;
    dataColor[i * 4 + 1] = 255;
    dataColor[i * 4 + 2] = 255;
    dataColor[i * 4 + 3] = 255;
}
```

But it seems to have the same appearance as before. Could you please tell me why?
See:

My bad, I forgot to set the “scale” uniform:

OK, I get your idea: you render the background color and depth after the rendering of the foreground objects, and you disable the post-process auto-clear so that the background color and depth are retained for the next frame, which can then use this “cached” background color and depth for its rendering.

But doesn’t this mean that the first frame will be wrong, since the foreground objects will be entirely covered by the background?

Also, if the background images and corresponding depths change dynamically with the camera, then in your approach the background rendered in the current frame always comes from the last frame.

No, the background color and depth are copied to the post process texture before the scene is rendered into this texture, so if you change them you will have the correct data for the current frame.
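The ordering can be illustrated with a small plain-JS sketch (hypothetical names, single-pixel "frame" for brevity): each frame, the background color/depth are written into the buffers first, then foreground fragments are depth-tested against them, so a background that changes per frame is always current.

```javascript
// Per-frame ordering: inject background first, then depth-test the foreground.

function renderFrame(backDepth, backColor, foreground) {
  // Step 1: prefill with this frame's background
  // (what the onActivateObservable callback does in the real setup).
  const zBuffer = [backDepth];
  const colorBuffer = [backColor];
  // Step 2: render the foreground with a standard "less" depth test.
  for (const frag of foreground) {
    if (frag.depth < zBuffer[0]) {
      zBuffer[0] = frag.depth;
      colorBuffer[0] = frag.color;
    }
  }
  return colorBuffer[0];
}

// Frame 1: background at depth 0.9, foreground at 0.5 -> foreground wins.
const frame1 = renderFrame(0.9, "magenta", [{ depth: 0.5, color: "black" }]);
// Frame 2: background moved closer (0.3) -> it now occludes the foreground,
// using the *current* frame's background data, not the previous frame's.
const frame2 = renderFrame(0.3, "cyan", [{ depth: 0.5, color: "black" }]);
```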