Is it possible to get the depth data of a screenshot image?

Hello Everyone,

I realized that with Babylon.js, it is possible to capture a screenshot from a specific active camera at a specific view angle.

However, is there any possibility to also capture the depth information in some way (or via some API)? Basically, we want to create a virtual RGBD camera, which allows us to get the depth data as the user changes the view angle. It would be great if anyone could provide some clue of how to realize this with Babylon.js.

Thank you so much!

Hi & welcome! :slight_smile:

According to ChatGPT …

In Babylon.js, capturing depth information directly as part of the rendering process is not supported out-of-the-box. However, there are alternative approaches you can explore to achieve the desired result of creating a virtual RGBD camera with depth data. Here are a couple of options:

  1. Render depth information manually: You can render the scene from the desired camera perspective and manually calculate the depth information for each pixel. To do this, you would need to modify the rendering pipeline and use shaders to output depth values. This approach requires knowledge of custom shaders and rendering techniques.
  2. Render depth as a separate pass: Another approach is to render the scene in two passes: one for the color information and another for the depth information. You can use a custom shader to render the depth values to a separate render target or texture. After rendering both passes, you can access the depth information from the rendered texture.

Here’s an example of how you can implement the second approach in Babylon.js:

```javascript
// Create a separate render target for depth rendering
var depthTexture = new BABYLON.RenderTargetTexture("depthTexture", {
  width: scene.getEngine().getRenderWidth(),
  height: scene.getEngine().getRenderHeight()
}, scene);

// Set the active camera for rendering depth
depthTexture.activeCamera = camera;

// Attach the meshes to the render target
depthTexture.renderList.push(mesh1, mesh2, ...); // Add all the meshes you want to include in the depth rendering

// Render the scene to the depth texture
depthTexture.render();

// Access the depth data
var depthData = depthTexture.readPixels();

// Process the depth data as needed
// ...

// Dispose the depth texture when no longer needed
depthTexture.dispose();
```
In this example, we create a separate render target texture called depthTexture with the desired width and height. We then set the active camera to the desired camera and attach the meshes you want to include in the depth rendering to the renderList of the depthTexture. After calling depthTexture.render(), you can use depthTexture.readPixels() to access the depth data. Finally, remember to dispose of the depth texture when you’re done using it.
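One practical detail worth noting: `readPixels` ultimately wraps `gl.readPixels`, which returns rows bottom-to-top, so the buffer may come back vertically flipped relative to what you see on screen. A small helper to flip the rows in place (a sketch; works on any RGBA typed array, independent of Babylon.js):

```javascript
// Flip an RGBA pixel buffer vertically, in place, by swapping rows.
// `pixels` is a typed array (e.g. Uint8Array or Float32Array) holding
// width * height * 4 components, rows stored bottom-to-top.
function flipRowsInPlace(pixels, width, height) {
  const rowSize = width * 4; // 4 components (RGBA) per pixel
  const tmp = new pixels.constructor(rowSize);
  for (let top = 0, bottom = height - 1; top < bottom; top++, bottom--) {
    const a = top * rowSize, b = bottom * rowSize;
    tmp.set(pixels.subarray(a, a + rowSize)); // save top row
    pixels.copyWithin(a, b, b + rowSize);     // move bottom row up
    pixels.set(tmp, b);                       // put saved row at bottom
  }
  return pixels;
}
```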

Keep in mind that these approaches require some knowledge of shaders and rendering techniques in Babylon.js. You may need to dive deeper into the documentation and examples to adapt these concepts to your specific use case.

I found this example in the Playground that picks a point from the depth map; you can modify it for your purposes.

GPU picking point and normal | Babylon.js Playground


The code given by ChatGPT does not seem to be quite correct.
It only creates a RenderTargetTexture and never actually retrieves the depth information.

To get the depth information, it should look like this

```javascript
let depthMap = scene.enableDepthRenderer().getDepthMap()
let data = await depthMap.readPixels()
```
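The values you get back are normalized, not metric distances. Turning them into distances needs the camera's near/far planes; a sketch, assuming the depth renderer stores linear depth scaled between `camera.minZ` and `camera.maxZ` (verify this convention against the DepthRenderer documentation for your Babylon.js version):

```javascript
// Convert a normalized depth value to a metric view-space distance.
// Assumes linear depth scaled between the near (minZ) and far (maxZ)
// planes — check the DepthRenderer docs for the exact convention.
function toMetricDepth(normalized, minZ, maxZ) {
  return minZ + normalized * (maxZ - minZ);
}

// readPixels() returns RGBA components; the depth value sits in the
// R channel, so take every 4th entry of the flat array.
function extractDepthChannel(rgba) {
  const depth = new Float32Array(rgba.length / 4);
  for (let i = 0; i < depth.length; i++) depth[i] = rgba[i * 4];
  return depth;
}
```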

That’s why it’s always important to check the results of ChatGPT code! :wink:
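For the original RGBD goal: once you have a color screenshot and the depth map read back at the same resolution, combining them is plain array manipulation. A sketch, assuming `color` is a Uint8Array of RGBA bytes and `depth` is a Float32Array with one value per pixel:

```javascript
// Pack a color buffer and a depth buffer into one interleaved RGBD array.
// Output layout: [r, g, b, d, r, g, b, d, ...] with color normalized to [0, 1].
function packRGBD(color, depth, width, height) {
  const out = new Float32Array(width * height * 4);
  for (let i = 0; i < width * height; i++) {
    out[i * 4 + 0] = color[i * 4 + 0] / 255; // R
    out[i * 4 + 1] = color[i * 4 + 1] / 255; // G
    out[i * 4 + 2] = color[i * 4 + 2] / 255; // B
    out[i * 4 + 3] = depth[i];               // D
  }
  return out;
}
```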