Check if a point is in front of or behind the mesh

I’m currently trying to solve the same problem this guy had, except I’m doing it in Babylon:

You can also refer to this sketchfab model, as it is doing what I’m hoping to achieve:

I want to be able to find out if a point P is behind the meshes. I have been using the normal of the mesh at the point P and checking if the normal is opposite to the camera’s look-at direction:

eq: dot(normal_p, camera.target - camera.position) < 0
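In Babylon terms, the check looks something like this (normalP here is a placeholder for the world-space normal of the mesh at P):

```js
// Current check: P is treated as visible when its normal faces the camera.
const viewDir = camera.target.subtract(camera.position).normalize();
const facingCamera = BABYLON.Vector3.Dot(normalP, viewDir) < 0; // true => P faces the camera
```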

but that doesn’t work well for meshes with folds, like a cloth or crumpled paper mesh.

For the best accuracy, I would have to fire a ray from the camera toward P and check whether P is being picked, but doing so would drastically reduce my performance, as ray picking is expensive, especially when I have dozens of points P. Throttling all the ray picks to once every 1000 ms with a setInterval doesn’t really help either.
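For reference, the ray-picking version I’d rather avoid would look something like this (pointP and mesh are placeholder names):

```js
// Cast a ray from the camera to P and check whether anything on the mesh is hit before P.
const ray = BABYLON.Ray.CreateNewFromTo(camera.position, pointP);
const hit = scene.pickWithRay(ray, (m) => m === mesh);
// P is visible if nothing is hit, or if the first hit is (almost) P itself.
const visible = !hit || !hit.hit || hit.pickedPoint.subtract(pointP).length() < 0.01;
```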

I have been looking into the depth renderer / depth map, but I am lost on how to retrieve the coordinates from the depth renderer.

Is there any suggested ways to perform this operation without adding too much load?

Can you explain exactly what you want to do?

I can only guess from the example that you want your hotspot sprites to change transparency based on whether or not they are occluded?

I want to show (or change opacity of) a HTML element at position P when P is not being occluded. P is a fixed position on the surface of a mesh. There might be several meshes in the scene, but for simplicity we can assume there’s only 1 mesh in the scene, not including the environment.


How I would do it if the circles are sprites inside the scene (real 3D objects that always face the camera):

  • render the scene without those sprites. Get the z-buffer of the scene in a texture
  • render only the sprites. In the fragment shader, compare the current depth of the fragment with the one in the depth texture at that location, and set the alpha value for this fragment depending on the result (if the fragment is not the closest to the camera, use an alpha like 0.5, else 1.0). See the sketch after this list.
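Very rough sketch of the setup for case 1 (circleMeshes and circleMaterial are placeholder names, not final code):

```js
// Depth of the scene WITHOUT the circles.
const depthRenderer = scene.enableDepthRenderer(camera);
const depthMap = depthRenderer.getDepthMap();
depthMap.renderList = scene.meshes.filter((m) => circleMeshes.indexOf(m) === -1);

// The circles get a ShaderMaterial whose fragment shader samples depthMap at
// gl_FragCoord.xy / screenSize, compares it with the fragment's own depth and
// outputs alpha 0.5 (occluded) or 1.0 (visible).
circleMaterial.setTexture("depthSampler", depthMap);
```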

If the circles are HTML elements, it’s a bit more complicated:

  • render the scene and get the z-buffer (in an array)
  • for each circle, project the center coordinates manually (using the 3D coordinates of the center in world space and the projection matrix of the camera). Then compute the z-depth of this point and compare it to the z-buffer, as in the fragment shader above (see the sketch after this list). The difference here is that your circle will be either fully opaque or fully transparent, because we check a single point (the center) against the z-buffer, which is not what is done in the Sketchfab sample (we can see that some circles can be partially transparent/opaque when we move around).
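A sketch of the CPU side for case 2 (pointP and depthData are placeholder names; it assumes the depth map has the same size as the canvas and stores linear depth as (viewZ - near) / (far - near)):

```js
const engine = scene.getEngine();
const width = engine.getRenderWidth(), height = engine.getRenderHeight();

// 1. Project the center of the circle to screen space.
const viewport = camera.viewport.toGlobal(width, height);
const screenPos = BABYLON.Vector3.Project(
    pointP, BABYLON.Matrix.Identity(), scene.getTransformMatrix(), viewport);

// 2. Depth of the point, in the same 0..1 range as the depth map.
const viewZ = BABYLON.Vector3.TransformCoordinates(pointP, camera.getViewMatrix()).z;
const pointDepth = (viewZ - camera.minZ) / (camera.maxZ - camera.minZ);

// 3. Depth stored in the z-buffer at that pixel (depthData: RGBA floats read back
//    from the depth map, bottom-up as WebGL returns them, hence the y flip).
const x = Math.round(screenPos.x);
const y = height - 1 - Math.round(screenPos.y);
const sceneDepth = depthData[(y * width + x) * 4];

// 4. The circle is occluded if something in the scene is closer than the point itself.
const occluded = sceneDepth < pointDepth - 0.001;
```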

I’m thinking of the second method. It doesn’t have to be partially transparent like in sketchfab; that would just add unnecessary complexity.

How do you get the z-buffer with BabylonJS exactly? I have been trying to play with DepthRenderer.getDepthMap() but without much luck so far.

Here it is for case 1:

https://playground.babylonjs.com/#CUH660


I’m pretty sure you just make its opacity drop if its depth value is greater than what the depth renderer returns.

Just saw your playground, @Evgeni_Popov. What is it not doing that you want?

Thanks for the PG, that looks really awesome! I’m thinking of implementing that Case 2 though, mainly because I have already written the React components for it. So, instead of accessing the depth map on the GPU with a shader, I will need to read the pixel values on the CPU.

But if data transfer from GPU to CPU kills too much performance, I will probably switch over to the Case 1 method >:D
Anyways, thanks for the help, I think I got a bit more clue on how to proceed.

Hi @Evgeni_Popov,

The shaders won’t work in v4.0.3, because scene.enableDepthRenderer only has one parameter in v4.0.3 (there’s no option to use non-linear depth). What kind of transformation would be necessary to convert gl_FragCoord.z into the appropriate value for comparison in that case?

I have updated the playground to pass false as the second parameter to enableDepthRenderer:

https://playground.babylonjs.com/#CUH660#1

You need the znear / zfar camera clipping plane values: I have put them directly in the shader, but you may want to use uniforms instead (those values are found in camera.minZ and camera.maxZ).
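For reference, the usual conversion looks something like this (assuming the linear depth map stores (viewZ - near) / (far - near) and that NDC z is in the standard -1..1 WebGL range; the GLSL is kept in a string just to have a single snippet):

```js
const linearizeGLSL = `
    uniform vec2 cameraPlanes; // x = camera.minZ, y = camera.maxZ, passed from JS
    float fragDepthLinear() {
        float n = cameraPlanes.x;
        float f = cameraPlanes.y;
        float zNdc = gl_FragCoord.z * 2.0 - 1.0;              // window depth -> NDC
        float viewZ = 2.0 * n * f / (f + n - zNdc * (f - n)); // NDC -> view-space depth
        return (viewZ - n) / (f - n);                         // same 0..1 range as the depth map
    }
`;
// JS side: material.setVector2("cameraPlanes", new BABYLON.Vector2(camera.minZ, camera.maxZ));
```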


Nice, I managed to arrive at something similar to that by digging through the Babylon GitHub repo, but this just gave me the confirmation I needed.

Again, thanks!

The 2nd solution is not practical:

https://playground.babylonjs.com/#CUH660#2

I did not finish the implementation because calling readPixels on the RenderTargetTexture each frame is a performance killer. So you should really use the shader solution.


I stand corrected: it’s a lot better if the buffer is created beforehand and handed to readPixels()!

So:

https://playground.babylonjs.com/#CUH660#3

I take the center of the sprite to check against the z-buffer.

Note also that I did it like in the Sketchfab example: the sprites don’t change size when you move in the scene.
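The important part is to allocate the read-back buffer once, outside the render loop, roughly like this (the depth map is assumed to be RGBA floats; the readPixels signature is the v4.x one and may differ in later versions):

```js
const depthMap = scene.enableDepthRenderer(camera).getDepthMap();
const { width, height } = depthMap.getSize();
const depthData = new Float32Array(4 * width * height);

scene.onAfterRenderObservable.add(() => {
    depthMap.readPixels(0, 0, depthData); // reuses depthData instead of allocating a new array
    // ...check each sprite center against depthData here...
});
```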

Enjoy! :slight_smile:


Hello @Evgeni_Popov, I realize it’s been a while, but do you remember what the numbers on line 63 mean?

That transforms the -1..1 x/y coordinates of posInViewProj to 0..1. z is already in the 0..1 range so no need for any transformation for it.

The last multiplication will transform 0..1 to 0..(screenWidth, screenHeight).
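In other words, roughly:

```js
// posInViewProj: the point after the view-projection transform, with x and y in -1..1.
const u = posInViewProj.x * 0.5 + 0.5; // -1..1 -> 0..1
const v = posInViewProj.y * 0.5 + 0.5; // -1..1 -> 0..1
const px = u * screenWidth;            // 0..1  -> 0..screenWidth
const py = v * screenHeight;           // 0..1  -> 0..screenHeight
// z is already in 0..1, so it is compared to the depth buffer as is.
```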
