How can I detect the occlusion of a Vector3 by a specific mesh with orthographic camera?

First off, congrats on 4.1! super exciting news and I’m looking forward to leveraging some of the new features in our large CAM application.

I’m working on a personal project and all was going well until I tried to cast some rays. I’m having a hard time working out the right combination of positions, etc., that I need to construct the ray properly.

Basically what I’m trying to achieve is as follows:
Given a Vector3 in world space, decide whether or not it is occluded by a known mesh.

I’m going to be using this logic in my project to manually cull the segments of a wireframe that should not be visible (the project takes a wireframe and the mesh for that wireframe, and computes the set of visible line segments to generate an SVG from).

I’ve created a playground with a minimal demo where I

  1. construct a box
  2. put a target sphere on that box
  3. construct a ray from the camera to the target (this is where I think I’ve gone wrong)
  4. test if the ray intersects the box
  5. change the target colour based on the test.

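In the playground I use Babylon’s `Ray` and mesh intersection for steps 3–4, but as a sanity check, here is the same geometric test written out in plain JavaScript against an axis-aligned box (all function names here are mine, not Babylon API):

```javascript
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function norm(v) {
  const l = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / l, v[1] / l, v[2] / l];
}

// Slab test: distance along the ray to the first hit with the box, or null.
function rayBoxDistance(origin, dir, boxMin, boxMax) {
  let tMin = -Infinity, tMax = Infinity;
  for (let i = 0; i < 3; i++) {
    const t1 = (boxMin[i] - origin[i]) / dir[i];
    const t2 = (boxMax[i] - origin[i]) / dir[i];
    tMin = Math.max(tMin, Math.min(t1, t2));
    tMax = Math.min(tMax, Math.max(t1, t2));
  }
  return tMax >= Math.max(tMin, 0) ? tMin : null;
}

// The target is occluded if the box is hit strictly before the target.
function isOccluded(cameraPos, target, boxMin, boxMax) {
  const dir = norm(sub(target, cameraPos));
  const t = rayBoxDistance(cameraPos, dir, boxMin, boxMax);
  const targetDist = Math.hypot(...sub(target, cameraPos));
  const eps = 1e-4; // tolerate the target sitting exactly on the box surface
  return t !== null && t < targetDist - eps;
}
```

The epsilon matters because the target sphere sits on the box, so the ray always grazes the surface at the target itself.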
https://www.babylonjs-playground.com/#669TCN#3

If you try the demo out, I would expect that as the camera rotates around the box, the target should turn red once it passes to the back side. This is partially working, but it takes far more rotation than expected for the transition to happen.

I suspect the issue is that the `from` argument of my ray is the camera centre, when really I want the intersection of the camera plane with the screen-space coordinate of the target?

I should add, I actually want this to work with an orthographic camera, so I’ve included the code for that in the demo. If you turn off the orthographic camera, it does appear to work, so maybe the issue is there?

any pointers greatly appreciated!

(edit, posted wrong link, oops)

OK so I’ve managed to get it working, but my solution feels super hacky

https://www.babylonjs-playground.com/#669TCN#4

I’m projecting the target to screen space, then unprojecting that with z=0 to get the coordinate at the viewport plane, then casting a ray from there to the target.

Doing the projection is not the end of the world, as I have to project my coordinates anyway to extract the lines in screen space for the SVG conversion. However, for this specific demo case it feels like there must be a better way?

You could also use the zbuffer: compute the depth of your point on the CPU and check it against the zbuffer. If z == z_in_zbuffer, the point is visible; otherwise it is not. You should compare with an epsilon, as you won’t get exactly the same z-value as the one in the zbuffer.

If you need to perform this check multiple times for a given frame, it may be faster than creating a ray and testing for intersections for each check, depending on the shape complexity of the mesh.
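The comparison itself is tiny; reading the zbuffer back is the engine-specific part and is omitted here. A sketch of the CPU side, assuming a linear orthographic depth convention (both function names are illustrative):

```javascript
// Linear depth of a point for an orthographic camera, mapped to [0, 1]
// with the convention depth = (viewZ - near) / (far - near).
function orthoDepth(viewSpaceZ, near, far) {
  return (viewSpaceZ - near) / (far - near);
}

// A point is visible when its own depth matches the depth stored at its
// pixel, within an epsilon; anything nearer drawn there means occlusion.
function isVisibleByDepth(pointDepth, zbufferDepth, eps = 1e-3) {
  return Math.abs(pointDepth - zbufferDepth) < eps;
}
```

The right epsilon depends on your near/far range and zbuffer precision, so expect to tune it.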

Also, if you know your mesh is convex, you can simply check each face of the mesh: if a face is front facing, all of its vertices are visible; otherwise they are not.
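This shortcut is especially cheap with an orthographic camera, since the view direction is the same everywhere, so the front-facing test is a single dot product per face (sketch, with illustrative names):

```javascript
function dot3(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// A face of a convex mesh is front facing when its outward normal points
// against the (constant, orthographic) view direction. `viewDir` points
// from the camera toward the scene.
function isFrontFacing(faceNormal, viewDir) {
  return dot3(faceNormal, viewDir) < 0;
}
```

With a perspective camera you would instead use the direction from the camera to a point on the face, since the view direction varies per face.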