Best way to get only visible vertices?

What’s the best way to get vertices that are visible in the camera?
Would like to ignore occluded vertices and/or vertices behind the camera.

I’m considering using a depth map projected to camera space to filter z buffer values, but would like to know if there’s a better approach

This is one example PG
But it looks quite involved

You can also think about doing it with the CPU. Checking which meshes are in the camera frustum is easy to do
The system is actually doing it for you: scene.activeMeshes is exactly what you need

Occlusion will not be taken into account with that approach, but at least it will be super fast
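For reference, the frustum part of that check boils down to a clip-space comparison. Here's a minimal, dependency-free sketch (the names `transformPoint` and `isInFrustum` are hypothetical, and it assumes a column-major, GL-style view-projection matrix):

```javascript
// Transform a world-space point [x, y, z] by a column-major 4x4 matrix,
// returning homogeneous clip coordinates [cx, cy, cz, w].
function transformPoint(m, p) {
  const [x, y, z] = p;
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
    m[3] * x + m[7] * y + m[11] * z + m[15],
  ];
}

// A point is inside the frustum iff w > 0 (in front of the camera) and
// every clip coordinate lies within [-w, w].
function isInFrustum(viewProj, point) {
  const [cx, cy, cz, w] = transformPoint(viewProj, point);
  return w > 0 && Math.abs(cx) <= w && Math.abs(cy) <= w && cz >= -w && cz <= w;
}
```

This is what per-mesh culling does with the mesh's bounding box corners; running it per vertex on the CPU is cheap but, as noted below, it says nothing about occlusion.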

Ok actually I’m more interested in the vertices level, not just the full mesh
Like knowing which vertices are occluded or not
Is there a way?

Did you evaluate occlusion queries?

By rendering the mesh with points, you should be able to check if any is visible or not.

Thanks @Cedric, I think that’s on the mesh level

Actually I would like to go further than that and directly check which of the vertices are visible or not
For example, I want to know whether vertex 0, 100, or 1000 is visible in the camera view. Is there a way to do this?

const vertices = mesh.getPositionData(true, true);
for (let i = 0; i < vertices.length / 3; i++) {
    if (isVertexVisible(i, scene.activeCamera)) {  // looking for ideas on how to do this
        // should return false if the vertex is in the camera view
        // BUT occluded by another vertex/face
    }
}
If you only have a few vertices to check, you can try to do scene picking

With a ray going from the camera position to the vertex position you want to check: if the picked point is that same position, the vertex is visible; if not, it's hidden behind another mesh.
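In Babylon you would do this with `scene.pickWithRay`; purely to illustrate the geometry of the test, here is a dependency-free sketch using Möller–Trumbore ray/triangle intersection (all names are hypothetical, and occluders are passed as a flat list of triangles):

```javascript
// Small vector helpers for [x, y, z] arrays.
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// Möller–Trumbore: distance along `dir` to the triangle, or null if no hit.
function rayTriangle(orig, dir, v0, v1, v2) {
  const EPS = 1e-7;
  const e1 = sub(v1, v0), e2 = sub(v2, v0);
  const h = cross(dir, e2);
  const a = dot(e1, h);
  if (Math.abs(a) < EPS) return null;        // ray parallel to triangle plane
  const f = 1 / a, s = sub(orig, v0);
  const u = f * dot(s, h);
  if (u < 0 || u > 1) return null;
  const q = cross(s, e1);
  const v = f * dot(dir, q);
  if (v < 0 || u + v > 1) return null;
  const t = f * dot(e2, q);
  return t > EPS ? t : null;
}

// A vertex is visible if no triangle is hit strictly closer than the
// vertex itself (the small margin skips the vertex's own faces).
function isVertexVisible(camPos, vertex, triangles) {
  const toVertex = sub(vertex, camPos);
  const dist = Math.sqrt(dot(toVertex, toVertex));
  const dir = toVertex.map((c) => c / dist);
  for (const [v0, v1, v2] of triangles) {
    const t = rayTriangle(camPos, dir, v0, v1, v2);
    if (t !== null && t < dist - 1e-4) return false;
  }
  return true;
}
```

This is what the picking approach does under the hood, which is also why it's O(triangles) per vertex unless an acceleration structure (octree, BVH) is used.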

Yep I’ve thought about that : P
The thing is, this would be really slow with 100k to 1M+ vertices; I'm wondering if there's a faster way

I think with WebGPU + a compute shader it's possible to do it quite fast: compute the clip-space position in the compute shader, sample the depth buffer, and write the result of the check into a texture.
It's also possible to sample the depth buffer on the CPU, but it will be slower for a huge number of vertices.


Any solution to this that doesn't involve shaders and WebGPU?

> I’m considering using a depth map projected to camera space to filter z buffer values, but would like to know if there’s a better approach
>
> This is one example PG

But the values are super finicky for more complex meshes

Yes, it can be tedious work.
At line 52 of that PG, the depth buffer is copied from GPU memory to CPU memory, which has a cost. It's better to keep the data in GPU memory and do the work exclusively on the GPU for better performance.
That said, if you want to keep using the data on the CPU, you will have to compute an object's bounding box, project it into clip space, read all the depth values covered by its pixels, and deduce whether it's behind geometry or not.
Another solution is to compute a mipmap chain using a shader, where each value is the farthest depth instead of the average. Then, depending on the object's size on screen, pick the corresponding mip level.
If all the depth values of the object's pixels are greater than that mip value, it's hidden. Otherwise it's at least partially visible.

It’s more or less what occlusion query does.


Hello, just checking in, was your question answered? @mrlooi

Kind of, though the implementation is the main thing the post was asking about :stuck_out_tongue: