GPU picking point and normal example

This is pretty cool :slight_smile: I wonder how we could make it a Babylon feature, and at what point the performance tips from CPU to GPU picking.

Their coordinate picking is still based on the scene.pick method; the underlying implementation also seems to use rays.

First, create a dedicated depth map for the picking list.
Should we add a parameter to the pick method?
Or add a property to the scene?
I think that ray picking and GPU picking have their own advantages and disadvantages, and giving users the freedom to choose seems to be the optimal solution.
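The two API shapes being debated could be sketched like this (purely hypothetical names, not an existing Babylon.js API; rayPick/gpuPick are stand-ins for the real backends):

```javascript
// Hypothetical dispatcher: lets the caller choose the picking backend per call.
function rayPick(scene, x, y) {
  return { backend: "ray", x, y }; // stand-in for the classic ray-based path
}

function gpuPick(scene, x, y) {
  return { backend: "gpu", x, y }; // stand-in for a GPU readback path
}

function pick(scene, x, y, options = {}) {
  // A flag on the call (or, alternatively, a scene-level property)
  // selects between the two implementations.
  return options.useGpuPicking ? gpuPick(scene, x, y) : rayPick(scene, x, y);
}
```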


I’m sorry to hear that. I haven’t looked at the pick method in detail. But if your applications are time-critical, I would suggest running time-intensive functions with gpu.js.

Also, I asked GPT what fast methods are out there.

GPT
The fastest methods for ray picking, also known as ray casting or ray tracing, depend on the specific context and requirements of the application. Here are a few commonly used techniques:

  1. Bounding Volume Hierarchy (BVH): Construct a hierarchical data structure, such as a BVH tree, to accelerate ray-object intersection tests. The tree organizes objects based on their spatial relationships, allowing for efficient pruning of large portions of the scene during ray traversal.
  2. Spatial Partitioning: Divide the scene into spatial partitions, such as grids or octrees, to quickly identify potential intersections with objects in the ray’s path. This approach reduces the number of intersection tests required by excluding objects in empty or irrelevant partitions.
  3. GPU Acceleration: Utilize the parallel processing power of modern graphics processing units (GPUs) to perform ray-object intersection tests efficiently. Techniques like GPU ray tracing or shader-based algorithms (e.g., ray marching) can leverage the GPU’s computational capabilities for real-time ray picking.
  4. Ray-Sphere or Ray-AABB Intersection Tests: For simple shapes like spheres or axis-aligned bounding boxes (AABBs), use specialized intersection tests that can be computed quickly without complex computations.
  5. Culling Techniques: Implement early exit strategies, such as back-face culling, to discard objects facing away from the ray’s origin. This can help reduce the number of intersection tests performed.

It’s important to note that the choice of method depends on factors like scene complexity, object types, available hardware, and performance requirements. A combination of these techniques or additional optimization strategies may be necessary to achieve the desired level of performance in ray picking applications.
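As an illustration of item 4 above, a minimal slab-method ray/AABB intersection test might look like this (a sketch, not Babylon's implementation):

```javascript
// Slab-method ray vs. axis-aligned bounding box test.
// origin, dir, min, max are [x, y, z] arrays; dir need not be normalized.
function rayIntersectsAABB(origin, dir, min, max) {
  let tMin = -Infinity;
  let tMax = Infinity;
  for (let i = 0; i < 3; i++) {
    if (dir[i] === 0) {
      // Ray is parallel to this slab: it must start inside it.
      if (origin[i] < min[i] || origin[i] > max[i]) return false;
    } else {
      let t1 = (min[i] - origin[i]) / dir[i];
      let t2 = (max[i] - origin[i]) / dir[i];
      if (t1 > t2) [t1, t2] = [t2, t1];
      tMin = Math.max(tMin, t1);
      tMax = Math.min(tMax, t2);
      if (tMin > tMax) return false; // slabs do not overlap: miss
    }
  }
  return tMax >= 0; // intersection must be in front of the ray origin
}
```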


Last but not least, for your interest, this demo shows the cost of raycasting per pixel or ray:

Ray-Casting Algorithm Internal Stuff - YouTube

Have a nice day! :slight_smile:


Ray picking can be accelerated in Babylon using an octree.
I chose to study GPU picking because there were other factors besides speed. For more information, see: How to obtain the correct picking result after morph application? - #11 by xiehangyun
Of course, gpu.js is also a valuable recommendation that I will learn and try to use in my programs.
Thanks for your reply!

GPU picking point and normal | Babylon.js Playground (babylonjs.com)

I’m trying to integrate GPUDepthPicking with GPUColorPicking.
This enables simple GPU picking.
Can I ask for your help, and submit a PR when the feature is complete enough?

GPU picking point and normal | Babylon.js Playground (babylonjs.com)
This is the biggest performance optimization I can do, and it is probably still not as fast as spatially partitioned ray picking.
The advantage is that vertex changes that exist only on the GPU side, such as morph targets and skinned meshes, can be picked correctly.
If you can ensure that the camera is stationary and the picking list is unchanged, using cached readPixels data may greatly improve speed.
GPUDepthPicking is WebGPU compatible.
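The caching idea above could be sketched like this (readPixelsFromGpu is a stand-in for the real, expensive readPixels call; wiring up the invalidation is left to the integration):

```javascript
// Sketch: cache the picking-texture readback while nothing changes.
class PickBufferCache {
  constructor(readPixelsFromGpu) {
    this._read = readPixelsFromGpu; // the expensive GPU readback
    this._cache = null;
  }
  // Call this when the camera moves or the pick list changes.
  invalidate() {
    this._cache = null;
  }
  // Returns cached pixel data; re-reads from the GPU only after invalidation.
  getPixels() {
    if (this._cache === null) {
      this._cache = this._read();
    }
    return this._cache;
  }
}
```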

cc @Evgeni_Popov

GPU picking point and normal | Babylon.js Playground (babylonjs.com)
For better performance, I delayed picking until the next frame, but this causes picking to return incorrect results when the camera moves.
In this playground, comment out line 76, then drag the camera and click; you will see the problem above.
Do you have any good ideas?

OK, the problem with incorrect picking results is resolved.
The cache buffer is enabled while the scene is static to improve performance (provided the pick list is unchanged).
When the camera is in motion, the cache buffer is disabled.
Scene objects are assumed not to be animated.
:laughing: :laughing: :laughing:
GPU picking point and normal | Babylon.js Playground (babylonjs.com)
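The "disable the cache while the camera moves" rule amounts to comparing the view matrix between frames; a minimal sketch using plain arrays instead of Babylon's Matrix:

```javascript
// Detects camera motion by comparing the current view matrix
// (a flat array of 16 numbers) to the one seen on the previous call.
function makeCameraMotionDetector() {
  let previous = null;
  return function hasMoved(viewMatrix) {
    const moved =
      previous === null || // first frame: nothing cached yet, treat as moved
      viewMatrix.some((v, i) => v !== previous[i]);
    previous = viewMatrix.slice();
    return moved;
  };
}
```

In Babylon.js itself, `camera.onViewMatrixChangedObservable` could be used to trigger the cache invalidation instead of comparing matrices by hand.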

Yes, no problem!

I think you should make a single GPUPicking class, that will be able to either do depth/normal or color picking, or both at the same time. The latter is why it should be a single class, we don’t want to render the scene two times in this case, but a single time. So, we will need multi-render target, to generate the depth and the color when we need both (which means we won’t use the depth renderer, which can only render the depth).

Regarding color picking, we should support picking at the mesh or face level, and it should work with instances / thin instances.
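For the color-picking part, each mesh (or face) is typically assigned a unique id that is encoded into the RGB color it is rendered with, then decoded from the pixel read back under the pointer; a sketch of the encoding:

```javascript
// Encode a 24-bit pick id into an RGB triple and back.
// With 8 bits per channel this supports ids 0..16777215.
function idToColor(id) {
  return [
    (id >> 16) & 0xff, // r
    (id >> 8) & 0xff,  // g
    id & 0xff,         // b
  ];
}

function colorToId([r, g, b]) {
  return (r << 16) | (g << 8) | b;
}
```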


I also think so; GPUDepthPicking is just a subclass of GPUPicking.
It has basically implemented the functionality.
But I don’t know how to render depth with a renderTargetTexture alone, so I can only use the depth from the main frame’s rendering.
Besides creating a separate renderTargetTexture for it, I was wondering if it would be possible to reduce rendering cost by rendering only specified areas.

Regarding GPUColorPicking, the cited examples handle meshes, instances, and thin instances.
But mixing them is too complicated, so I haven’t officially started development yet.
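The single GPUPicking class suggested above might be organized like this (a purely hypothetical skeleton; only the decision about how many MRT attachments are needed is shown, so the scene is rendered once whichever modes are enabled):

```javascript
// Sketch: one GPUPicking class that renders the scene a single time and
// attaches only the outputs it needs (depth/normal, color ids, or both).
class GPUPicking {
  constructor({ depth = false, color = false } = {}) {
    if (!depth && !color) {
      throw new Error("Enable at least one of depth or color picking");
    }
    this.depth = depth;
    this.color = color;
  }
  // Number of attachments the multi-render target would need.
  get targetCount() {
    return (this.depth ? 1 : 0) + (this.color ? 1 : 0);
  }
}
```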

Yes, I think we should also support rendering in a specific rectangle, when we know in advance the region we want to pick. It should be possible to do this with a scissor rect.

No doubt there would be a lot of work to do to implement the functionality with all the features we’d need in Babylon.js, but it would be a great addition to the framework!

How a scissor rect is implemented is completely unclear to me.
I also need references for rendering depth and color textures with a MultiRenderTarget.
And for how to render the color texture at the triangle (face) level.

I want to start by using a MultiRenderTarget to render depth and implement the DepthPicking part.
Then try scissor-rect rendering.
Then render the color texture on top of this to achieve color picking.

Are there any references for MultiRenderTarget, the depth renderer, and scissor rects?

A scissor rect tells the underlying rendering system that you want to render only a subset of the picture: you give the (x, y) offset of the top-left corner of the rectangle plus its (width, height), and rendering will only occur inside this rectangle.
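For picking a single point, the scissor rect can be a small square around the pointer, clamped so it stays inside the canvas; a sketch of computing it (the result would then be passed to something like `engine.enableScissor(x, y, width, height)`):

```javascript
// Computes a small scissor rectangle centered on the pick point,
// clamped so the whole rectangle stays inside the canvas.
function pickScissorRect(pickX, pickY, canvasWidth, canvasHeight, size = 8) {
  const x = Math.min(Math.max(pickX - size / 2, 0), canvasWidth - size);
  const y = Math.min(Math.max(pickY - size / 2, 0), canvasHeight - size);
  return { x, y, width: size, height: size };
}
```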

Here’s an example which is using a scissor rect:

Regarding multi-target rendering, there is the documentation of the class as well as some PGs, like this one. You should be able to find more from this forum or searching the playgrounds from the documentation web site.


Okay, wait for me to study it!

GPU picking point and normal | Babylon.js Playground (babylonjs.com)
I tried enableScissor to render the depth texture, but it doesn’t seem to render faster.
readPixels does seem a bit faster, but that could be normal fluctuation.

You won’t be able to time the change with console.time because the rendering happens in parallel, on the GPU. You should look at the “GPU frame time” counter in the “Statistics” pane of the inspector instead.


Average GPU frame time.
There still seems to be a gap, although the gap is very small compared to the CPU timings.

MultiRenderTarget | Babylon.js Documentation (babylonjs.com)

Render Target Instances | Babylon.js Playground (babylonjs.com)

GPU picking point and normal | Babylon.js Playground (babylonjs.com)