Get Picked Position from GPU Pick Discussion

So of course we all know you can't directly get the position of the picked point from the GPU in our current setup, but that doesn't mean it's not possible.

The way you would do it now with our setup is to first identify the mesh from the GPU pick, then run a scene.pick(x, y, predicate) with that mesh as the predicate, which ends up tapping into the CPU anyway. Things get more complicated as you have more predicates to support for different situations.
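
For reference, the two-step flow I mean looks roughly like this (just a sketch; `scene` and `pickableMeshes` stand in for whatever you already have):

```ts
// Sketch of the two-step flow: GPU pick to identify the mesh, then a CPU
// scene.pick restricted to that mesh to recover the world position.
const picker = new BABYLON.GPUPicker();
picker.setPickingList(pickableMeshes);

scene.onPointerObservable.add(async () => {
    const gpuResult = await picker.pickAsync(scene.pointerX, scene.pointerY);
    if (!gpuResult?.mesh) {
        return;
    }
    // Second, CPU-side pick limited to the mesh the GPU pass found.
    const cpuPick = scene.pick(scene.pointerX, scene.pointerY, (m) => m === gpuResult.mesh);
    if (cpuPick?.hit) {
        console.log("picked point:", cpuPick.pickedPoint);
    }
});
```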

What if we had the option, when you create the GPU picker, to do a secondary RTT that captures and digests the world position of the picked point? It would basically follow the same flow the GPU picker currently uses to capture the mesh ID (and could possibly be skipped if timing allowed and the mesh ID came back null), but apply a material that outputs the world position.

This would allow us to return the picked point optionally with GPU picking.

Would this be efficient? Would just doing the secondary CPU pick be better at that point? How far could something like this be taken? For example, if it's efficient, could we then include optional passes for UV and normal?

I was going to try to whip this up myself, but then saw there were some things in the GPUPicker class that I'll need to digest before attempting that.

The other way I was thinking of getting the position was changing the shader to use a float texture, getting rid of the unique colors and just using the r channel for the id. That way, on the shader pass, we could stick the depth value in the g channel; then from the camera matrix, screen XY, and depth we could determine the world position without doing another pass. That would also leave the blue and alpha channels open to render the spherical coordinates of the normal data.

That would pack meshId, depth, theta, and phi in the shader pass, which we could decode really quickly into usable data.
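
Decoding that on the CPU could look something along these lines (a rough sketch only; the depth range and the spherical convention are assumptions that would need to match the shader):

```ts
// Sketch: decode a packed (meshId, depth, theta, phi) float pixel on the CPU.
// `pixel` is the RGBA float value read back from the picking RTT; the depth in
// pixel.g is assumed to be in the 0..1 range that Vector3.Unproject expects.
function decodeGpuPick(
    pixel: { r: number; g: number; b: number; a: number },
    screenX: number,
    screenY: number,
    scene: BABYLON.Scene
) {
    const meshId = Math.round(pixel.r);

    // World position: unproject screen XY + depth through the camera matrices.
    const engine = scene.getEngine();
    const worldPos = BABYLON.Vector3.Unproject(
        new BABYLON.Vector3(screenX, screenY, pixel.g),
        engine.getRenderWidth(),
        engine.getRenderHeight(),
        BABYLON.Matrix.Identity(),
        scene.getViewMatrix(),
        scene.getProjectionMatrix()
    );

    // Normal: rebuild the unit vector from spherical angles (theta = polar, phi = azimuth).
    const theta = pixel.b;
    const phi = pixel.a;
    const normal = new BABYLON.Vector3(
        Math.sin(theta) * Math.cos(phi),
        Math.cos(theta),
        Math.sin(theta) * Math.sin(phi)
    );

    return { meshId, worldPos, normal };
}
```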

Just wanted some input to figure out if this is a waste of time or not.

As for the picked point: we have already done this. It is buried in the original announcement thread. You can use the existing position texture from the Geometry Buffer Renderer or from the Prepass Renderer.
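
Roughly, reading it back looks something like this (a sketch from memory rather than the exact code from that thread, so double-check the texture index lookup and the read-back):

```ts
// Sketch: read the world position of a picked pixel from the Geometry Buffer
// Renderer's position texture. The buffer must have rendered at least one frame
// with positions enabled, and the read-back may be vertically flipped.
async function readPickedPosition(scene: BABYLON.Scene, x: number, y: number) {
    const gbr = scene.enableGeometryBufferRenderer();
    if (!gbr) {
        return null;
    }
    gbr.enablePosition = true;

    const index = gbr.getTextureIndex(BABYLON.GeometryBufferRenderer.POSITION_TEXTURE_TYPE);
    const posTexture = gbr.getGBuffer().textures[index];

    // Full-texture read-back for simplicity; a real implementation would read
    // only the picked texel.
    const data = await posTexture.readPixels();
    if (!data) {
        return null;
    }
    const buffer = data as Float32Array;
    const width = posTexture.getSize().width;
    const offset = (y * width + x) * 4;
    return new BABYLON.Vector3(buffer[offset], buffer[offset + 1], buffer[offset + 2]);
}
```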

I also tried general raycasting with the GPUPicker but performance was worse compared to CPU raycasting. See comments. Maybe you can improve?

I already adapted the class to use a float texture: r channel for id, green for depth, and blue/alpha for the spherical normal coordinates. I am now able to get the mesh id, world position, and normal of the picked point in a single pass from the GPU.
Now I just need to add instance support back in.

@Deltakosh or @roland, if I get a prototype of this class (which is just a copy and paste of the BJS GPUPicker class) into a PG to demonstrate, can we take a look at the feasibility of making our actual class run like this? I’m worried about how to handle thin instances; instances will not be an issue because we still have the fractional part of the id that we can use as a secondary id.

There are still some slight oddities with creating the ray from screen space and applying the depth to that, but I’m sure those can be fixed up.

If you move your mouse, you will see that the closer you get to the “center” of the screen, the more the sphere dips down into the plane, which it should not.

But yeah here is a quick and dirty proof of concept.

Think we could expand on this? Getting the position and normal data from the GPU pick will really make this system useful.

Here is the corrected version that fixes the depth value when the pick is not from the center of the screen.
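
For context, one common way to apply a view-space depth along an off-center picking ray (not necessarily exactly what the playground does; `viewDepth`, `x`, and `y` are placeholders) is to correct for the angle between the ray and the camera's forward axis:

```ts
// Sketch: applying a view-space depth (distance along the camera's forward axis)
// to a picking ray. Off-center rays travel farther for the same view depth, so
// the ray length is corrected by the angle between the ray and the forward axis.
const ray = scene.createPickingRay(x, y, BABYLON.Matrix.Identity(), scene.activeCamera);
const forward = scene.activeCamera!.getForwardRay().direction;
const t = viewDepth / BABYLON.Vector3.Dot(ray.direction, forward);
const worldPos = ray.origin.add(ray.direction.scale(t));
```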

I for sure broke instances though… :frowning:

UPDATE
Here is basic instance support, https://playground.babylonjs.com/#NIMS34#12

I still need to pack the baseId and the instanceId into the whole and fractional parts of the float, but that can wait. I also need to figure out how to support resizing, but I'm sure that will be easy.
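
The whole/fractional packing I have in mind is something like this (just a sketch; the divisor is arbitrary and 32-bit float precision would need checking against real instance counts):

```ts
// Sketch: pack a base mesh id and an instance id into one float.
// The instance id lives in the fractional part, scaled so it stays below 1.
const MAX_INSTANCES = 1000; // assumed upper bound; float precision limits apply

function packIds(baseId: number, instanceId: number): number {
    return baseId + instanceId / MAX_INSTANCES;
}

function unpackIds(packed: number): { baseId: number; instanceId: number } {
    const baseId = Math.floor(packed);
    const instanceId = Math.round((packed - baseId) * MAX_INSTANCES);
    return { baseId, instanceId };
}
```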

UPDATE UPDATE

Here is the most recent version. I'm still not really getting the id of the instance correct, but that is because I have been dabbling in my local environment and have come across a mesh that breaks GPU picking entirely. I'm trying to figure out why that is, but it seems to have something to do with the mesh having vertex colors and some weird normals.

So I totally got all this figured out now on Frame and it works really well!

Buuuut as we are starting to use it now, the topic came up of what to do about ADTs (AdvancedDynamicTextures). So I had to leave the internal pick in place for pointer downs to keep the buttons working.

That had me thinking: it would probably make more sense to return the UV coordinates in the BA channels, or at least have the option to, instead of the normal (phi/theta).

What I was wondering is: if I have the picked mesh and the picked UV coords, would it be possible to pass those synthetically to the ADT bound to that mesh for the mouse events, so the UI elements can be activated with GPU picking?

I think so.

It all happens here:
Babylon.js/packages/dev/gui/src/2D/advancedDynamicTexture.ts at master · BabylonJS/Babylon.js

The root idea is that the scene raises that observable, onPrePointerObservable, with the expected data, allowing the texture to capture the event before the scene does.

So if you can raise that observable from GPU picking, you can simulate the same behavior.

The core is calling it here: Babylon.js/packages/dev/core/src/Inputs/scene.inputManager.ts at a87b232ef1412ca32fb0f14659abf51b0b862293 · BabylonJS/Babylon.js

My recommendation is to use this: Babylon.js/packages/dev/core/src/Inputs/scene.inputManager.ts at a87b232ef1412ca32fb0f14659abf51b0b862293 · BabylonJS/Babylon.js

This is meant to simulate clicks and events.
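
A minimal sketch of wiring that up from a GPU pick could look like this (assuming the helpers in question are the scene's simulatePointerMove / simulatePointerDown / simulatePointerUp; the PickingInfo fields filled in are just an illustration):

```ts
// Sketch: feed a GPU pick result into the scene's synthetic pointer helpers so
// an AdvancedDynamicTexture bound to the picked mesh can react to it.
function forwardGpuPickToAdt(scene: BABYLON.Scene, mesh: BABYLON.AbstractMesh, x: number, y: number) {
    const pickInfo = new BABYLON.PickingInfo();
    pickInfo.hit = true;
    pickInfo.pickedMesh = mesh;
    // The ADT resolves UVs via getTextureCoordinates(), which on a real pick uses
    // the ray plus triangle data; depending on the controls involved you may also
    // need to fill pickedPoint, faceId, bu and bv.
    pickInfo.ray = scene.createPickingRay(x, y, BABYLON.Matrix.Identity(), scene.activeCamera);

    scene.simulatePointerMove(pickInfo, { pointerId: 0 });
    scene.simulatePointerDown(pickInfo, { pointerId: 0 });
    scene.simulatePointerUp(pickInfo, { pointerId: 0 });
}
```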

Ok, so looking at this, it's not really the UV that I need to pass. It looks as though when I fire my GPU picks, I can pass the pointerInfo from the pick event to the synthetic methods and the ADTs should just work, it appears?

Correct

Ok, sorry to keep bugging you on this, but the way I'm doing it now seems to freeze up the scene with an error about the ray. I'm assuming it's because there are still some missing things to populate in the IGPUpointerInfo.

right now I have:

Currently my Ray is made with:

The x/y there is coming from the _executePicking props.
When outputting the UV it's getting me the correct values (ignore that it says normal.x and normal.y for the values; I was being lazy for testing and will fix the names later).

Is there something else that I need to include in the pickingInfo to get the simulated pick to work?

Maybe I don't have the second property correct here:

But I only saw examples for XR “pointerEventInit” and am not sure what to do with just normal pointers.

Here is the error btw:

yeah that will be tough without a repro :frowning:

hmm ok, I'll see what I can do!

We ended up ditching this; it looks like the GPU timings go up too high with our setup.

Good old refined CPU picking system for the win.

I guess if we needed to do 100k balls flying around that were pickable, this would be the ticket.
