So of course we all know you can't directly get the position of the picked point from the GPU in our current setup, but that doesn't mean it isn't possible.
The way you would do it now with our setup is to first identify the mesh from the GPU pick, then run a scene.pick(x, y, predicate) with that mesh as the predicate, which ends up tapping into the CPU anyway. Things get more complicated as you have more predicates to support for different situations.
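For reference, here is a rough sketch of that two-step flow as I understand it (assuming the usual GPUPicker usage; `pickWithPosition` is just a placeholder name):

```ts
import { AbstractMesh, GPUPicker, Scene } from "@babylonjs/core";

// Step 1: GPU pick to find which mesh is under the pointer.
// Step 2: CPU scene.pick restricted to that mesh to recover the world position.
async function pickWithPosition(scene: Scene, picker: GPUPicker, x: number, y: number) {
    const gpuResult = await picker.pickAsync(x, y);
    if (!gpuResult?.mesh) {
        return null;
    }
    // CPU ray cast, limited to the single mesh the GPU pass identified.
    const cpuResult = scene.pick(x, y, (mesh: AbstractMesh) => mesh === gpuResult.mesh);
    return cpuResult?.pickedPoint ?? null;
}
```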
What if we had the option, when you create the GPU picker, to do a secondary RTT that captures and digests the world position of the picked point? It would basically follow the same flow the GPU picker currently uses to capture the mesh ID (and could possibly be skipped, if timing allowed, when the mesh ID came back null), but apply a material that outputs the world position.
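Something like this is what I have in mind for that world-position pass; a minimal sketch only, assuming a float render target and a throwaway ShaderMaterial (names are hypothetical):

```ts
import { Effect, Scene, ShaderMaterial } from "@babylonjs/core";

// Minimal "world position" material: the fragment writes the interpolated
// world-space position into a float RTT, so reading the texel under the
// cursor gives the picked point directly.
Effect.ShadersStore["worldPosVertexShader"] = `
    precision highp float;
    attribute vec3 position;
    uniform mat4 world;
    uniform mat4 viewProjection;
    varying vec3 vWorldPos;
    void main(void) {
        vec4 wp = world * vec4(position, 1.0);
        vWorldPos = wp.xyz;
        gl_Position = viewProjection * wp;
    }`;

Effect.ShadersStore["worldPosFragmentShader"] = `
    precision highp float;
    varying vec3 vWorldPos;
    void main(void) {
        gl_FragColor = vec4(vWorldPos, 1.0);
    }`;

function createWorldPosMaterial(scene: Scene): ShaderMaterial {
    return new ShaderMaterial("worldPosMat", scene, "worldPos", {
        attributes: ["position"],
        uniforms: ["world", "viewProjection"],
    });
}
// The RTT itself would need a float texture type (Constants.TEXTURETYPE_FLOAT)
// so positions aren't clamped to [0, 1].
```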
This would allow us to optionally return the picked point with GPU picking.
Would this be efficient? Would just doing the secondary CPU pick be better at that point? How far could something like this be taken? For example, if it's efficient, could we then include optional passes for UV and normal?
I was going to try to whip this up myself, but then saw there were some things in the GPU Picker class that I'll need to digest before attempting that.
The other way I was thinking of getting the position was changing the shader to use a float texture: get rid of the unique colors and just use the R channel for the ID. Then, in the same shader pass, we could stick the depth value in the G channel, and from the camera matrix, the screen XY, and the depth we could determine the world position without doing another pass. That would also leave the blue and alpha channels open to hold the spherical coordinates of the normal.
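Roughly what I mean on the encode side (a sketch only; pushing `meshID` as a plain float uniform is an assumption, and the spherical convention is arbitrary):

```ts
import { Effect } from "@babylonjs/core";

// Fragment for a single RGBA32F picking pass:
//   R = mesh ID, G = depth, B = theta, A = phi (normal in spherical form).
Effect.ShadersStore["packedPickFragmentShader"] = `
    precision highp float;
    varying vec3 vWorldNormal;
    uniform float meshID; // hypothetical per-mesh ID pushed as a float
    void main(void) {
        vec3 n = normalize(vWorldNormal);
        float theta = acos(clamp(n.y, -1.0, 1.0)); // polar angle from +Y
        float phi = atan(n.z, n.x);                // azimuth in the XZ plane
        gl_FragColor = vec4(meshID, gl_FragCoord.z, theta, phi);
    }`;
```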
That would pack meshId, depth, theta, and phi into one shader pass, which we could decode really quickly into usable data.
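And the decode side would be something like this (again just a sketch; it assumes the stored depth is the usual [0, 1] window-space depth, remapped to NDC before unprojecting):

```ts
import { Scene, Vector3 } from "@babylonjs/core";

// Rebuild world position and normal from one RGBA readback: [meshID, depth, theta, phi].
function decodePackedPick(scene: Scene, x: number, y: number,
                          width: number, height: number, texel: Float32Array) {
    const [meshID, depth, theta, phi] = texel;

    // Screen pixel + depth -> NDC -> world, via the inverted view-projection matrix.
    const ndc = new Vector3(
        (x / width) * 2 - 1,
        1 - (y / height) * 2,
        depth * 2 - 1 // assumes gl_FragCoord.z-style depth in [0, 1]
    );
    const invViewProj = scene.getTransformMatrix().clone().invert();
    const worldPos = Vector3.TransformCoordinates(ndc, invViewProj); // includes the w divide

    // Spherical -> cartesian normal (matches the encode convention above).
    const normal = new Vector3(
        Math.sin(theta) * Math.cos(phi),
        Math.cos(theta),
        Math.sin(theta) * Math.sin(phi)
    );

    return { meshID, worldPos, normal };
}
```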
Just wanted some input to figure out if this is a waste of time or not.