I’ve searched through the documentation, but can’t find any mention of a way to make a point cloud pickable. The best workaround I can come up with would be to refactor to use a solid particle system, but I would rather avoid that, as my application can have many thousands of points, and I would like to keep it as lightweight as possible.
Simply setting the PCS’ mesh to be pickable (or near pickable) doesn’t seem to work.
Checking for an intersection between the position clicked on the screen and the PCS would be a bit taxing on resources: we would have to sort the points to get the frontmost one, and we would also have to account for the point size if it’s not 1. Checking against the bounding box is already possible:
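To make the bounding-box option concrete, here is a minimal, engine-free sketch of the check it boils down to: a ray/AABB “slab” test. In Babylon.js you would just call `scene.pick()` against `pcs.mesh` and let the engine test its bounding box; the function name and plain `{x, y, z}` objects below are illustrative, not Babylon API.

```javascript
// Slab test: intersect a ray with an axis-aligned bounding box.
// origin/dir: ray origin and direction; min/max: box corners.
function rayIntersectsAABB(origin, dir, min, max) {
  let tMin = -Infinity, tMax = Infinity;
  for (const axis of ["x", "y", "z"]) {
    if (Math.abs(dir[axis]) < 1e-12) {
      // Ray parallel to this slab: origin must already lie inside it.
      if (origin[axis] < min[axis] || origin[axis] > max[axis]) return false;
    } else {
      let t1 = (min[axis] - origin[axis]) / dir[axis];
      let t2 = (max[axis] - origin[axis]) / dir[axis];
      if (t1 > t2) [t1, t2] = [t2, t1];
      tMin = Math.max(tMin, t1);
      tMax = Math.min(tMax, t2);
      if (tMin > tMax) return false; // slabs don't overlap: no hit
    }
  }
  return tMax >= 0; // intersection lies in front of the ray origin
}
```

This only tells you that the cloud as a whole was clicked, which, as noted below, is not enough when you need the individual point.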
Thanks for the response. Unfortunately, in my use case, picking on the bounding box doesn’t help me. I’m basically making 3D data graphs, and I need per-point information when you click on a point.
I’m not sure I understand why the performance hit would be so bad. Why can’t it reuse the ray casting already used for hit detection with meshes, applied to the geometry of the drawn sprites?
Or are my assumptions about how the PCS is drawn incorrect?
I guess I will need to write my own code to detect clicking on points.
The existing hit-detection code is based on triangles. Here, you’ll need to loop over the points, project them onto the 2D screen and compare the coordinates with the mouse position. If you want the real point, and not one that has the same projection but is further away, you’ll need to either:
- sort the points from front to back (using the z coordinate in camera view space). In this case, as soon as you find a projected point that corresponds to the picked location, you can stop there.
- loop over all the points and obtain all the points whose projection corresponds to the picked location, then keep only the point closest to the view.
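The second option (scan everything, keep the closest) can be sketched engine-free. This assumes the points are already in camera view space and uses a bare pinhole projection (focal length in pixels); in Babylon.js you would use `Vector3.Project` with the scene’s transform matrices instead. The function name, the `opts` parameters and the `pointSize` tolerance are illustrative assumptions, not Babylon API.

```javascript
// Pick the frontmost point whose screen projection lies under the cursor.
// points: array of {x, y, z} in camera view space (z > 0 means in front).
// opts: focal = focal length in px, (cx, cy) = screen centre in px,
//       pointSize = rendered point size in px (hit tolerance).
function pickPoint(points, mouseX, mouseY, opts) {
  const { focal, cx, cy, pointSize } = opts;
  const radius = pointSize / 2;
  let best = null;
  for (const p of points) {
    if (p.z <= 0) continue; // behind the camera
    // Perspective projection into pixel coordinates.
    const sx = cx + (p.x / p.z) * focal;
    const sy = cy - (p.y / p.z) * focal;
    const under = Math.abs(sx - mouseX) <= radius &&
                  Math.abs(sy - mouseY) <= radius;
    if (under && (best === null || p.z < best.z)) best = p; // keep frontmost
  }
  return best; // null if nothing was under the cursor
}
```

For the first option you would instead sort (or pre-sort) the points by `z` ascending and return the first match, which lets you stop early at the cost of the sort.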
I’m not sure which method would be quicker…