Pointer interaction performance in XR context

Hi everyone,

I am building an XR app with a couple of home-made meshes that are fairly simple. I want to load the more realistic corresponding mesh on an event that involves the simple mesh, e.g.:
I have a simple mesh that represents a sports car; I select it with my pointer and then I load the more realistic model of the car.
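
Here is roughly what that swap looks like, as a simplified sketch (the loader call and the URL parameters are placeholders for my actual REST API setup):

```ts
import { Scene } from "@babylonjs/core/scene";
import { AbstractMesh } from "@babylonjs/core/Meshes/abstractMesh";
import { PointerEventTypes } from "@babylonjs/core/Events/pointerEvents";
import { SceneLoader } from "@babylonjs/core/Loading/sceneLoader";

// Swap the low-poly placeholder for the detailed model when it is picked.
// "detailedRootUrl" / "detailedFileName" stand in for wherever the realistic
// mesh actually comes from.
function swapOnPick(
    scene: Scene,
    simpleMesh: AbstractMesh,
    detailedRootUrl: string,
    detailedFileName: string
): void {
    scene.onPointerObservable.add(async (pointerInfo) => {
        if (pointerInfo.type !== PointerEventTypes.POINTERPICK) {
            return;
        }
        if (pointerInfo.pickInfo?.pickedMesh !== simpleMesh) {
            return;
        }
        // Load the realistic version, align it with the placeholder,
        // then disable the placeholder.
        const result = await SceneLoader.ImportMeshAsync(
            "",
            detailedRootUrl,
            detailedFileName,
            scene
        );
        result.meshes[0].position.copyFrom(simpleMesh.position);
        simpleMesh.setEnabled(false);
    });
}
```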

Here is the deal: I am running my app on a Meta Quest 3 and I feel like I have performance issues when the pointer raycasts against my simple mesh. I decided to create an invisible bounding box around my simple car mesh so that every primary interaction (grab, move, rotate, scale) is done with the bounding box. I also set the isGrabbable, isPickable […] flags of my simple mesh to false and set the isPickable, isGrabbable […] flags to true on my bounding box. The problem is that I can’t interact with the bounding box. From what I understand, if any of the flags isPickable, isVisible or isRendered is false, then you can’t interact with the mesh using a hand input / gamepad pointer.
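
Here is roughly the setup, as a simplified sketch (isNearGrabbable is my guess at the exact grab flag; I elided the other flags above):

```ts
import { Mesh } from "@babylonjs/core/Meshes/mesh";

// Simplified sketch of what I set up. "carMesh" is my low-poly car,
// "boundingBoxMesh" is the invisible box I created around it.
function routeInteractionsToProxy(carMesh: Mesh, boundingBoxMesh: Mesh): void {
    // Disable interaction on the detailed geometry...
    carMesh.isPickable = false;
    carMesh.isNearGrabbable = false; // assumption: this is the "grabbable" flag I meant

    // ...and enable it on the proxy box instead.
    boundingBoxMesh.isPickable = true;
    boundingBoxMesh.isNearGrabbable = true;

    // This is where it breaks down for me: with the box invisible
    // (isVisible = false), the pointer no longer interacts with it.
    boundingBoxMesh.isVisible = false;
}
```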

My question: is there any way to tell the pointer to only interact with some specific Meshes/AbstractMeshes, even if these meshes are not rendered?

Do you have a Playground you can share?

I can’t quite tell if this is XR specific. @RaananW?


Not directly related to XR, except for the fact that in XR onPointerMove runs on every frame by default (i.e. updating the “mesh under pointer”). If you have a lot of complex meshes in the scene, this might be critical to your performance. Making sure you only run picking with a single hand will help for sure (this is the default behavior).
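
One way to keep that per-frame pick cheap is to narrow what it considers. A sketch using the scene’s pointer-move predicate (if I remember correctly, the XR pointer selection feature also exposes a raySelectionPredicate for the same purpose):

```ts
import { Scene } from "@babylonjs/core/scene";
import { AbstractMesh } from "@babylonjs/core/Meshes/abstractMesh";

// Restrict the per-frame "mesh under pointer" pick to a whitelist of cheap
// proxy meshes, so the expensive geometry is never ray-tested on pointer move.
function restrictPointerMovePicking(scene: Scene, proxies: Set<AbstractMesh>): void {
    scene.pointerMovePredicate = (mesh: AbstractMesh) => proxies.has(mesh);
}
```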

A reproduction would be great; it would help to understand what exactly you are trying to achieve. Just a note: our picking algorithm already takes bounding boxes into account, meaning that if the ray does not collide with the bounding box of a specific mesh, that mesh will not be inspected further.
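
Conceptually, that early-out looks like this (a sketch for illustration, not the actual engine code; Ray.intersectsBoxMinMax is the public helper doing the box test):

```ts
import { Ray } from "@babylonjs/core/Culling/ray";
import { AbstractMesh } from "@babylonjs/core/Meshes/abstractMesh";

// Sketch of the early-out: a mesh whose bounding box the ray misses is
// rejected without any per-triangle work.
function couldRayHitMesh(ray: Ray, mesh: AbstractMesh): boolean {
    const bounds = mesh.getBoundingInfo().boundingBox;
    return ray.intersectsBoxMinMax(bounds.minimumWorld, bounds.maximumWorld);
}
```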

Hi,
Thanks for your answer. I won’t be able to provide a Playground because the meshes I use come from a REST API and it would take a lot of work to provide something close to my context, but here is how I chose to overcome my performance issue (I hope I can share my steps clearly):

  1. I created a box around the simplified version of my mesh. There is a Playground in the documentation that shows how to create a box with the exact same dimensions as an object’s bounding box.
  2. I set this box as the parent of my mesh.
  3. I added a simple material to my box.
  4. I set the alpha value of that material to a very low value (like 0.1).
    Why did I do that?
    The simple version of my mesh has more than 50 000 faces, and the Meta Quest 3 seemed to struggle to check intersections when my hand / controller was hovering over it.
    Doing it this way allows me to interact with the box and never enter the mesh’s bounding box, so the headset never has to do the computations to check whether I am raycasting against my mesh. The whole setup is sketched in code after this list.
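
In code, the whole thing looks roughly like this (a sketch; getHierarchyBoundingVectors and setParent are what I used to size and attach the box):

```ts
import { Scene } from "@babylonjs/core/scene";
import { Mesh } from "@babylonjs/core/Meshes/mesh";
import { MeshBuilder } from "@babylonjs/core/Meshes/meshBuilder";
import { StandardMaterial } from "@babylonjs/core/Materials/standardMaterial";

// Wrap a high-face-count mesh in a nearly invisible pickable box so the
// pointer only ever ray-tests 12 triangles instead of 50 000+.
function wrapInPickProxy(mesh: Mesh, scene: Scene): Mesh {
    // 1. Size a box to the mesh's (hierarchy) bounding box.
    const { min, max } = mesh.getHierarchyBoundingVectors();
    const size = max.subtract(min);
    const box = MeshBuilder.CreateBox(
        "pickProxy",
        { width: size.x, height: size.y, depth: size.z },
        scene
    );
    box.position = min.add(size.scale(0.5)); // center of the bounding box

    // 2. Set the box as the parent (setParent keeps the world transform).
    mesh.setParent(box);

    // 3. + 4. Simple material with a very low alpha, so the box is barely
    // visible but still rendered, and therefore still pickable.
    const proxyMaterial = new StandardMaterial("pickProxyMaterial", scene);
    proxyMaterial.alpha = 0.1;
    box.material = proxyMaterial;

    box.isPickable = true;
    mesh.isPickable = false;
    return box;
}
```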

What do you guys think about my optimisation?

If it fits your use case, this is a perfect solution! The way I see it, any UX solution that works well for the users of your application is a very acceptable solution.
The downside of this kind of implementation is that some objects are not covered so nicely by a box. A convex hull might fit a bit better, but it really depends on the models and your needs.
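
One small tweak worth trying: setting visibility = 0 on the proxy instead of a low alpha. If I remember correctly, the default pick predicate checks the isVisible flag rather than the visibility value, so the box should stay pickable while being skipped for rendering entirely (worth verifying on your setup):

```ts
import { Mesh } from "@babylonjs/core/Meshes/mesh";

// Keep the proxy pickable but never rendered: isVisible stays true (so the
// default pick predicate accepts it), while visibility = 0 removes it from
// rendering. No material or alpha blending needed.
function hideProxyCompletely(proxyBox: Mesh): void {
    proxyBox.isVisible = true;
    proxyBox.visibility = 0;
}
```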
