Performance in AR using glTF models?

I made a simple AR app that uses hit tests to place a glTF R2D2 model at the hit test result, but I don’t know if it’s the device I’m using (a 2017 LG G6) or if the model is too “heavy” — either way, the performance just seems bad. It’s laggy, and it only gets a little better once the model is finally placed. Maybe I shouldn’t do so many hit tests? I’d appreciate it if someone could test it on their device and let me know what the performance is like.

The app looks for a place to put the robot; once you tap the red sphere, the robot stays at that spot, and from then on tapping the red sphere plays/stops the animation.

https://playground.babylonjs.com/#XWBES1#38

Don’t forget that mobile devices are limited even without entering AR. Now add environment recognition, image processing, constant hit testing AND rendering a mesh, and you have an interesting mixture of performance reducers 🙂

What you can do is stop hit testing once the mesh is placed, and only start hit testing again if the user decides to remove the mesh from the scene. Or simplify the mesh to fit the needs of the device if the FPS is low.
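
For example, a minimal sketch of the first idea (assuming this runs inside an async `createScene`; the `robot` mesh and the place/remove handlers are hypothetical stand-ins for whatever your scene uses):

```javascript
const xr = await scene.createDefaultXRExperienceAsync({
    uiOptions: { sessionMode: "immersive-ar" },
});
const fm = xr.baseExperience.featuresManager;
const hitTest = fm.enableFeature(BABYLON.WebXRHitTest, "latest");

// decompose() only fills the rotation if a quaternion instance exists
robot.rotationQuaternion = robot.rotationQuaternion || new BABYLON.Quaternion();

let placed = false;
hitTest.onHitTestResultObservable.add((results) => {
    if (!placed && results.length) {
        // follow the latest hit pose with the robot (or a marker)
        results[0].transformationMatrix.decompose(
            undefined,
            robot.rotationQuaternion,
            robot.position
        );
    }
});

// user confirmed the placement - detach the feature so no more
// hit-test requests are sent to the system
const placeRobot = () => {
    placed = true;
    fm.detachFeature(BABYLON.WebXRFeatureName.HIT_TEST);
};

// user removed the robot - re-attach to resume hit testing
const removeRobot = () => {
    placed = false;
    fm.attachFeature(BABYLON.WebXRFeatureName.HIT_TEST);
};
```

Detaching the feature (instead of just ignoring its results) means the system stops computing hit tests altogether, which is where the actual savings come from.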

Oh, and I see this very often, which always makes me wonder if we have a mistake in our demos or explanations: unless you changed the default behavior (which you didn’t in your scene), there is no reason to pick again when a pointer event is triggered. You already have a pick result in the pointerInfo object received by the observable - pointerInfo.pickInfo.
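
In other words, something like this sketch (`sphere` stands for your red sphere and `toggleAnimation` for whatever you do on tap):

```javascript
scene.onPointerObservable.add((pointerInfo) => {
    if (pointerInfo.type === BABYLON.PointerEventTypes.POINTERDOWN) {
        // the scene has already picked for you - no extra scene.pick() call needed
        const pickInfo = pointerInfo.pickInfo;
        if (pickInfo && pickInfo.hit && pickInfo.pickedMesh === sphere) {
            toggleAnimation();
        }
    }
});
```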