I have a performance issue. I’m building a simple human anatomy simulation using WebXR. My first version used a single 3D model of about 40MB. It hurt my WebXR load time, and when I ran it on my Oculus Quest, the browser crashed. I then split the model into separate parts (e.g., digestive system, respiratory system, etc.) and loaded all of the models asynchronously using BabylonJS promises. As a result, the WebXR load time is better than the previous version (which used the single 40MB model). But when I run it on my Oculus Quest and enter WebXR mode, it stutters like a laggy game: stuck, but still moving (I hope you all understand what I mean). My question is: how can I improve the load time and performance of my WebXR app? Do I have to compress my 3D models? Or are there other suggestions? Thank you
Note: my WebXR playground is included below, and I put this code live on my site at https://beepower.id:8013/anatomy.
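The parallel loading described above can be sketched as follows. This is a minimal sketch: `loadPart` and the file names are placeholders, and in Babylon.js `loadPart` would typically wrap something like `BABYLON.SceneLoader.ImportMeshAsync("", "/models/", name, scene)`.

```javascript
// Load the separated anatomy parts in parallel. Promise.all starts every
// download at once and resolves only when every part has finished loading,
// so nothing renders half-ready.
async function loadAllParts(loadPart, partNames) {
  // loadPart: (name) => Promise, e.g. a wrapper around
  // BABYLON.SceneLoader.ImportMeshAsync("", "/models/", name, scene)
  return Promise.all(partNames.map(loadPart));
}
```

One caveat: parallel loading improves load time, but it does not reduce the total triangle count the headset has to render each frame, which is the separate stutter problem discussed below.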
When I hear compression, I think of file-size reduction. That is not going to do anything for render speed. The original Quest is pretty low-powered.
From the processor specs I have seen, the triangle count needs to stay under 200k for sure.
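A quick way to sanity-check a scene against that budget is to count triangles from the index buffers: each triangle in an indexed mesh consumes 3 indices. A minimal sketch (the 200k figure is the rule of thumb from above, not an official spec):

```javascript
// Rough triangle-budget check for a low-powered headset like the
// original Quest. 200k is the forum rule of thumb quoted above.
const TRIANGLE_BUDGET = 200000;

// indexBufferLengths: one index-buffer length per mesh;
// each triangle uses 3 indices.
function totalTriangles(indexBufferLengths) {
  return indexBufferLengths.reduce((sum, len) => sum + len / 3, 0);
}

function fitsBudget(indexBufferLengths, budget = TRIANGLE_BUDGET) {
  return totalTriangles(indexBufferLengths) <= budget;
}
```

In Babylon.js you could feed this with each mesh's `getTotalIndices()`, but any source of index counts works.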
You can tinker around the edges, but if you have the .blend or .max files, then reducing the vertex count and regenerating your export file will surely help.
If not, reducing your effective screen resolution through hardware scaling might help.
Not a suggestion regarding load time but a UX suggestion: only allow entering XR once your models have already loaded. Show a loading screen until everything is ready; this way the user knows that something is loading in the background and that they should wait.
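A sketch of that gating pattern: `displayLoadingUI()` and `hideLoadingUI()` are real Babylon.js `Engine` methods, while `loadAssets` and `enableXrButton` are placeholders for your own loading code and UI.

```javascript
// Show a loading screen, wait for all assets, and only then let the
// user enter XR.
async function prepareThenAllowXr(engine, loadAssets, enableXrButton) {
  engine.displayLoadingUI();   // Babylon's built-in loading overlay
  try {
    await loadAssets();        // e.g. Promise.all of ImportMeshAsync calls
  } finally {
    engine.hideLoadingUI();    // always remove the overlay, even on error
  }
  enableXrButton();            // only now offer the "Enter XR" button
}
```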
Thank you, I’ve reduced the resolution and number of polygons and vertices in my model, and that’s enough to optimize my webxr render time and performance.
Yes, thank you for the suggestion. I will add a loading screen to my WebXR app.
I suggested both because I did not know whether you had access to the source files prior to export. Now that I know you do, you might get an even better result by bringing the triangle count down further and NOT doing hardware scaling.
Setting hardware scaling is going to give you a “guaranteed” pixelation / blurriness, like what you get when you blow up now-ancient, pixel-based fonts. This means a single triangle edge, unless it is horizontal or vertical, is going to be twice as jagged.
But WebGL geometry is just like that of modern vector fonts (except fonts are only 2D), so when you print text on a 600 dpi printer, it looks really clear. Setting hardware scaling to 2 is like printing at 300 dpi on a 600 dpi printer: okay, but not eye-popping in comparison.
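The dpi analogy in numbers: in Babylon.js, `engine.setHardwareScalingLevel(n)` divides the render resolution by `n` (that method is the real API; the helper below only reproduces the arithmetic, as an assumption-free illustration).

```javascript
// Effective render size under hardware scaling: the canvas size divided
// by the scaling level, then stretched back up by the GPU. That stretch
// is where the pixelation / jaggedness comes from.
function renderedSize(canvasWidth, canvasHeight, scalingLevel) {
  return {
    width: Math.round(canvasWidth / scalingLevel),
    height: Math.round(canvasHeight / scalingLevel),
  };
}
// e.g. engine.setHardwareScalingLevel(2) on a 1920x1080 canvas
// renders at 960x540.
```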
It also helps that you are without doubt using textures. This allows you to get away with less geometry.
In Blender terms, to reduce geometry, you would first just turn off or turn down subdivision modifiers. After that, you can do limited dissolves, which remove vertices up to a certain angle of difference, which you can increase or decrease. Delimiting by UV is recommended.
Dissolves are done in edit mode and operate on the current selection of vertices, so if the result still looks good everywhere except in certain areas, remove those vertices from the selection before doing the dissolve. Your mileage may vary if you are not using Blender.