Hi guys.
I have a project I really want to do. It is NOT commercial at all.
I want to build a scene made out of LEGO-like bricks, and then in VR (on an Oculus Quest 2) I want to stand inside that scene.
LEGO buildings tend to have a lot of bricks, and details like the studs (the round knobs on top) use a lot of geometry compared to simple boxes.
But… people can be clever and hide any geometry that is never visible. Or, instead of hiding it, just don’t generate it at all.
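To illustrate what I mean by "don't generate it at all": since bricks snap to a grid, you can check each brick's neighbours and skip the faces (and studs) that are covered. This is just a plain-JavaScript sketch of the idea; all names here are made up for illustration:

```javascript
// Bricks snapped to a voxel grid, keyed by "x,y,z".
// A face (or stud) only needs geometry when the neighbouring cell is empty.
const occupied = new Set();
const addBrick = (x, y, z) => occupied.add(`${x},${y},${z}`);

// Returns which parts of the brick at (x,y,z) are actually visible.
function visibleParts(x, y, z) {
  const empty = (dx, dy, dz) => !occupied.has(`${x + dx},${y + dy},${z + dz}`);
  return {
    top:    empty(0, 1, 0),   // also controls whether studs are generated
    bottom: empty(0, -1, 0),
    left:   empty(-1, 0, 0),
    right:  empty(1, 0, 0),
    front:  empty(0, 0, 1),
    back:   empty(0, 0, -1),
  };
}

// Two stacked bricks: the touching faces (and the lower brick's studs) are skipped.
addBrick(0, 0, 0);
addBrick(0, 1, 0);
console.log(visibleParts(0, 0, 0).top);    // false → no studs needed here
console.log(visibleParts(0, 1, 0).bottom); // false → bottom face skipped
```

For a real model the savings are big, because most faces in a stacked wall are brick-against-brick.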
In Unity this is doable and gives really nice performance. But Babylon runs on the web (WebGL) with JavaScript. My scene can easily have a couple of million vertices, but it will not need a huge number of draw calls, since I will generate a lot of bricks into one and the same mesh (to reduce draw calls, which you have to do in Unity as well).
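By "generate a lot of bricks in one mesh" I mean concatenating the vertex data and offsetting the indices so N bricks become one mesh and one draw call. I believe Babylon.js can do this for you (`BABYLON.Mesh.MergeMeshes`, or thin instances for repeated geometry), but the core idea in plain arrays looks roughly like this (hypothetical helper, tiny one-triangle "bricks" for brevity):

```javascript
// Merge several geometries into one vertex/index buffer:
// positions are concatenated, indices are offset by the running vertex count.
function mergeGeometries(geoms) {
  const positions = [];
  const indices = [];
  for (const g of geoms) {
    const base = positions.length / 3; // vertex offset for this piece
    positions.push(...g.positions);
    for (const i of g.indices) indices.push(i + base);
  }
  return { positions, indices };
}

// Two "bricks", each reduced to a single triangle here.
const brickA = { positions: [0, 0, 0, 1, 0, 0, 0, 1, 0], indices: [0, 1, 2] };
const brickB = { positions: [2, 0, 0, 3, 0, 0, 2, 1, 0], indices: [0, 1, 2] };
const merged = mergeGeometries([brickA, brickB]);
console.log(merged.indices); // [0, 1, 2, 3, 4, 5]
```

So the vertex count stays the same; only the number of draw calls goes down.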
Are there tricks I could use to display a lot of polygons on, for example, Quest 2 hardware? Maybe there are ways to use less intensive shaders? Things to avoid to optimise speed? All my bricks are static in the scene, so maybe there are tricks to make sure all this geometry stays on the GPU?
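On the "everything is static" point: as far as I know, Babylon.js has freeze calls for exactly this (`mesh.freezeWorldMatrix()`, `material.freeze()`, `scene.freezeActiveMeshes()`) that skip per-frame re-evaluation. Another trick I could do myself is to bake each brick's placement into the vertex data once, so the merged buffer never changes after upload. A plain-JavaScript sketch of that baking step, translation only for simplicity:

```javascript
// Bake a per-brick translation into the vertex positions up front, so the
// merged buffer is final and can stay untouched on the GPU.
function bakeTranslation(positions, [tx, ty, tz]) {
  const out = new Float32Array(positions.length);
  for (let i = 0; i < positions.length; i += 3) {
    out[i]     = positions[i]     + tx;
    out[i + 1] = positions[i + 1] + ty;
    out[i + 2] = positions[i + 2] + tz;
  }
  return out;
}

const unitBrick = [0, 0, 0, 1, 0, 0, 0, 1, 0]; // one triangle of a brick
const placed = bakeTranslation(unitBrick, [4, 0, 2]);
console.log(placed); // Float32Array [4, 0, 2, 5, 0, 2, 4, 1, 2]
```

Would that be the right approach, or does Babylon already keep static buffers on the GPU by itself?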
Kind regards,
Bart
PS. Since this is a non-commercial project (and hey, I’ll be using LEGO brick models, so there is NO way anything commercial could be done with these) I will probably be the only one to run these demos.
In Unity you can also choose to run the app on your PC/GPU and send the frames to the headset over a cable. I guess that performance will be unbeatable compared to running Babylon.js on the Quest 2 hardware itself?