Hi everyone,
I just want to solve one problem.
Loading a 3D model (or anything else) is fast for me, but sometimes it feels terrible because it runs very slowly in someone else's non-local environment, for example on older mobile devices.
My environment:
Intel Core i5-11600KF (11th gen)
NVIDIA GeForce RTX 2060
1 Gbps network
ES5 + webpack, npm build, tested in VS Code (Live Server)
Other environments (examples): Samsung Galaxy Note9 or iPhone 11
The usual recommendation for fast loading is: glb → Draco glTF for the model, and texture → KTX2.
However, the tests I ran gave the opposite result.
Model and texture sizes:
glb: 241 MB
Draco glTF: 70 MB
PNG: 11 MB
KTX2: 1 MB
I checked the loading in DevTools (Network tab).
Columns: num / model + texture / loading (model download + PNG/KTX2 download) / memory.
1: Assuming a fast network, the only delay was the download itself.
2: Similar to result 1.
3: After the download finished, there was a very long delay.
4: Similar to result 3.
So I think 3 and 4 only make sense with a very slow network and a very good computer or device.
In the opposite case, 1 and 2 are better, and the glb or texture should just be reduced in size or split.
Am I doing something wrong or is there something I overlooked?
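For reference, one way to separate download time from decode/parse time (a rough sketch assuming the @babylonjs npm packages; the URL and file name are placeholders): fetch the bytes first, then hand them to the loader, so the two phases are timed independently.

```ts
import { Scene } from "@babylonjs/core/scene";
import { SceneLoader } from "@babylonjs/core/Loading/sceneLoader";
import "@babylonjs/loaders/glTF"; // registers the .gltf/.glb loader

// Hypothetical helper: times download and decode/parse separately.
async function timedLoad(scene: Scene, url: string): Promise<void> {
  const t0 = performance.now();
  const blob = await (await fetch(url)).blob(); // network only
  const t1 = performance.now();

  // Loading from a File object skips the network, so this measures decode + parse only.
  const file = new File([blob], "model.glb"); // placeholder name; the extension picks the loader
  await SceneLoader.ImportMeshAsync("", "", file, scene);
  const t2 = performance.now();

  console.log(`download: ${(t1 - t0).toFixed(0)} ms, decode/parse: ${(t2 - t1).toFixed(0)} ms`);
}
```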
Hi~
Draco is a way to reduce file size, like zip.
When you load a glTF: the program downloads a big file → Babylon parses it and creates TransformNodes, etc.
When you load a Draco glTF: the program downloads a small file → Babylon decodes it into glTF → Babylon parses it and creates TransformNodes, etc.
If your program runs on an old machine, the decoding time may cost more than the download time Draco saves.
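For reference, a minimal sketch of that extra decode step in Babylon.js (not from the original posts; the decoder URLs are placeholders for wherever you host the Draco files): the decoder itself has to be fetched and run before any geometry appears, which is the cost being described.

```ts
import { DracoCompression } from "@babylonjs/core/Meshes/Compression/dracoCompression";
import { SceneLoader } from "@babylonjs/core/Loading/sceneLoader";
import "@babylonjs/loaders/glTF";

// Point Babylon at a self-hosted copy of the Draco decoder (URLs are placeholders).
// This download + WASM startup + per-mesh decode is the overhead added on slow devices.
DracoCompression.Configuration = {
  decoder: {
    wasmUrl: "/libs/draco_wasm_wrapper_gltf.js",
    wasmBinaryUrl: "/libs/draco_decoder_gltf.wasm",
    fallbackUrl: "/libs/draco_decoder_gltf.js",
  },
};

// Loading itself is unchanged; the decode happens transparently (in a worker when available):
// await SceneLoader.ImportMeshAsync("", "/assets/", "model-draco.glb", scene);
```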
Hi @Moriy,
thanks for the quick reply.
As you say, the reduction in size and the additional computation are inversely proportional: the smaller the file, the more work is needed to decode it.
But what I'm after is a loading method that is both more efficient and faster.
I assumed the uncompressed format trades a larger download for fast processing, and the compressed format the other way around.
But seeing that decoding the compressed form was slow even in a decent environment, I was wondering whether this was the wrong approach.
Your model is too large. I don't know why you need so many vertices. Let me take a wild guess: it doesn't use instances. I think the most feasible solution is to optimize your model. Replace meshes with instances whenever possible (a quick sketch follows below). Another question: do you really need to load all assets at once?
This topic should be helpful.
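A minimal sketch of the instances suggestion (mesh names and counts are made up): an instance reuses the source mesh's vertex data on the GPU, while a clone is still a separate mesh with its own draw call.

```ts
import { Scene } from "@babylonjs/core/scene";
import { MeshBuilder } from "@babylonjs/core/Meshes/meshBuilder";

function buildForest(scene: Scene): void {
  // One real mesh owns the geometry...
  const tree = MeshBuilder.CreateCylinder("tree", { height: 2 }, scene);

  // ...and instances reuse it: same vertex buffers and material, rendered with
  // hardware instancing, so hundreds of copies add little CPU/GPU overhead.
  for (let i = 1; i < 500; i++) {
    const copy = tree.createInstance("tree_" + i);
    copy.position.x = (i % 25) * 3;
    copy.position.z = Math.floor(i / 25) * 3;
  }

  // A clone also shares vertex data but is a separate mesh with its own draw call,
  // so prefer createInstance when the material does not need to differ per copy:
  // const separate = tree.clone("tree_clone");
}
```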
Thanks for the reply.
The test environment was set up as an extreme case; the real assets are between 1 MB and 20 MB.
We also already use instances and clones.
I agree with you.
We are considering gradual loading, as suggested.
If there is one concern, it's that it could still give users a bad experience. Since the project mostly involves a fairly wide field-of-view environment, all I can think of right now is to limit the UI at the start to allow progressive loading, or to make sure the first view only looks at the ground.
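A minimal sketch of what that staged loading could look like (the file names and the near/far split are assumptions, not from the project): load only what the first, ground-facing view needs before enabling the UI, then stream the rest in the background.

```ts
import { Scene } from "@babylonjs/core/scene";
import { SceneLoader } from "@babylonjs/core/Loading/sceneLoader";
import "@babylonjs/loaders/glTF";

// Hypothetical grouping of assets by what the initial camera view can actually see.
const FIRST_VIEW = ["ground.glb", "near_props.glb"];
const DEFERRED = ["far_buildings.glb", "skyline.glb"];

async function progressiveLoad(scene: Scene, rootUrl: string): Promise<void> {
  // Stage 1: block only on the assets the first view needs (UI stays limited until this resolves).
  await Promise.all(FIRST_VIEW.map((f) => SceneLoader.ImportMeshAsync("", rootUrl, f, scene)));

  // Stage 2: everything else streams in after the user can already look around.
  for (const f of DEFERRED) {
    SceneLoader.ImportMeshAsync("", rootUrl, f, scene)
      .catch((err) => console.warn("deferred load failed:", f, err));
  }
}
```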
You might consider trying gltfpack | meshoptimizer (without the meshopt part) instead of Draco for the geometry. Draco decompression vs download is a trade-off between download size and decompression speed, not to mention the size of the Draco decoder itself.
The main difference with gltfpack without meshopt is that it just quantizes the geometry using lower-precision attributes. This does not require an additional decoder library, reduces the download size, and reduces GPU memory usage when loaded.
You can also try gltfpack with meshopt to see how that compares with Draco, but this will have similar issues with having to decode with an external WASM library, etc.
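A hedged sketch of that route (the file names are placeholders, and the gltfpack flags should be checked against its own docs): the quantized-only output uses KHR_mesh_quantization, which the glTF loader reads directly with no extra decoder, while the meshopt-compressed variant needs the external WASM decoder mentioned above.

```ts
// Offline step (shell, shown as comments; verify flags against the gltfpack documentation):
//   gltfpack -i model.gltf -o model.quantized.glb        quantization only (KHR_mesh_quantization)
//   gltfpack -i model.gltf -o model.meshopt.glb -cc      adds EXT_meshopt_compression

import { MeshoptCompression } from "@babylonjs/core/Meshes/Compression/meshoptCompression";
import { SceneLoader } from "@babylonjs/core/Loading/sceneLoader";
import "@babylonjs/loaders/glTF";

// The quantized-only file loads like any other glb, with no decoder library:
// await SceneLoader.ImportMeshAsync("", "/assets/", "model.quantized.glb", scene);

// The meshopt-compressed variant needs the meshopt decoder (URL is a placeholder
// for wherever you host meshopt_decoder.js):
MeshoptCompression.Configuration = {
  decoder: { url: "/libs/meshopt_decoder.js" },
};
// await SceneLoader.ImportMeshAsync("", "/assets/", "model.meshopt.glb", scene);
```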
I take much the same approach as gltfpack with the JSON/text-based Blender exporter. I have seen people take their geometry down to the point where you would need a microscope to pick out individual triangles, then throw a 4K image texture over it. I give the exporter options to limit the number of digits, based on geometry type (position, normals, etc.). GZip also works on text files for transmission reduction.
Artists who post their models obviously want them to look great, but often do not think about where those models are going. This is actually a good thing: it is much harder to inject detail than to remove it. Depending on the format you got the assets in (unless you built them yourself), your upstream options can vary greatly. Upstream is where you can get exponential "true" reduction, not just transmission reduction.
I am partial to the .blend format as the best for interchange, IMO. Yes, it requires Blender (it's free though), but artists have a tendency to ramp up subdivision using mesh modifiers to the point where you can no longer see any benefit, even zoomed in.
What this means is that, without being a Blender "expert", you can just go into the modifiers of each mesh and see if it has a subdivision modifier. If so, keep reducing the Levels Viewport value until you get a result you cannot live with, then add one back. If you get to 0, delete the modifier.
This is an extremely powerful modifier, but it comes with exponential increases in triangle count. Fixing this here, on a mesh-by-mesh basis prior to export, seems preferable to trying to correct it after the fact with a comparatively blunt optimizer program. I have gone from 1+ million triangles down to 100k just from this. Only the artist would notice, maybe.