So I just want to know: which format, GLB or glTF, is better for file size and performance/loading times?
On the one hand, I was reading the docs and thought the GLB format adds file size and is slower:
"Hence, loading glTF files usually requires either separate requests to fetch all binary data, or extra space due to base64-encoding. Base64-encoding requires extra processing to decode and increases the file size (by ~33% for encoded resources). " glTF™ 2.0 Specification
Then i came across a question on three.js where someone said glb is most efficient:
But also i found a study from 2019, where glTF was faster in performance:
So what’s the case now? Which is smaller/bigger and which tends to be faster/slower?
And if it is as written in the docs, that binary is slower and bigger…
what advantages does the GLB format have?
I use embedded JS, not a data file, for geometry, but one thing that should be kept in mind is that although base64 is bigger, it is a text format.
As such it can be GZipped by the server & unpacked by the browser automatically, thus reducing some of the extra file size of text over binary. Said another way, differences in transmission size cannot always be known by directly comparing file sizes.
GZip needs to be enabled for that file extension on the web server, but you might not have control of the web server, or know how. I am thinking it can also be done on the client side with XMLHttpRequest. The Babylon format is basically JSON. Wonder if putting this in the request for .BABYLON files might help in BJS?
const xhr = new XMLHttpRequest();
xhr.open("GET", url, true);
xhr.overrideMimeType("application/json");
xhr.onload = () => { /* JSON.parse(xhr.responseText) */ };
xhr.send();
There isn’t a right answer which one is better. It depends on the situation.
Ways to store glTF on a server
Loose glTF files contain:
The glTF JSON file
Zero or more .bin files (for geometry / animations)
Zero or more image files (for textures)
Self-contained GLB:
The GLB includes everything from the loose glTF files in one file without compression.
GLB with external references:
There are multiple ways to configure this, but one possible usage is to put the geometry with the GLB but not the image files.
glTF with base64-embedded binary:
Avoid this, since it is both bigger and slower to encode/decode. Sorry @JCPalmer
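For concreteness, these layouts differ mainly in how the glTF JSON's `buffers` entry points at the binary payload. A rough sketch (file names, byte lengths, and the data URI below are all made up for illustration):

```javascript
// Illustrative "buffers" entries from a glTF JSON (values are hypothetical).

// 1. Loose glTF: geometry lives in a separate .bin file (extra request).
const looseBuffers = [{ uri: "model.bin", byteLength: 102400 }];

// 2. GLB: no "uri" at all; the binary chunk follows the JSON chunk
//    inside the same .glb container.
const glbBuffers = [{ byteLength: 102400 }];

// 3. Base64-embedded: the payload is inlined as a data URI,
//    inflating the JSON text by ~33%.
const embeddedBuffers = [
  { uri: "data:application/octet-stream;base64,AAAA", byteLength: 102400 },
];
```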
Servers can be configured to compress files that are being served. Compressing files that are already compressed (e.g., .png or .jpg) is typically not a good idea, because compressing these files does little to reduce the payload and decompressing them costs time. It's a lose-lose.
Depending on network conditions, downloading multiple streams in parallel may have faster overall throughput than a single serialized stream. Each network request has overhead. Doing a lot of small requests is usually bad if the payloads are small.
GLB was originally created so that the geometry/animation data and the glTF JSON can be combined into one file, and thus one request. This payload can also be compressed, which saves bandwidth. The image files are often bigger and thus should be served uncompressed and downloaded in parallel. This is my general recommendation given no information about the assets.
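That recommended layout (one GLB for JSON + geometry, textures fetched in parallel) might look something like this sketch; the file names are hypothetical, and the engine-specific loading step is left out:

```javascript
// Fetch one GLB plus its external textures in parallel.
// fetchFn is injectable so the sketch can be tested without a network.
async function loadAsset(baseUrl, fetchFn = fetch) {
  const [glb, ...images] = await Promise.all([
    fetchFn(`${baseUrl}/model.glb`).then((r) => r.arrayBuffer()),
    fetchFn(`${baseUrl}/albedo.jpg`).then((r) => r.blob()),
    fetchFn(`${baseUrl}/normal.jpg`).then((r) => r.blob()),
  ]);
  // Hand glb and images to your engine's loader here.
  return { glb, images };
}
```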
Nowadays, people use GLB as a self-contained single asset for convenience when working with them, but it is probably not the best layout on a server.
At the end of the day, the only way to know for sure is to measure and see which one works best for your scenario.
Thanks for the primer on GLB/glTF. I was just trying to point out the flawed logic of equating file size to transmission size.
I recently went up to 300 Mbps for my internet service. Faster and faster line speeds definitely affect the transmission-size vs. local-processing equation. It is not the developer's line speed that counts, though, but the scene viewer's. A CDN is also valuable for reducing network latency.
One area you still did not touch on is Draco compression. I see that it can be both lossy and lossless. Coming from the embedded JS / .babylon decimal world, having more than 3 decimals for a vertex coordinate is a waste most of the time, especially for XR-bound geometry. XR is meter based, so 3 decimals is millimeter resolution, and 4 is a tenth of a millimeter. Any more and your headset is gonna need a microscope attachment to see a difference.
For lossy, Draco only controls the number of bits, which is not quite as simple as the number of decimals. There is also inflate time for Draco; not sure how that compares to GZip inflating JS / .babylon.
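The bits-vs-decimals relationship is easy to make concrete: quantizing positions to N bits over a 1 m extent gives a step of 1/2^N metres, so ~10 bits already reaches millimetre resolution. This is back-of-envelope arithmetic only, not Draco's exact encoding scheme:

```javascript
// Rough spatial resolution of N-bit position quantization over a given
// extent in metres. Back-of-envelope only; Draco's actual scheme differs.
function quantizationStep(bits, extentMetres = 1) {
  return extentMetres / 2 ** bits;
}

console.log(quantizationStep(10)); // 0.0009765625 m: about 1 mm over a 1 m mesh
console.log(quantizationStep(14)); // 0.00006103515625 m: far below visible in XR
```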
Finally, in my multi-scene architecture, I have a concept called “Read Ahead”, where pieces of scenes not yet in existence are downloaded / loaded from browser cache, with the current scene rendering. This combined with the notion of an asset being able to be created in different scenes changes the equation once again.
I am only applying this to JS files right now, but there is a way to separate loading into memory from creating an actual asset, right?
Draco and MeshOpt both reduce the payload of the geometry/animation, but it shouldn't make a difference between glTF and GLB (ignoring the base64-embedded version).
EDIT: Actually, I'm not sure what the implications of gzipping Draco- or MeshOpt-compressed data are. If the GLB will be served compressed, that might be a consideration.