Issues with GLB Model Optimization and Export in Babylon.js

I am developing a web-based 3D editor based on Babylon.js and have encountered a few issues:

Step 1:
I provide an import button, which imports the GLB model and performs optimization and compression using gltf-transform. Below, I will share the specific code.

Step 2:
I modify the models in the scene, such as their position and size. This is the current functionality, and in the future, there will also be material modifications, etc.

Step 3:
I provide a save button. After clicking “Save,” it exports a new GLB using GLTF2Export.GLBAsync.

The steps above outline the process. Below are the issues I am encountering:

  1. The model I import is 65.8 MB.
  2. After optimization on import, it is 10 MB.
  3. After saving with GLTF2Export.GLBAsync, it is 180 MB.
    (I understand why it becomes so large, as I have seen similar cases on other forums, so I decided to optimize it again after export.)
  4. After re-optimizing the exported model, it is 6 MB.

Question 1:
Why is the re-optimized model from step 4 smaller than the optimized model from step 2? I adjusted the parameters in gltf-transform, but I could not get the two sizes to match.

This causes problems when I re-import the model from step 4 back into Babylon.js, because content is lost. It undergoes another round of optimization, and the file gets smaller again after saving, which is not what I want.

I would expect the sizes in step 2 and step 4 to be the same, as I didn't make any changes to the model after importing. Could you please help me identify where I might have gone wrong? I will provide the code below.

Optimization Code (gltf-transform) - Called During Import and Save:

import { PropertyType, WebIO } from '@gltf-transform/core'
import { KHRDracoMeshCompression } from '@gltf-transform/extensions'
import { dedup, flatten, instance, join, palette, prune, draco, resample, simplify, sparse, textureCompress, weld, inspect, meshopt } from '@gltf-transform/functions'
import { MeshoptSimplifier } from 'meshoptimizer'
import { nanoid } from 'nanoid'

export const glTFTransformDraco = async (glbBlob: Blob): Promise<Uint8Array> => {
    const glbArrayBuffer = await glbBlob.arrayBuffer()

    const glbUint8Array = new Uint8Array(glbArrayBuffer)

    const io = new WebIO().registerExtensions([KHRDracoMeshCompression]).registerDependencies({
        'draco3d.encoder': await new window.DracoEncoderModule(),
        'draco3d.decoder': await new window.DracoDecoderModule()
    })

    const document = await io.readBinary(glbUint8Array)

    await document.transform(

        dedup({ propertyTypes: [PropertyType.MESH] }), // merge duplicate meshes (lossless)

        instance({ min: 5 }), // create GPU instances for meshes reused 5+ times

        palette({ min: 5 }), // merge compatible materials into shared palette textures

        flatten(), // flatten the node hierarchy

        // join({ cleanup: true }),

        weld({}), // index geometry and merge duplicate vertices

        simplify({ simplifier: MeshoptSimplifier, ratio: 0.75, error: 0.001 }), // reduce triangle count (lossy)

        resample(), // resample animation keyframes

        prune({
            propertyTypes: [PropertyType.MATERIAL, PropertyType.NODE],
            keepExtras: true
        }), // remove unused materials and nodes

        sparse({ ratio: 1 / 10 }), // use sparse storage for mostly-zero accessors

        textureCompress({ targetFormat: 'webp', resize: [1024, 1024] }), // convert textures to WebP, max 1024 px (lossy)

        draco() // Draco-compress mesh geometry (lossy quantization)
    )

    console.log(inspect(document))

    const file = await io.writeBinary(document)

    return file
}

Save Button Code (Export using GLTF2Export.GLBAsync):

const options = {
    shouldExportNode: function (node: any) {
        return !node.name.includes('ShjEditor') && !isCamera(node) && !isLight(node)
    }
}
const glb = await GLTF2Export.GLBAsync(editor.scene, 'test', options)

// (step 4) re-optimize the exported GLB
const reOptimized = await glTFTransformDraco(glb.glTFFiles['test.glb'] as Blob)

By default, GLTF2Export.GLBAsync exports the GLB with lossless PNG textures. That is why the size is so big.
In your case it is better to save the file you get from io.writeBinary(document), i.e. directly from GLTF-Transform.
I also think you could perform all of the optimization only once, when saving. Or optimize the geometry first (this can be done in a worker as well) and then optimize textures and apply compression (the lossy operations) only when saving.
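
For example, a minimal sketch of saving the io.writeBinary() result straight from the browser (assuming glbBlob is the Blob you already get from GLTF2Export; downloadGlb is just an illustrative helper, not an existing API):

const downloadGlb = (data: Uint8Array, fileName: string): void => {
    // Wrap the bytes returned by io.writeBinary() in a Blob and trigger a browser download.
    const blob = new Blob([data], { type: 'model/gltf-binary' })
    const url = URL.createObjectURL(blob)

    const link = document.createElement('a')
    link.href = url
    link.download = fileName
    link.click()

    URL.revokeObjectURL(url)
}

// e.g. after optimizing the Blob obtained from GLTF2Export:
const optimized = await glTFTransformDraco(glbBlob)
downloadGlb(optimized, 'scene.glb')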

I wonder how you apply Draco compression. Could you share some insights? :slight_smile:

Here is the example of the tool with similar optimization functionality - https://glb.babylonpress.org/

Does your suggestion imply that I should only optimize the model once? In my tool, users can upload models, make edits, and then save their progress. When they reopen the tool, they should be able to continue editing and save their progress again. This is similar to working on a design in Photoshop, where users save their progress multiple times. Therefore, multiple rounds of compression are inevitable.

Additionally, you mentioned saving the file directly using GLTF-Transform. How would that work? In Babylon.js, I could only find GLTF2Export, which accepts a scene parameter. You can refer to the code I posted above for context.

Thank you for your help!

If you do this with lossy textures (like browser WebP), the result will be more and more blurry textures…
One should be careful with geometry optimization too. Any losses or simplification will travel further down the processing chain, and if they happen repeatedly, the model will be broken at some stage.

In short, pass Uint8Array buffers (GLBs) between Babylon and GLTF-Transform, roughly as in the sketch below. Let me know if you need more hints :slight_smile:
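
A rough sketch of that round trip, assuming the glTFTransformDraco function from above is in scope and the glTF loader and Draco decoder are available in Babylon.js; loading the result back through a blob URL is just one possible pattern, and the function name is only illustrative:

import { Scene, SceneLoader } from '@babylonjs/core'
import '@babylonjs/loaders/glTF' // registers the .glb/.gltf loader plugin
import { GLTF2Export } from '@babylonjs/serializers'

const exportOptimizeReload = async (scene: Scene): Promise<void> => {
    // 1. Export the current scene to a GLB Blob with Babylon.js.
    const exported = await GLTF2Export.GLBAsync(scene, 'test')
    const glbBlob = exported.glTFFiles['test.glb'] as Blob

    // 2. Optimize it with GLTF-Transform; the result is a Uint8Array (a GLB).
    const optimized = await glTFTransformDraco(glbBlob)

    // 3. Load the optimized GLB back into the scene through a blob URL.
    const url = URL.createObjectURL(new Blob([optimized], { type: 'model/gltf-binary' }))
    await SceneLoader.AppendAsync('', url, scene, undefined, '.glb')
    URL.revokeObjectURL(url)
}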

I understand your point, but after multiple tests, I still cannot achieve the same result as the initial optimized file after saving, even when tweaking the parameters in gltf-transform. For example, when saving, I disabled WebP compression, yet the saved file is either larger or smaller than the imported and optimized version. I believe the sizes should match.

Could you please review the gltf-transform code I provided above? I call it during both the import and save processes. Based on your feedback, I created a separate version for the save process with some parameters omitted, but I still couldn’t achieve the desired result.

How should I handle the saving process to meet my expectations?

GLTF-Transform does indeed accept Uint8Array inputs, but currently the only way I can get the GLB data out of Babylon.js is through GLTF2Export.GLBAsync, correct? Then I pass this data to GLTF-Transform. Here's how I implemented it:

const glb = await GLTF2Export.GLBAsync(editor.scene, 'test', options);

// Call my custom GLTF-Transform function
const optimized = await glTFTransformDraco(glb.glTFFiles['test.glb'] as Blob);

// -------------------------------------------------
// glTFTransformDraco function
export const glTFTransformDraco = async (glbBlob: Blob): Promise<Uint8Array> => {
    const glbArrayBuffer = await glbBlob.arrayBuffer()

    const glbUint8Array = new Uint8Array(glbArrayBuffer)

    const io = new WebIO().registerExtensions([KHRDracoMeshCompression]).registerDependencies({
        'draco3d.encoder': await new window.DracoEncoderModule(),
        'draco3d.decoder': await new window.DracoDecoderModule()
    })

    const document = await io.readBinary(glbUint8Array)

    await document.transform(

        dedup({ propertyTypes: [PropertyType.MESH] }),

        instance({ min: 5 }),

        palette({ min: 5 }),

        flatten(),

        // join({ cleanup: true }),

        weld({}),

        simplify({ simplifier: MeshoptSimplifier, ratio: 0.75, error: 0.001 }),

        resample(),

        prune({
            propertyTypes: [PropertyType.MATERIAL, PropertyType.NODE],
            keepExtras: true
        }),

        sparse({ ratio: 1 / 10 }),

        textureCompress({ targetFormat: 'webp', resize: [1024, 1024] }),

        draco()
    )

    console.log(inspect(document))

    const file = await io.writeBinary(document)

    return file
}

You are performing a lot of operations that alter the geometry and, in some cases, the node hierarchy.
If you apply these optimizations to a GLB file several times, you may break the geometry completely.
You need to store a “clean” GLB version somewhere and apply the lossy optimization steps only during the file export (i.e. before the final usage), for example as sketched below.
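
As an illustration only (the split and the function names here are made up, not a fixed recipe): keep the clean imported GLB, run only the housekeeping steps that are safe to repeat while editing, and run the lossy steps a single time during the final export. The WebIO writing the result still needs the Draco extension and encoder registered, as in your glTFTransformDraco function.

import { Document, PropertyType } from '@gltf-transform/core'
import { dedup, draco, prune, simplify, textureCompress, weld } from '@gltf-transform/functions'
import { MeshoptSimplifier } from 'meshoptimizer'

// Safe to run on every save: lossless housekeeping only.
export const editingPass = async (document: Document): Promise<void> => {
    await document.transform(
        dedup({ propertyTypes: [PropertyType.MESH] }),
        prune({ keepExtras: true })
    )
}

// Run once, on the final export: the lossy steps (simplification, WebP, Draco).
export const finalExportPass = async (document: Document): Promise<void> => {
    await document.transform(
        weld({}),
        simplify({ simplifier: MeshoptSimplifier, ratio: 0.75, error: 0.001 }),
        textureCompress({ targetFormat: 'webp', resize: [1024, 1024] }),
        draco()
    )
}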


Sorry for the late reply. I think I may have found a solution. Instead of saving the GLB file during the save process, I now serialize the scene (based on the Babylon.js Editor source code) and store the data separately. Would this be a good approach?
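
Roughly what I have in mind, as a sketch (assuming @babylonjs/core; where the JSON string is stored, and error handling, are left out):

import { Engine, Scene, SceneLoader, SceneSerializer } from '@babylonjs/core'

// Save: serialize the live scene (node transforms, materials, etc.) to a JSON string
// and store that, instead of exporting a GLB on every save.
const saveProgress = (scene: Scene): string => {
    const serialized = SceneSerializer.Serialize(scene)
    return JSON.stringify(serialized)
}

// Restore: rebuild the scene from the stored JSON via the "data:" string pattern.
const restoreProgress = (savedJson: string, engine: Engine): Promise<Scene> => {
    return SceneLoader.LoadAsync('', 'data:' + savedJson, engine)
}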

Sure, this is a valid way.

@labris Excuse me, can Draco compression be applied to a model multiple times?

Be aware that Draco compression is lossy: repeatedly compressing and decompressing a model in a pipeline will lose precision, so compression should generally be the last stage of an art workflow, and uncompressed original files should be kept.

Thank you, I understand now
