Meshopt decoder: increase concurrency

Currently, MeshoptCompression uses off-thread decoding, and the worker count is fixed at 1.

Since multithreading is already supported, is there any intent to use navigator.hardwareConcurrency, or navigator.hardwareConcurrency - 1 when available, instead of 1 to get maximum decoding performance?


cc @bghgary and @alexchuber

If we trace the history of this change, it came from here: Fix memory leak in MeshoptCompression by OrigamiDev-Pete · Pull Request #14995 · BabylonJS/Babylon.js · GitHub

Which came from: MeshoptDecoder Memory Leak - Bugs - Babylon.js

Which came from: wasm: expose methods to dispose or recreate wasm instances · Issue #522 · zeux/meshoptimizer · GitHub, which was posted by you, @kzhsw :smiley:

Full circle, I guess. Do you have a suggestion on how to use more threads and not have a memory leak?


For the thread count, either navigator.hardwareConcurrency or Math.max(1, navigator.hardwareConcurrency - 1) works; the latter keeps one thread free for the main content thread.
For cleanup, use something like a queue, or a countdown that waits for all concurrent tasks to complete before terminating all threads.

let ActiveTasks = 0;
let NumberOfWorkers = 0;
let WorkerTimeout: ReturnType<typeof setTimeout> | null = null;

    public async decodeGltfBufferAsync(source: Uint8Array, count: number, stride: number, mode: "ATTRIBUTES" | "TRIANGLES" | "INDICES", filter?: string): Promise<Uint8Array> {
        await this._decoderModulePromise!;
        if (NumberOfWorkers === 0) {
            // Use all reported cores, falling back to 1 when the value is unavailable.
            const workerCount = typeof navigator === "object" && Number.isFinite(navigator.hardwareConcurrency) ? Math.max(1, navigator.hardwareConcurrency) : 1;
            MeshoptDecoder.useWorkers(workerCount);
            NumberOfWorkers = workerCount;
        }
        let result: Uint8Array;
        ActiveTasks++;
        try {
            result = await MeshoptDecoder.decodeGltfBufferAsync(count, stride, source, mode, filter);
        } finally {
            ActiveTasks--;
            // A simple debounce to avoid switching back and forth between workers and no workers while decoding.
            if (WorkerTimeout !== null) {
                clearTimeout(WorkerTimeout);
            }
            WorkerTimeout = setTimeout(() => {
                if (ActiveTasks === 0) {
                    // No tasks in flight; terminate the worker pool to release its memory.
                    MeshoptDecoder.useWorkers(0);
                    NumberOfWorkers = 0;
                } // else another timeout will be scheduled when the pending task completes
                WorkerTimeout = null;
            }, 1000);
        }
        return result;
    }

Any news on this?

Sorry, I saw this at the time but didn’t have a response then.

I think we could also use AutoReleaseWorkerPool to avoid the memory issue.

I would be careful using navigator.hardwareConcurrency directly. We have done this in the past and caused out-of-memory issues because this value is very high on some systems. We can probably factor out this function to get a reasonable value.

Do you want to give this a shot?
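To sketch what that factored-out function could look like (the cap value and the function name here are arbitrary assumptions for illustration, not Babylon.js constants):

```typescript
// Hypothetical helper: clamp the reported core count so machines that
// report many cores don't spawn an excessive number of decoder workers.
function getReasonableWorkerCount(cap = 4): number {
    // Guard against environments without a navigator object (e.g. Node.js).
    const nav = (globalThis as any).navigator;
    const reported = nav && Number.isFinite(nav.hardwareConcurrency) ? nav.hardwareConcurrency : 1;
    // Leave one core for the main thread, then apply the cap.
    return Math.min(cap, Math.max(1, reported - 1));
}
```

This keeps the out-of-memory risk bounded while still scaling up on typical multi-core machines.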

Well, the thing is, meshopt manages its own worker pool, and worker creation and disposal are not exposed.