How to transfer indices and vertices data to a worker with as few copies as possible

Hello,
I’m currently finalizing the Draco compression feature.
Everything is working fine, except that I’m not very happy with the way I transfer the data from the source to the underlying worker (we use a worker to initialize the WASM Draco encoder and to perform the encoding).
The idea is to transfer all the arrays (indices, positions, uvsX, colors) as views backed by a single buffer, and to list that buffer as Transferable to avoid a copy.
For that purpose I built a small private utility class (still being tested, so not finalized) where I instantiate a new buffer and bind the views onto it.

class VerticesDataTransferable {

    public static from(input: IGetSetVerticesData) {

        const target = new VerticesDataTransferable();

        const indices = input.getIndices();
        const il = indices ? indices.length : 0;

        const positions = input.getVerticesData(VertexBuffer.PositionKind);
        const pl = positions ? positions.length : 0;

        const normals = input.getVerticesData(VertexBuffer.NormalKind);
        const nl = normals ? normals.length : 0;

        const uvs = input.getVerticesData(VertexBuffer.UVKind);
        const uvl = uvs ? uvs.length : 0;

        // Pack every attribute into one ArrayBuffer (4 bytes per element:
        // Uint32 for indices, Float32 for the rest) so that only this single
        // buffer needs to appear in the postMessage transfer list.
        const byteSize = (il + pl + nl + uvl) * 4;
        target.buffer = new ArrayBuffer(byteSize);
        let offsetBytes = 0;
        if (indices) {
            target.indices = new Uint32Array(target.buffer, offsetBytes, il);
            target.indices.set(indices);
            offsetBytes += il * 4;
        }
        if (positions) {
            target.positions = new Float32Array(target.buffer, offsetBytes, pl);
            target.positions.set(positions);
            offsetBytes += pl * 4;
        }
        if (normals) {
            target.normals = new Float32Array(target.buffer, offsetBytes, nl);
            target.normals.set(normals);
            offsetBytes += nl * 4;
        }
        if (uvs) {
            target.uvs = new Float32Array(target.buffer, offsetBytes, uvl);
            target.uvs.set(uvs);
            offsetBytes += uvl * 4;
        }
        target.uvs2 = null;
        target.uvs3 = null;
        target.uvs4 = null;
        target.uvs5 = null;
        target.uvs6 = null;
        target.colors = null;

        return target;
    }

    buffer: ArrayBuffer;
    positions: Nullable<Float32Array>;
    indices: Nullable<Uint32Array>;
    normals: Nullable<Float32Array>;
    uvs: Nullable<Float32Array>;
    uvs2: Nullable<Float32Array>;
    uvs3: Nullable<Float32Array>;
    uvs4: Nullable<Float32Array>;
    uvs5: Nullable<Float32Array>;
    uvs6: Nullable<Float32Array>;
    colors: Nullable<Float32Array>;
}

Then I pass it to the worker using:

const inputCopy = VerticesDataTransferable.from(input);
worker.postMessage({ id: "encodeMesh", verticesData: inputCopy, options: options }, [inputCopy.buffer]);
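As a side note, the effect of listing the buffer in the transfer list can be demonstrated without a worker via structuredClone, which follows the same structured-clone-with-transfer semantics as postMessage (available in modern browsers and Node 17+); the verticesData shape below is just a minimal stand-in for the class above:

```typescript
// A packed buffer with a view over it, as VerticesDataTransferable produces.
const buffer = new ArrayBuffer(8);
const positions = new Float32Array(buffer, 0, 2);
positions.set([1.5, 2.5]);

// Cloning with a transfer list is what postMessage does under the hood:
// the bytes are moved to the receiver, not copied.
const received = structuredClone(
    { verticesData: { buffer, positions } },
    { transfer: [buffer] }
);

console.log(received.verticesData.positions[0]); // 1.5 — same bytes, re-homed
console.log(buffer.byteLength); // 0 — the sender's buffer is now detached
```

The views travel along with the object and end up backed by the transferred buffer on the other side, which is why the worker can read `verticesData.indices`, `verticesData.positions`, etc. directly.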

Nothing like rocket science…
What I’m not happy with is that I have to instantiate this buffer, which doubles the memory footprint… (and usually, if we want to compress a mesh, it’s because it’s already huge).

I guess this buffer already exists somewhere in the underlying memory of the mesh, but I haven’t dug far enough into the source to know for sure.
On the other hand, since we cannot lock the buffer against changes, owning a copy of the data inside the worker is actually a good thing. Additionally, I reuse part of the same buffer for the encoded data.

So maybe using a worker to perform the encoding is not such a good idea?

My idea is to add another parameter telling the codec whether or not to use the worker, so the caller can choose between multi-threading and memory load.

public encodeMeshAsync(input: IGetSetVerticesData, options: IDracoEncoderOptions, avoidWorker: boolean = false): Promise<Nullable<IDracoEncodedPrimitive>> {...}
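A hedged sketch of how such a flag might route the call; the helper names and the Encoded shape below are placeholders for illustration, not the real Babylon.js/Draco API:

```typescript
// Illustrative stand-ins for the two encode paths; the real implementation
// would invoke the Draco encoder, either in-process or inside the worker.
type Encoded = { byteLength: number };

function encodeOnMainThread(positions: Float32Array): Encoded {
    // Reads the caller's array in place: no packing copy, but blocks the thread.
    return { byteLength: positions.byteLength };
}

function encodeViaWorker(positions: Float32Array): Promise<Encoded> {
    // Would pack into a transferable ArrayBuffer and postMessage it;
    // stubbed here so the sketch stays self-contained.
    return Promise.resolve({ byteLength: positions.byteLength });
}

function encodeMeshAsync(
    positions: Float32Array,
    avoidWorker: boolean = false
): Promise<Encoded> {
    return avoidWorker
        ? Promise.resolve(encodeOnMainThread(positions))
        : encodeViaWorker(positions);
}
```

Keeping the return type a Promise in both branches means callers are unaffected by whichever path is chosen.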

Usually, when exporting glTF, we don’t care about multi-threading because it’s often a last-mile action, whereas avoiding the memory copy would leave us the ability to compress bigger meshes…

Any thoughts? Best strategy?


cc @Deltakosh @bghgary @sebavan

I’m not very familiar with workers, but I don’t think there’s a way around this, since if I’m not mistaken, the worker can’t look into the main thread’s memory :thinking:

Having the option of using the worker or not is interesting, since as you’ve already said, not using it would allow compressing a bigger mesh, but using it could be useful if we want to continue rendering on the main thread. Can we start multiple workers in parallel if the user wants to compress multiple meshes?
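For reference, browsers do allow several Worker instances to run in parallel, so a pool is possible; a minimal round-robin dispatch over a fixed pool could be sketched like this (WorkerLike and EncoderPool are illustrative names, not Babylon.js APIs):

```typescript
// Minimal interface a pool member must satisfy (a real Worker qualifies).
interface WorkerLike {
    postMessage(msg: unknown, transfer?: ArrayBuffer[]): void;
}

class EncoderPool {
    private next = 0;
    constructor(private readonly workers: WorkerLike[]) {}

    // Hand the message to the next worker in turn; returns the slot used.
    dispatch(msg: unknown, transfer: ArrayBuffer[] = []): number {
        const slot = this.next;
        this.workers[slot].postMessage(msg, transfer);
        this.next = (this.next + 1) % this.workers.length;
        return slot;
    }
}

// Fake workers that just record what they receive, for demonstration.
const received: unknown[][] = [[], []];
const pool = new EncoderPool(
    received.map((log) => ({ postMessage: (msg: unknown) => { log.push(msg); } }))
);
const slots = ["meshA", "meshB", "meshC"].map((m) => pool.dispatch(m));
console.log(slots); // [ 0, 1, 0 ]
```

Each mesh's packed buffer would still be transferred to exactly one worker, so the peak copy cost stays one buffer per in-flight mesh.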


I would think the copy is fine in most cases here, as the data layout is quite different: in meshes you would have one buffer per attribute, not just one big buffer.

I agree it is a bit of a waste, but I am afraid sharing the buffers could break the rendering side.

@bghgary any thoughts ?


Actually, the Draco compression works well with this copy solution, so I’ll keep it.
Next is to add KHR_draco_mesh_compression, which might be the easy part…
@bghgary, do you want me to make an interim PR for DracoCompression only?

Copying is probably unavoidable here. I can look at it more carefully once your PR is ready.
