Hello,
I’m currently finalizing the Draco compression feature.
Everything is working fine, except that I’m not very happy with the way I transfer the data from the source to the underlying worker (we use a worker to initialize the WASM Draco encoder and to perform the encoding).
The idea is to transfer all the arrays (indices, positions, uvsX, colors) as views backed by a single buffer, and to use the Transferable nature of that buffer to avoid a copy.
For that purpose I built a small private utility class (still a test here, so not finalized) where I instantiate a new buffer and bind the views onto it.
class VerticesDataTransferable {
    /**
     * Copies every vertex stream of the input into views bound onto a single
     * ArrayBuffer, so only that buffer has to be listed as Transferable.
     */
    public static from(input: IGetSetVerticesData): VerticesDataTransferable {
        const target = new VerticesDataTransferable();

        const indices = input.getIndices();
        const il = indices ? indices.length : 0;
        const positions = input.getVerticesData(VertexBuffer.PositionKind);
        const pl = positions ? positions.length : 0;
        const normals = input.getVerticesData(VertexBuffer.NormalKind);
        const nl = normals ? normals.length : 0;
        const uvs = input.getVerticesData(VertexBuffer.UVKind);
        const uvl = uvs ? uvs.length : 0;

        // Every element (Uint32 or Float32) is 4 bytes wide.
        const byteSize = (il + pl + nl + uvl) * 4;
        target.buffer = new ArrayBuffer(byteSize);

        // Pack the views one after the other on the shared buffer.
        let offsetBytes = 0;
        if (indices) {
            target.indices = new Uint32Array(target.buffer, offsetBytes, il);
            target.indices.set(indices);
            offsetBytes += il * 4;
        }
        if (positions) {
            target.positions = new Float32Array(target.buffer, offsetBytes, pl);
            target.positions.set(positions);
            offsetBytes += pl * 4;
        }
        if (normals) {
            target.normals = new Float32Array(target.buffer, offsetBytes, nl);
            target.normals.set(normals);
            offsetBytes += nl * 4;
        }
        if (uvs) {
            target.uvs = new Float32Array(target.buffer, offsetBytes, uvl);
            target.uvs.set(uvs);
            offsetBytes += uvl * 4;
        }

        // Streams not handled yet in this test version.
        target.uvs2 = null;
        target.uvs3 = null;
        target.uvs4 = null;
        target.uvs5 = null;
        target.uvs6 = null;
        target.colors = null;

        return target;
    }

    buffer: ArrayBuffer;
    positions: Nullable<Float32Array> = null;
    indices: Nullable<Uint32Array> = null;
    normals: Nullable<Float32Array> = null;
    uvs: Nullable<Float32Array> = null;
    uvs2: Nullable<Float32Array>;
    uvs3: Nullable<Float32Array>;
    uvs4: Nullable<Float32Array>;
    uvs5: Nullable<Float32Array>;
    uvs6: Nullable<Float32Array>;
    colors: Nullable<Float32Array>;
}
Then I pass it to the worker using:
const inputCopy = VerticesDataTransferable.from(input);
worker.postMessage({ id: "encodeMesh", verticesData: inputCopy, options: options }, [inputCopy.buffer]);
Not exactly rocket science…
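For reference, on the worker side the typed-array views arrive already bound to the transferred buffer (the views are cloned along with the message while the buffer itself is moved), so the encoder can read them with no extra copy. Here is a minimal sketch of the receiving end, assuming the message shape posted above; the handler and the encodeMesh/encoded names are just placeholders, not the actual codec code:
// Worker side (sketch only).
onmessage = (event: MessageEvent) => {
    const message = event.data;
    if (message.id === "encodeMesh") {
        const data = message.verticesData;
        // data.indices, data.positions, ... already point into the transferred buffer.
        const encoded = encodeMesh(data, message.options); // hypothetical encoder entry point
        // Return the result and transfer its buffer back to the main thread.
        postMessage({ id: "encodeMeshDone", encoded: encoded }, [encoded.buffer]);
    }
};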
What I’m not happy with is that I have to instantiate this buffer, which doubles the memory usage… (and usually, if we want to compress a mesh, it’s because it is already huge).
I guess this buffer already exists somewhere in the underlying memory of the mesh, but I have not dug far enough into the source to know for sure.
On the other hand, inside a worker, owning a copy of the data is quite a good thing, since we cannot lock the buffer against changes. Additionally, I reuse part of the same buffer for the encoded data.
So maybe using a worker to perform the encoding is not such a good idea?
My idea is to add another parameter to tell the codec whether to use the worker or not, so we can choose between multi-threading and memory overhead.
public encodeMeshAsync(input: IGetSetVerticesData, options: IDracoEncoderOptions, avoidWorker: boolean = false): Promise<Nullable<IDracoEncodedPrimitive>> { ... }
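Internally the flag would just select the code path, something like this (only a sketch: _encodeLocally and _postToWorker are hypothetical helpers, not what the draft contains):
public encodeMeshAsync(input: IGetSetVerticesData, options: IDracoEncoderOptions, avoidWorker: boolean = false): Promise<Nullable<IDracoEncodedPrimitive>> {
    if (avoidWorker) {
        // Single-threaded path: encode directly against the mesh data,
        // no transferable snapshot, so no doubling of the memory.
        return this._encodeLocally(input, options);
    }
    // Multi-threaded path: build the transferable snapshot and hand it to the worker.
    const inputCopy = VerticesDataTransferable.from(input);
    return this._postToWorker(inputCopy, options);
}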
Usually, when exporting glTF, we do not care about multi-threading, because it is often a last-mile action, whereas avoiding the memory copy could give us the ability to compress bigger meshes…
Any thoughts? Best strategy?