Meet the GLB Batch Optimizer!
It has even more useful functions than the well-known GLB Optimizer and supports GLB batch processing.
Not fully polished yet, but it seems to be working.
GLB Batch Optimizer is a web application designed to streamline and automate the optimization of multiple 3D model GLB files in batch. Built for users who work with large numbers of 3D assets, such as game developers, 3D artists, and digital content creators, the app provides an intuitive interface for uploading, processing, and downloading optimized GLB files.
Key features include:

- **Batch Upload:** Easily upload multiple GLB files at once.
- **Optimization Settings:** Customize optimization parameters to suit your workflow. Settings are saved to localStorage and restored the next time the app loads.
- **Fast Processing:** Efficiently optimize large numbers of files using advanced algorithms.
- **Download Optimized Files:** Retrieve all optimized assets in a single step.
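The settings persistence described above can be sketched roughly like this. Note that the storage key and the shape of the settings object are assumptions for illustration, not the app's actual schema:

```js
// Hypothetical sketch of persisting optimizer settings.
// The key name and settings shape are assumptions, not the app's schema.
const SETTINGS_KEY = 'glbOptimizerSettings';

// Fall back to an in-memory store when localStorage is unavailable
// (e.g. when running this sketch under Node.js instead of a browser).
const store = globalThis.localStorage ?? (() => {
  const m = new Map();
  return {
    getItem: (k) => (m.has(k) ? m.get(k) : null),
    setItem: (k, v) => m.set(k, String(v)),
  };
})();

function saveSettings(settings) {
  store.setItem(SETTINGS_KEY, JSON.stringify(settings));
}

function loadSettings(defaults) {
  // Merge saved values over defaults so new settings keys still get defaults.
  const raw = store.getItem(SETTINGS_KEY);
  return raw ? { ...defaults, ...JSON.parse(raw) } : { ...defaults };
}
```

Merging over defaults (rather than replacing them) keeps the app working when a new setting is added after a user has already saved an older settings object.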
Optimize multiple 3D model files at once with this powerful batch processing tool!
@labris Within the next two weeks I need a solution to batch-disable the "double sided" material property in GLBs. This is for end users, where a tool like this is required. It'd be a shame to have to create a new tool.
Could this be added as a feature request, please? Alternatively, I'm happy to do a PR.
An additional "nice to have" would be saving the config and settings via a URL param, so they'd be shareable between users. Again, I'm happy to contribute if you're accepting PRs.
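The "settings via URL param" idea could look something like this. The parameter name, the base64 encoding, and the settings shape are all made up for illustration:

```js
// Hypothetical sketch: serialize settings into a shareable URL parameter.
// The "settings" parameter name and base64 encoding are assumptions.
function settingsToUrl(baseUrl, settings) {
  const url = new URL(baseUrl);
  url.searchParams.set('settings', btoa(JSON.stringify(settings)));
  return url.toString();
}

function settingsFromUrl(urlString) {
  const encoded = new URL(urlString).searchParams.get('settings');
  return encoded ? JSON.parse(atob(encoded)) : null;
}
```

A user could then copy the app URL and send it to a colleague, who would get the same optimization settings on load.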
```js
// glTF-Transform style transform: toggle the doubleSided flag on every
// material in the document. With options.cull === true, all materials
// become single-sided (backface culling enabled).
function backfaceCulling(options) {
  return (document) => {
    for (const material of document.getRoot().listMaterials()) {
      material.setDoubleSided(!options.cull);
    }
  };
}
```
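For anyone who wants to see the transform above in action without loading a real model, here is a sketch that exercises it against a stub mimicking the `getRoot().listMaterials()` / `setDoubleSided()` surface it uses (in real code you would pass a `Document` from glTF-Transform instead):

```js
// Stub standing in for a glTF-Transform Document, just to exercise the
// transform's logic. Not the real library API, only the methods used here.
function makeStubDocument(materialCount) {
  const materials = Array.from({ length: materialCount }, () => {
    const mat = { doubleSided: true };
    mat.setDoubleSided = (v) => { mat.doubleSided = v; };
    return mat;
  });
  return { getRoot: () => ({ listMaterials: () => materials }) };
}

function backfaceCulling(options) {
  return (document) => {
    for (const material of document.getRoot().listMaterials()) {
      // cull: true => doubleSided: false (single-sided materials)
      material.setDoubleSided(!options.cull);
    }
  };
}

const doc = makeStubDocument(3);
backfaceCulling({ cull: true })(doc);
```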
@labris in all seriousness, how many PRs / how much effort do you want others to put in? I don't want to interfere too much at this early stage, but we all have urges… improve-base branch
Nice idea. It's pretty damn performant as-is for hundreds of models, though (until someone says otherwise for their single edge case). Personally I think time is best spent on code quality and useful features. The libraries used in the repo look to be great choices, and browsers have come a long way recently, so I haven't seen issues yet. I have optimised photogrammetry and point-cloud models with ease.
If, for example, someone wants to create a model import tool for their game/configurator, they would probably take the batcher as inspiration and implement their own worker. The gltf-transform library probably has features that such a developer would want for their specific case.
What may be a good effort is a batch wrapper plugin that works directly with Babylon.js, which can itself run in a web worker. It would make sense to do that once this repo has matured a bit more.
I like Workers, but I believe that using them in this project at the moment would introduce unnecessary complications. The biggest issue would be available memory when handling large files.
But as single-core performance (IPC × frequency) becomes the bottleneck again, and major vendors switch back to adding more cores, multithreading is a quick and cheap option that scales well on modern processors. And since gltf-transform was designed for Node.js, it shouldn't rely much on main-thread DOM APIs.
Also, it would be much better to multithread within the model: for each mesh, send it to a worker and do weld/reorder/simplify there, then await all tasks on the main thread; or for each texture, send it to a worker and convert it to KTX there.
That's why configurable concurrency is needed: the user knows whether large models are going into the pipeline, so it should be the user's choice to trade memory usage for speed.
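The configurable concurrency described above boils down to a promise pool: run at most N tasks at once, whether each task is a per-mesh worker job or a whole-file optimization. This is a generic sketch of that pattern, not code from the repo:

```js
// Generic promise pool: run async tasks with at most `concurrency`
// in flight at once. Each entry in `tasks` is a function returning a
// Promise (e.g. "optimize this mesh/file in a worker").
async function runWithConcurrency(tasks, concurrency) {
  const results = new Array(tasks.length);
  let next = 0;

  // Each "lane" pulls the next unstarted task until none remain.
  async function lane() {
    while (next < tasks.length) {
      const i = next++; // safe: JS is single-threaded between awaits
      results[i] = await tasks[i]();
    }
  }

  const lanes = Array.from(
    { length: Math.min(concurrency, tasks.length) },
    () => lane()
  );
  await Promise.all(lanes);
  return results;
}
```

Exposing `concurrency` as a user-facing setting lets someone processing a few multi-gigabyte photogrammetry models drop it to 1, while someone batching hundreds of small props can raise it toward their core count.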
In the case of Chrome, there is a 4 GB heap limit per process, but off-heap allocations like ArrayBuffers have a higher limit, and those account for most of the memory consumed by glTF models.
Yeah, the CLI gives the user much better control over concurrency; something like `ls | xargs -n1 -P $(nproc) gltf-transform` would do the trick, but only for advanced users. Also, gltfpack is much faster in this case (and seems to use less memory), but it lacks the flexibility of gltf-transform.
It seems the gltf-transform CLI spawns a process to call native libktx executables, so it's expected to be faster than WASM.