Serving the Draco decoder locally and hosted (with webpack)

Hi.

As you may know, the KHR_draco_mesh_compression loader is hard-coded to use a ‘default decoder’, and that default decoder is soft-coded to load its files from https://cdn.babylonjs.com, which is unfortunately not always available…

Here are tips to make it available locally and as assets from your webpacked webapp.

Most likely, you already have the Draco decoders installed along with Babylon.js: node_modules/@babylonjs/core/assets/Draco.

The files can be included as assets and installed as the defaults:

import { DracoDecoder } from "@babylonjs/core/Meshes/Compression/dracoDecoder";

const wasmUrl = new URL("@babylonjs/core/assets/Draco/draco_wasm_wrapper_gltf.js", import.meta.url).href;
const wasmBinaryUrl = new URL("@babylonjs/core/assets/Draco/draco_decoder_gltf.wasm", import.meta.url).href;
const fallbackUrl = new URL("@babylonjs/core/assets/Draco/draco_decoder_gltf.js", import.meta.url).href;

DracoDecoder.DefaultConfiguration = { wasmUrl, wasmBinaryUrl, fallbackUrl };
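If your Babylon.js version predates the DracoDecoder class, the same URLs can likely be supplied through the older DracoCompression API instead — a sketch, assuming the classic configuration shape is available in your version:

```javascript
// Older Babylon.js versions configure the decoder via DracoCompression
// instead of DracoDecoder (an assumption -- check your version's API):
import { DracoCompression } from "@babylonjs/core/Meshes/Compression/dracoCompression";

const wasmUrl = new URL("@babylonjs/core/assets/Draco/draco_wasm_wrapper_gltf.js", import.meta.url).href;
const wasmBinaryUrl = new URL("@babylonjs/core/assets/Draco/draco_decoder_gltf.wasm", import.meta.url).href;
const fallbackUrl = new URL("@babylonjs/core/assets/Draco/draco_decoder_gltf.js", import.meta.url).href;

// Same URLs as above, nested under a "decoder" key:
DracoCompression.Configuration = {
    decoder: { wasmUrl, wasmBinaryUrl, fallbackUrl },
};
```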

Webpack will copy the referenced files as asset modules somewhere into publicPath, and webpack-dev-server will serve them locally.

By default, Webpack re-minimizes the *.js files and mangles the filenames with hashes, which doesn’t make much sense for these pre-built decoder files.

To prevent filename mangling, adjust the webpack config:

output: {
    path: path.resolve("./dist"),
    filename: "[name].[contenthash].js",
    assetModuleFilename: "[name][ext]", // <-- keep original filenames of assets
},
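Note that assetModuleFilename applies to every asset in the build. If you still want content hashes on your other assets, the rule can likely be scoped to just the Draco files via a rule-level generator — a sketch, assuming webpack 5 asset modules:

```javascript
// Keep hashed filenames for ordinary assets, and preserve original
// names only for the Draco decoder files (webpack 5 asset modules):
module: {
    rules: [
        {
            test: /draco_.*_gltf/,
            type: "asset/resource",
            generator: {
                filename: "[name][ext]", // <-- original names for these files only
            },
        },
    ],
},
```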

To prevent minification, the built-in minifier should be replaced by an external plugin (which seems like a good idea anyway):

const TerserPlugin = require("terser-webpack-plugin"); // at the top of webpack.config.js

optimization: {
    minimize: isProduction,
    minimizer: [new TerserPlugin({
        exclude: [/draco_.*_gltf/] // <-- don't try to re-minimize the decoders
    })],
},
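An alternative that sidesteps the minifier and hashing entirely is to copy the decoder directory verbatim — a sketch, assuming copy-webpack-plugin is installed and a hypothetical "draco" output folder:

```javascript
// Copy the decoder files as-is, bypassing the asset pipeline
// (a sketch, assuming copy-webpack-plugin):
const CopyPlugin = require("copy-webpack-plugin");

plugins: [
    new CopyPlugin({
        patterns: [
            { from: "node_modules/@babylonjs/core/assets/Draco", to: "draco" },
        ],
    }),
],
```

With this approach the decoder URLs would point at the copied location (e.g. "/draco/draco_decoder_gltf.wasm") instead of using new URL().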

That’s it!

P.S.
For some reason, the draco_wasm_wrapper_gltf.js is requested 4 times every time a compressed file is loaded. But that’s another story.
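If the repeated downloads bother you, the decoder configuration appears to accept a worker-count option — an assumption, so verify the field against your Babylon.js version before relying on it:

```javascript
// Hypothetical: limit the decoder to a single web worker
// (verify that your version's configuration accepts numWorkers):
DracoDecoder.DefaultConfiguration = {
    wasmUrl,
    wasmBinaryUrl,
    fallbackUrl,
    numWorkers: 1,
};
```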

Thanks a lot for sharing this!! @ryantrem / @alexchuber maybe worth adding to the doc?

This is because we create 4 web workers to do the work in parallel.