To discuss: is it possible to do progressive loading of quantized models?

I tried exporting quantized models from Blender via glTF, and it got me thinking.
The vertices of a model are just a table of numbers, which we can picture in binary like this:

010010101010110101101010
100110011010101101010100

Quantization truncates these values to shorter ones. In other words, it keeps some of the columns of this table and removes the rest.
In theory, if I'm right, we could split these columns into several files along the same cut points the quantization process uses. Something like this (see the sketch after the list):
“my-model.quant__0-3.bin” — quantized to 3 bits;
“my-model.quant__4-6.bin” — quantized to 6 bits (but without the first 3);
“my-model.quant__7-9.bin” — quantized to 9 bits (but without the first 6).
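
A minimal sketch of the split I have in mind (TypeScript; everything here is hypothetical, not any real glTF tooling API, and it assumes quantization is plain truncation of low-order bits):

```ts
// Hypothetical sketch: split 16-bit quantized vertex components into
// bit-plane chunks that could be shipped as separate files.
// Assumes quantization is plain truncation of low-order bits.

// Example quantized data (16-bit components, values are arbitrary).
const quantizedPositions = new Uint16Array([0x4ab5, 0x99ad, 0x1234]);

/** Extract `width` bits starting `offset` bits below the MSB of a 16-bit value. */
function extractBits(value: number, offset: number, width: number): number {
  return (value >> (16 - offset - width)) & ((1 << width) - 1);
}

/** Split an array of 16-bit values into per-chunk bit groups. */
function splitBitPlanes(values: Uint16Array, widths: number[]): Uint16Array[] {
  const chunks = widths.map(() => new Uint16Array(values.length));
  let offset = 0;
  widths.forEach((width, i) => {
    for (let j = 0; j < values.length; j++) {
      chunks[i][j] = extractBits(values[j], offset, width);
    }
    offset += width;
  });
  return chunks;
}

// Three files: top 3 bits, next 3 bits, next 3 bits.
const [coarse, mid, fine] = splitBitPlanes(quantizedPositions, [3, 3, 3]);
```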

Then we could load the additional parts to improve the quality of an already-loaded model, like LOD loading but without duplication: each new file only refines what is already there.
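
Continuing the sketch above, reassembly on the client could just OR the chunks back together as each file arrives (again hypothetical, under the same truncation assumption):

```ts
// Continues the sketch above: progressively refine already-loaded
// vertex data by OR-ing in finer bit planes as each extra file arrives.

/** Merge a newly loaded bit-plane chunk into the working values. */
function refineInPlace(
  values: Uint16Array, // current (coarse) reconstruction
  chunk: Uint16Array,  // newly downloaded bit plane
  offset: number,      // bits already consumed by earlier chunks
  width: number        // bits carried by this chunk
): void {
  const shift = 16 - offset - width;
  for (let i = 0; i < values.length; i++) {
    values[i] |= chunk[i] << shift;
  }
}

// Start from the coarse file, then refine as the next files arrive.
const working = new Uint16Array(quantizedPositions.length);
refineInPlace(working, coarse, 0, 3); // "my-model.quant__0-3.bin"
refineInPlace(working, mid, 3, 3);    // "my-model.quant__4-6.bin"
refineInPlace(working, fine, 6, 3);   // "my-model.quant__7-9.bin"
// `working` now matches the top 9 bits of `quantizedPositions`.
```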

But I'm not sure how quantization works under the hood. If it builds an internal lookup table with data normalization, then this idea is a piece of shit :smiley:
In glTF/README.md at main · KhronosGroup/glTF · GitHub I can see that quantization may use normalization.
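
For reference, the decode the core glTF 2.0 spec defines for normalized accessors is a fixed per-component formula, not a per-file lookup table (the function names here are just my illustration):

```ts
// Dequantization of normalized accessors as defined by the glTF 2.0 spec:
// a fixed formula per component type, not a lookup table.
function dequantizeUnsignedShortNormalized(c: number): number {
  return c / 65535; // UNSIGNED_SHORT, normalized
}
function dequantizeShortNormalized(c: number): number {
  return Math.max(c / 32767, -1.0); // SHORT, normalized
}
```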

@bghgary, what do you think?


I have never heard of quantization settings in Blender, so I can't comment on that part.

This spec is about reducing the size of the data by quantizing it. It doesn't have any notion of progressive loading or incrementally increasing quality (like a progressive JPEG). You can perhaps combine this extension with the MSFT_lod extension to do something meaningful, but that will be discrete LODs.
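
For illustration, a rough sketch of what that node setup could look like with MSFT_lod (node/mesh indices and screen-coverage thresholds are made up):

```ts
// Minimal glTF "nodes" sketch combining quantized meshes with MSFT_lod.
// Node/mesh indices and screen-coverage thresholds are made up.
const nodes = [
  {
    name: "Model_LOD0",
    mesh: 0, // highest-detail (e.g. least-quantized) mesh
    extensions: {
      MSFT_lod: { ids: [1, 2] } // node indices of the lower-detail LODs
    },
    extras: {
      MSFT_screencoverage: [0.5, 0.2, 0.01]
    }
  },
  { name: "Model_LOD1", mesh: 1 },
  { name: "Model_LOD2", mesh: 2 }
];
```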

I’ve seen research papers/projects that do this for meshes, but I’m not aware of any glTF extensions that do this. If you are interested in pursuing something like this, it’s probably best to discuss on the glTF forums or GitHub.
