I tried to export quantized models from Blender via glTF, and it got me thinking.
The vertices of our model are just a table of numbers, which we can imagine like this:
And quantization truncates these values down to short integers. In other words, it effectively keeps some of the bit columns of this table and removes the others.
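To illustrate what I mean, here is my own rough sketch of it (not code from any exporter), assuming 16-bit normalized signed integers, which is one of the component types KHR_mesh_quantization permits for positions:

```python
import numpy as np

# A minimal sketch of quantization, assuming normalized int16 components.
positions = np.array([0.123456, -0.654321, 0.999999], dtype=np.float32)

# Encode: scale into the int16 range and drop the extra precision.
quantized = np.round(positions * 32767).astype(np.int16)

# Decode: what a loader reconstructs at runtime.
restored = quantized.astype(np.float32) / 32767

# The per-component error is bounded by roughly 0.5 / 32767.
error = float(np.max(np.abs(restored - positions)))
```

So the floats are not remapped through any table here, they are just rescaled and chopped to fewer bits.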
In theory, if I'm right, we could split those columns into several files, the same way the quantization process does. Something like this:
“my-model.quant__0-3.bin” — quantized to 3 bits;
“my-model.quant__4-6.bin” — quantized to 6 bits (but without the first 3);
“my-model.quant__7-9.bin” — quantized to 9 bits (but without the first 6).
Then we could load the additional parts to progressively improve the quality of an already loaded model, like LOD levels, but without duplicating data: each new part only refines what is already there.
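To make the idea concrete, here is a sketch of what I have in mind (the function names and the 3-bit split are my own assumptions, not anything from the glTF spec): split a 9-bit quantized coordinate into three 3-bit chunks, then reassemble however many chunks have been downloaded so far.

```python
CHUNKS = 3
BITS_PER_CHUNK = 3

def split_bits(value):
    """Split a 9-bit quantized value into 3-bit groups, most significant first."""
    parts = []
    for i in range(CHUNKS):
        shift = (CHUNKS - 1 - i) * BITS_PER_CHUNK
        parts.append((value >> shift) & ((1 << BITS_PER_CHUNK) - 1))
    return parts

def merge_bits(parts):
    """Reassemble the chunks loaded so far; missing low-order chunks are
    implicitly zero, so quality improves as more files arrive without
    re-sending the earlier bits."""
    value = 0
    for i, p in enumerate(parts):
        shift = (CHUNKS - 1 - i) * BITS_PER_CHUNK
        value |= p << shift
    return value

v = 0b101_110_011           # a 9-bit quantized coordinate
p = split_bits(v)
coarse = merge_bits(p[:1])  # after loading my-model.quant__0-3.bin
better = merge_bits(p[:2])  # after loading my-model.quant__4-6.bin
exact = merge_bits(p)       # after loading my-model.quant__7-9.bin
```

Each downloaded file just ORs more low-order bits into the values that are already on the GPU, which is the "improvement instead of duplication" I mean.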
But I'm not sure how quantization works under the hood. If it builds an internal lookup table with data normalization, then this whole idea falls apart.
In glTF/README.md at main · KhronosGroup/glTF · GitHub I see that quantization may use normalization.
@bghgary, what do you think?