16-bit vertex data to GPU (quantization?)


Draco and maybe other compression schemes are supported, but is there a way to actually send 16-bit vertex data all the way to the GPU as well? Or does Babylon.js always convert to 32-bit first?

You can use 16-bit data, even 8-bit.
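As a minimal sketch (plain JavaScript, no Babylon.js APIs) of how you might prepare such data, here is one way to quantize float positions into a normalized `Int16Array`. The function name and the scale/offset scheme are my own illustration, not a Babylon.js API: the idea is to map the bounding range to [-1, 1] and dequantize on the GPU side with a per-mesh center/halfRange you pass yourself (e.g. via the world matrix or a uniform).

```javascript
// Sketch: quantize float positions into a normalized Int16Array.
// center/halfRange must be applied later to dequantize:
//   pos = q / 32767 * halfRange + center
function quantizePositions(positions) {
  // Find the value range so everything maps into [-1, 1].
  let min = Infinity, max = -Infinity;
  for (const v of positions) {
    if (v < min) min = v;
    if (v > max) max = v;
  }
  const center = (min + max) / 2;
  const halfRange = (max - min) / 2 || 1; // avoid division by zero
  const out = new Int16Array(positions.length);
  for (let i = 0; i < positions.length; i++) {
    out[i] = Math.round(((positions[i] - center) / halfRange) * 32767);
  }
  return { data: out, center, halfRange };
}
```

The `Int16Array` (half the size of a `Float32Array`) is what would be handed to the vertex buffer; the center/halfRange pair is all that is needed to recover the original coordinates.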


However, Babylon.js shaders expect vertex data (position, uv, normals, …) to be floats, so the WebGL layer in the browser will perform some automatic conversions if the inputs are not floats.
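To illustrate what that automatic conversion amounts to, here is a sketch of the two cases for a signed 16-bit attribute, following the OpenGL ES 3.0 conversion rule for normalized signed integers (the function is mine, purely for illustration):

```javascript
// Sketch of the conversion applied when a SHORT attribute feeds a
// float shader input:
//  - non-normalized: the integer is simply cast to float (5 -> 5.0)
//  - normalized: mapped into [-1, 1] (per the ES 3.0 rule c / 32767,
//    clamped at -1)
function shortToFloat(v, normalized) {
  return normalized ? Math.max(v / 32767, -1) : v;
}
```

In both cases the buffer on the GPU stays 16 bits per component; only the value the shader sees is a float.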

Also, indices will be stored as 16-bit values if they are all below 65536, and as 32-bit values otherwise.
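The selection above can be sketched like this (my own helper, not the Babylon.js internal function, but the same decision rule):

```javascript
// Sketch: choose the index buffer element size based on whether any
// index exceeds the 16-bit range.
function makeIndexBuffer(indices) {
  const needs32Bits = indices.some((i) => i > 65535);
  return needs32Bits ? new Uint32Array(indices) : new Uint16Array(indices);
}
```

So a mesh indexing fewer than ~65k vertices pays 2 bytes per index instead of 4.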

Thanks! I will experiment later today. I am still looking for confirmation of this: if my vertex position data is quantized to 16 bits, will Babylon use 2x less GPU memory compared to 32-bit position data?
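The expected saving is easy to check with back-of-the-envelope arithmetic (a position buffer has 3 components per vertex; FLOAT is 4 bytes per component, SHORT is 2):

```javascript
// Back-of-the-envelope: raw position buffer size for N vertices.
function positionBytes(vertexCount, bytesPerComponent) {
  return vertexCount * 3 * bytesPerComponent;
}

// 100k vertices: 1,200,000 bytes as 32-bit floats,
// 600,000 bytes as 16-bit integers, i.e. exactly half.
```

The same factor-of-two applies to any other attribute stored at half the component size, as long as nothing downstream re-expands the buffer.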

I do think so, as the GPU buffers are created by Babylon.js, unless ANGLE creates extra float buffers behind the scenes that it would not create if the positions were floats in the first place (I don't think it does, though)…

Great! This is very useful for optimizing memory usage on mobile.

(I am still struggling a bit to confirm this from the source code…)