Hello! I'm on an optimization journey for a web game I'm making. There are many steps to getting entities to scale up, and I'm trying to tick the boxes where it will count.
My solution for entities en masse that share the same model but vary in texture (a plate breastplate vs. chain, a polar bear texture vs. a brown bear) was asset container + mesh instances + a single ShaderMaterial + VAT + a contiguous texture atlas via basisu, using a 2D array texture type. I tried atlasing with UV scale and offset and had mixed results, but things are working nicely with RawTexture2DArray.
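For anyone weighing the two approaches, the addressing difference can be sketched in plain JS (function and field names here are illustrative, not engine API):

```javascript
// Atlas addressing: each sub-texture is reached by scaling/offsetting UVs.
// Filtering and mip generation can bleed across cell borders, which is one
// reason UV scale/offset atlasing tends to give mixed results.
function atlasUvTransform(cellIndex, cellsPerRow) {
  const scale = 1 / cellsPerRow;
  return {
    uScale: scale,
    vScale: scale,
    uOffset: (cellIndex % cellsPerRow) * scale,
    vOffset: Math.floor(cellIndex / cellsPerRow) * scale,
  };
}

// 2D-array addressing: UVs stay in 0..1 and the shader samples the slice
// whose index matches the entity's texture, so there is no neighboring
// cell to bleed into.
console.log(atlasUvTransform(5, 4)); // { uScale: 0.25, vScale: 0.25, uOffset: 0.25, vOffset: 0.25 }
```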
I noticed that when I was packing these atlases with raw RGB(A) data, they got pretty big, around 5-10 MB, and all of that gets uploaded to the GPU. The interface around raw textures via RawTexture2DArray doesn't expose compression via configuration, and the raw-texture config that does take compression is keyed to the engine's s3tc capabilities… In my particular case I am transcoding to ASTC.
My question is: are there current plans to provide high-level support for compressed RawTexture2DArray? If not, should there be?
I have something working in WebGL2; I still need to modify the shader code for WebGPU, though that isn't specific to the texture compression here anyway. I'm new to a lot of this, so I'm really just hacking it in where I can.
Storage on disk was reduced significantly when storing raw 2D array basis data rather than RGBA (6 MB down to 150 KB), and when expanded in memory, all the slices totaled around 1.6 MB rather than 6 MB.
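That in-memory saving follows directly from bytes per pixel: uncompressed RGBA8 is 4 bytes/pixel, while 4x4 block formats like ASTC 4x4 or BC7 use 16 bytes per 4x4 block, i.e. 1 byte/pixel. A quick sketch (dimensions are illustrative, not the exact atlas above):

```javascript
// Estimate GPU memory for a texture 2D array: RGBA8 vs a 4x4 block format.
function rgba8Bytes(width, height, slices) {
  return width * height * 4 * slices; // 4 bytes per pixel
}

function block4x4Bytes(width, height, slices) {
  const blocksX = Math.ceil(width / 4);
  const blocksY = Math.ceil(height / 4);
  return blocksX * blocksY * 16 * slices; // 16 bytes per 4x4 block
}

const raw = rgba8Bytes(256, 256, 16);           // 4194304 (~4 MB)
const compressed = block4x4Bytes(256, 256, 16); // 1048576 (~1 MB)
console.log(raw / compressed); // 4
```

That 4x ratio is in line with the roughly 6 MB to 1.6 MB drop reported above (mip levels and padding shift the exact numbers a bit).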
I wanted to share it and see what people thought: 500 entities with different textures, animations, and secondary meshes, all sharing one material.
Side question: has anyone experimented with compressing VAT textures? Those things can get huge!
cc @alexchuber
I’d love to see this feature in the engine honestly
Would you be open to doing a PR, @knervous? Maybe with Alex's help
After getting some results across different GPUs, it seems that even when mapping engine capabilities to the basis compression type, I end up with some bad sampling, so it's not a given that this could work right out of the box… I'm also a bit over my head here, maybe.
I'll continue tinkering with it, but I don't know if I have the background to know all the gotchas in this setup. If I take another pass and get consistent results, I'll circle back and see if it fits in the engine.
Wow, this looks awesome! That’s a significant difference in memory.
As you've probably noticed, we don't currently support 2D texture arrays in our basis, ktx, and dds loaders: only 2D and cube textures are handled. I believe these loaders actually predate, or nearly predate, texture 2D array support in major browsers.
A logical first step might be expanding our texture loaders to handle 2D texture arrays, similar to how cube textures are currently supported. Understandably, that’s not a trivial task, so maybe just updating RawTexture2DArray to expose compression parameters would be a better start.
Also, you mentioned how the RawTexture2DArray methods in the engine assume s3tc compression by default. That looks like a bug, since there’s no real reason (as far as I know) to lock it to that format.
As for the bad sampling… out of curiosity, are you copying the compression type mappings from our basis or KTX loaders?
Love the setup you’re using. You’ve got a real neat combo of optimizations. Thanks for sharing all the details and results!
Thanks! Here’s an updated PG with some preflight config for the basis transcoder to tell it which formats to try to transcode to…
I'm not super familiar with the API; IIRC those params are just passed into the compiled wasm, but if a format is enabled and the transcode comes back with a success, it will yield all the data in that format, along with the basis format. From there I'm using the tools in the engine: `glInternalFormat = BABYLON.GetInternalFormatFromBasisFormat(basisFormat, engine);`
I don't know if this logic already exists somewhere in the engine for checking what we should potentially be asking for:
```javascript
if (isWebGL) {
    const gl = engine._gl;
    supportedCompressionFormats.etc1 = !!gl.getExtension('WEBGL_compressed_texture_etc1');
    supportedCompressionFormats.s3tc = !!gl.getExtension('WEBGL_compressed_texture_s3tc');
    supportedCompressionFormats.pvrtc = !!gl.getExtension('WEBGL_compressed_texture_pvrtc');
    supportedCompressionFormats.etc2 = !!gl.getExtension('WEBGL_compressed_texture_etc');
    supportedCompressionFormats.astc = !!gl.getExtension('WEBGL_compressed_texture_astc');
    supportedCompressionFormats.bc7 = !!gl.getExtension('EXT_texture_compression_bptc');
} else if (isWebGPU) {
    // WebGPU: feature flags live on the GPUDevice itself (no await needed)
    const features = engine._device.features;
    supportedCompressionFormats.etc1 = false; // ETC1 is not part of WebGPU
    supportedCompressionFormats.s3tc = features.has('texture-compression-bc'); // BC1-BC5
    supportedCompressionFormats.pvrtc = false; // PVRTC is not part of WebGPU
    supportedCompressionFormats.etc2 = features.has('texture-compression-etc2');
    supportedCompressionFormats.astc = features.has('texture-compression-astc');
    supportedCompressionFormats.bc7 = features.has('texture-compression-bc'); // same feature covers BC6H/BC7
}
```
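Once those flags are gathered, picking the transcode target is just a preference walk. A hypothetical helper (the name and ordering are my own, not engine API):

```javascript
// Given capability flags like the supportedCompressionFormats object above,
// pick the best basis transcode target, falling back to raw RGBA when no
// compressed format is available.
function pickTranscodeTarget(caps) {
  const preference = ['astc', 'bc7', 'etc2', 's3tc', 'etc1', 'pvrtc'];
  for (const fmt of preference) {
    if (caps[fmt]) return fmt;
  }
  return 'rgba'; // transcode to uncompressed pixels as a last resort
}

console.log(pickTranscodeTarget({ astc: false, bc7: true, s3tc: true })); // 'bc7'
console.log(pickTranscodeTarget({})); // 'rgba'
```

The ordering is debatable (quality vs. size trade-offs differ per format); the point is just that the preflight config and the target selection can be driven from one capability table.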
I'd be happy to take a look sometime at some small PRs around compression on RawTexture2DArray and RawTexture; it might be a bit until I get around to it. I'm busy baking this solution into my web game now. If anyone is interested, it's EverQuest in the browser, yeehaw 8) https://eqrequiem.com