Loading hitch when loading large assets at runtime?

I'll take a look when I get home

Nice, that confirms power of 2 is significant for reasonably sized textures: 300ms to 180ms, down to 49ms. That Pixlr online image editor is super convenient, glad I found it. I put it on my bookmarks bar.

Somewhat unrelated, but I also found this open source catalog of super high quality materials from AMD: MaterialX Library

OK, put me to work, what did you want me to look at? BTW, check this out, try running it locally. It preloads a bunch of textures and sounds using a service worker, with a GUI to delete the cache for testing.
GitHub - jeremy-coleman/vite-sw-prefetch-cacheness. I have an idea, but we need a way to create file manifests for GitHub repos. Any clever ideas that won't run into a rate limit, require auth tokens, or require downloading a repo?
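
The closest thing I know of is GitHub's Git Trees API, which lists every file in a public repo in one unauthenticated request, but it's still capped at roughly 60 requests per hour per IP, so it only really dodges the rate limit if you cache the result or run it at build time. Rough sketch (owner/repo/branch are placeholders):

```ts
// Sketch: build a file manifest for a public GitHub repo via the Git Trees API.
// Unauthenticated calls are rate limited (~60/hour/IP), so cache the result.
async function fetchRepoManifest(
  owner: string,
  repo: string,
  branch = "main"
): Promise<string[]> {
  const url = `https://api.github.com/repos/${owner}/${repo}/git/trees/${branch}?recursive=1`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const data = await res.json();
  // Keep only files ("blob" entries), skip directories ("tree" entries).
  return data.tree
    .filter((entry: { type: string }) => entry.type === "blob")
    .map((entry: { path: string }) => entry.path);
}
```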

I was already sleeping but had nightmares… The Matrix is playing with me. So, the transcoder you linked here is not working for me. Any ideas?

Sir, yes sir! :sunglasses:

EDIT: Cool stuff!!

Hmm, I think the problem is probably that the module needs to be initialized first, but I actually think that is an outdated version. There have been some bugs in the Draco compression libs, but I don't know much about that, or about the relationship between Draco, glTF, and Basis, but they were talking about it here.

Consider adding gzip or brotli compression for Draco and Basis WASM binaries

labris linked this one, probably better than anything else.
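
By "initialize the module first" I mean the usual Emscripten-style setup, roughly like this. I'm going from the basis_universal JS wrapper here, where the factory is called BASIS and it exposes initializeBasis(); the exact names depend on the build that demo ships, so double-check against it:

```ts
// Rough sketch of the usual init pattern for an Emscripten-built transcoder.
// BASIS / initializeBasis come from the basis_universal wrapper; verify the
// names against the actual build you're loading.
declare const BASIS: (moduleOverrides?: object) => Promise<any>;

let transcoderPromise: Promise<any> | null = null;

function getTranscoder(): Promise<any> {
  // Instantiate and initialize the WASM module once, then reuse it.
  if (!transcoderPromise) {
    transcoderPromise = BASIS().then((module) => {
      module.initializeBasis(); // must run before creating any BasisFile
      return module;
    });
  }
  return transcoderPromise;
}
```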

Sorry, I don’t get this. Rate limit, tokens, repo downloading? For what reason?

to get a list of files for a service worker to prefetch
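
The service worker side is basically just this; the interesting part is generating the manifest automatically. Minimal sketch, with PRECACHE and ASSET_MANIFEST as made-up names for illustration, not taken from the linked repo:

```ts
// sw.ts — minimal precache sketch.
const PRECACHE = "prefetch-cache-v1";
const ASSET_MANIFEST: string[] = [
  "/textures/example_diffuse.jpg",
  "/sounds/example_ambient.mp3",
];

self.addEventListener("install", (event: any) => {
  // Download and cache everything in the manifest before the worker activates.
  event.waitUntil(
    caches.open(PRECACHE).then((cache) => cache.addAll(ASSET_MANIFEST))
  );
});

self.addEventListener("fetch", (event: any) => {
  // Serve cached responses first, fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```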

Unfortunately, still:
[screenshot]

The design is awesome! Now digging into the code.

Yeah, it feels really good. It's a very bespoke implementation though. The asset list is in that JSON file, but they must have created it offline manually somehow.

This may depend on how many pixels the engine has to replace/stretch/whatever, because my 8191x8191 and 8192x8192 textures are loading at the same speed.

I suspect it's because they're so big that the rate-limiting step becomes something else, not related to the GPU loading speed. So the GPU bit is still probably slower; it's just that the majority of the time is spent somewhere else. Or maybe it chunks it into power of 2 blocks and only the final blocks are slow, I don't know. But on reasonably sized textures it's definitely pretty significant. 8k textures are absurd lol

Oh, by big I mean resolution and file size, not layout size.

Can you please tell me more about the whole scenario? Maybe I'm being dumb, but I still don't get what the issue is here :joy:

Let me try it and convince myself :joy:

Maybe nowadays. But I will make them smaller and test with them. I had a colleague (is this from French? eau lol), a 3D artist, who was hired because he did some pretty cool renders and a lot of game assets as well. He was working on a 3D model of one of our buildings and delivered a 1.2GB Blender file, 300 MB of GLB per floor. He used 4k textures everywhere. A table? 4k. A chair? 4k. Even a computer mouse had 4k textures. The funny part was that the textures had actual pixels only in the top left corner, covering maybe 5% of the whole texture area. He told me later that he was even thinking about 8k. So we definitely need 8k lol

So if you look at the screenshot nick posted above, for #1 he had 300ms vs 160ms for #2.
The 160ms file is LARGER in file size at 16MB, yet 2x faster to load because it's a power of 2.
So without a doubt, power of 2 matters.

The next comparison is #3 square vs #4 rectangle. I took the same file and lowered the resolution so both files have the same data size; they're both 5MB. #3 is a rectangle at lower resolution and #4 is a square at higher resolution. The square one loaded in 105ms vs the rectangle at 130ms. The article I linked above said square textures were faster to load, I wanted to see whether that is true, and it seems to be.

But the reason you're not seeing any difference is that your textures are huge, like 150MB. The browser has to schedule a lot of stuff in the background, plus encrypt and decrypt because of HTTPS. There's so much going on that it will be slow no matter what, so you can't really isolate the GPU loading time. So, if you had a scene where you needed to load 20 small textures, or wanted to stream textures, having them power of 2 and square can seemingly make them load a lot faster, as long as they are small in MB size (like 15MB or less, probably).
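
If you want to test with smaller files, the quickest way I know to force power of 2 without an image editor is to resample onto a canvas first. Rough sketch with plain browser APIs, nothing engine-specific (it costs quality, so it's only for experiments):

```ts
// Round a dimension up to the next power of two.
function nextPowerOfTwo(n: number): number {
  return Math.pow(2, Math.ceil(Math.log2(n)));
}

// Resample an image onto a square, power-of-two canvas.
// This trades quality for upload speed, so it's only meant for testing.
function toPotSquare(image: HTMLImageElement): HTMLCanvasElement {
  const size = nextPowerOfTwo(Math.max(image.width, image.height));
  const canvas = document.createElement("canvas");
  canvas.width = size;
  canvas.height = size;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D context unavailable");
  ctx.drawImage(image, 0, 0, size, size); // stretch to fill the square
  return canvas;
}
```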

I think they keep the super high res textures to bake shadow maps and normal maps or whatever other tricks there are, but then you ship low res textures on the model itself and use the high res maps to somehow make it look better. I don't know much about that stuff though; @labris probably does.

I get your point and I will try the smaller ones. I do believe there must be some difference, because there is extra work to align the texture to POT, but I am not yet convinced that the differences are that big.

This doesn't apply, because the timer starts after the data is already in an ArrayBuffer. However, there is a texture init observable on the engine object. I'll try to hook the timer start there. (Writing from my phone, I don't remember the exact observer name.)
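
If I'm remembering the right one, the hook would look roughly like this. I think the engine has onBeforeTextureInitObservable and textures have onLoadObservable, but the names are from memory and should be double-checked against the docs:

```ts
// Rough sketch: start a timer when the engine begins initializing a texture,
// stop it when that texture reports it has loaded. `engine` is the Babylon.js
// engine instance from the scene setup.
const startTimes = new Map<unknown, number>();

engine.onBeforeTextureInitObservable.add((texture) => {
  startTimes.set(texture, performance.now());
  texture.onLoadObservable.addOnce(() => {
    const start = startTimes.get(texture);
    if (start !== undefined) {
      console.log(`texture ready in ${(performance.now() - start).toFixed(1)} ms`);
      startTimes.delete(texture);
    }
  });
});
```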

Pouring some Irish whisky and enjoying a cigarette. 3:26 AM lol

Would be interesting to know KTX timings

Hey @labris! Welcome buddy!
I've created a PG, but I was not able to load the KTX texture from the ArrayBuffer, so I couldn't start the timer exactly when the texture binding starts. Any ideas on how to do it before I start to google?