File/Texture preloading

There are already a few topics about this.

And the problem is always the same: texture loading drops the FPS and introduces lag. I know the problem comes from JavaScript being single-threaded.

But what if we could just download the file beforehand without actually creating the texture? Loading the texture from already-downloaded data should obviously be much faster than fetching the URL and loading the texture in one go.

Thank you in advance.

You can preload textures (or anything else) and store them as Blobs in the browser. Later you can use the Blob as input for any type of asset you are loading.

A very naive example:

The fetch can be done beforehand, and the object URL for the Blob can be stored, or created when needed.
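A minimal sketch of that idea, assuming plain `fetch` and an in-memory cache (`preloadAsBlob` and `getObjectUrl` are illustrative names, not a library API):

```javascript
// In-memory cache of prefetched files, keyed by their original URL.
const blobCache = new Map();

// Download the file now and keep only the bytes; no texture is created yet.
async function preloadAsBlob(url) {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Failed to fetch ${url}: ${response.status}`);
  blobCache.set(url, await response.blob());
}

// When the texture is finally needed, hand out a blob: URL pointing at the
// in-memory data, so the loader skips the network round trip entirely.
function getObjectUrl(url) {
  const blob = blobCache.get(url);
  return blob ? URL.createObjectURL(blob) : url; // fall back to the network
}
```

The blob: URL can then be passed to whatever loader you would normally give the original URL.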


Unfortunately, my intention is to preload several cubemaps (without applying them), and prefetching the cube faces seems to have no effect on the performance of the first load.

Prefetching only makes sense if you have time to fetch in the background and you don’t need the files for the initial display of the scene.
It will improve performance if the cubemaps are displayed in a future scene.

That is exactly my case: I want to preload them to avoid the wait when I swap the current cubemap for another one.

From the article “Service Workers: an Introduction” (Google Web Fundamentals):

var CACHE_NAME = 'my-site-cache-v1';
var urlsToCache = [
  // list the assets to precache here
];

self.addEventListener('install', function(event) {
  // Perform install steps
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        console.log('Opened cache');
        return cache.addAll(urlsToCache);
      })
  );
});
Webpack plugins or Workbox can help auto-generate the file list. Or, even easier, use the react-scripts CLI.
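For completeness, the worker above still has to be registered from the page. A hedged sketch, assuming the worker script is served as `/sw.js` (adjust the path to your setup):

```javascript
// Register the service worker from the page. Once installed, requests for
// the precached URLs are answered from Cache Storage instead of the network.
function registerServiceWorker() {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return Promise.resolve(null); // not supported (or not running in a browser)
  }
  return navigator.serviceWorker
    .register('/sw.js')
    .then(function (registration) {
      console.log('Service worker registered with scope:', registration.scope);
      return registration;
    });
}
```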

It all depends on what the blocking part is. If it is about decoding PNGs, for instance, it could be moved to a worker thread and shared back as an ImageBitmap.
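A hedged sketch of that worker (`decodeToBitmap` is an illustrative name): `createImageBitmap` performs the decode inside the worker, and since ImageBitmap is transferable, posting it back moves the pixels to the main thread without a copy.

```javascript
// worker.js — decode images off the main thread.
async function decodeToBitmap(url) {
  const response = await fetch(url);
  const blob = await response.blob();
  return createImageBitmap(blob); // the decode happens here, in the worker
}

// Wire up the message handler only inside an actual worker context.
if (typeof WorkerGlobalScope !== 'undefined' && self instanceof WorkerGlobalScope) {
  self.onmessage = async (event) => {
    const bitmap = await decodeToBitmap(event.data.url);
    // The transfer list hands ownership of the bitmap to the main thread.
    self.postMessage({ url: event.data.url, bitmap }, [bitmap]);
  };
}
```

On the main thread, the received `event.data.bitmap` can then be handed to whatever texture path in your engine accepts an ImageBitmap.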

For heavy CPU work such as decoding HDR files, you could move the decode part to a worker and use a SharedArrayBuffer to share the result back (though this does not yet work in every browser).

If the GPU transfer is the slow part, there is not much that can be done beyond uploading the data in chunks/tiles to the GPU.
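A hedged sketch of chunked upload: allocate the texture once, then upload it tile by tile across frames so no single frame stalls. The tile size and the WebGL call shown in the comment are illustrative, not a tuned recipe.

```javascript
// Split a width x height image into tile rectangles of at most tileSize.
function makeTiles(width, height, tileSize) {
  const tiles = [];
  for (let y = 0; y < height; y += tileSize) {
    for (let x = 0; x < width; x += tileSize) {
      tiles.push({
        x, y,
        w: Math.min(tileSize, width - x),
        h: Math.min(tileSize, height - y),
      });
    }
  }
  return tiles;
}

// Then, one tile per frame (or per idle slot):
//   gl.texSubImage2D(gl.TEXTURE_2D, 0, tile.x, tile.y, tile.w, tile.h,
//                    gl.RGBA, gl.UNSIGNED_BYTE, tilePixels);
```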

How could I debug where the time is spent?

The Chrome dev tools are the best bet. Also, a small repro in the Playground would help us to help you.

This playground shows the improvement from using power-of-two textures and compression: GPU time dropped from 200 ms to 30 ms on the smaller textures. Playground #13 has a comparison for gigantic textures and KTX compression; the KTX speedup is about 25% on GPU load, but overall it feels much faster. Thanks @roland.

Really first, though, setting up a service worker will make everything much better. Service workers run in another thread, and you can do preprocessing in them.

For profiling, first check the Network tab with cache disabled. Then, if you need to, go to the Performance tab, record a profile, and sort by longest execution time.

Also, if you can share the file type and file size, someone will probably already know the best way to do it.


I’m using cubemaps, which makes it a bit harder since I have to download the six faces (JPG format), and I’m not sure I can use that approach. Maybe our guru @Deltakosh could say whether we can create a cubemap from six array buffers.
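In the meantime, one hedged workaround: turn the six prefetched ArrayBuffers into blob: URLs and feed those to whatever cubemap loader already accepts per-face URLs. The face order shown and the hand-off to the engine are assumptions, not a confirmed API.

```javascript
// Convert six prefetched face buffers (assumed order: +x, -x, +y, -y, +z, -z)
// into blob: URLs that a URL-based cubemap loader can consume.
function faceBuffersToUrls(faceBuffers) {
  if (faceBuffers.length !== 6) throw new Error('a cubemap needs exactly 6 faces');
  return faceBuffers.map(
    (buf) => URL.createObjectURL(new Blob([buf], { type: 'image/jpeg' }))
  );
}
```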

First, it would be nice to know what the slow part is, because I am not expecting the bottleneck to be decoding the six JPGs here.