Sorry, no PG for this one as it is a bit difficult to set up in a PG. In our app we create new textures from data URLs (e.g. encoded WebP or JPEG images) quite often (as often as every few frames). I know BJS caches textures to avoid loading them again when they are referenced. I assume this caching uses browser memory to keep the cached texture. The question is: if I know I will never reference a texture more than once, what is the best way to either prevent the texture from being cached or to evict it from the cache? We have tried to avoid caching by creating our own texture subclass and using a raw texture as the internal texture, but to do this we needed to use a native image decoder, and it does not seem to be as fast as the decoder BJS uses. Are there best practices or optimizations for this use case, where we will be frequently swapping textures (containing encoded WebP or JPEG images) on a mesh?
The cache actually shares the underlying GPU texture, so at least there are no duplicates coming from the texture cache.
The only CPU-side cache is the one used to handle context loss, and you can disable it by setting doNotHandleContextLost to true in the engine constructor options.
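For reference, a minimal sketch of opting out of that CPU-side cache (the `canvas` variable is whatever your app already passes to the engine):

```javascript
// Sketch: disable the CPU-side copy of textures kept for context-lost recovery.
const engineOptions = {
    doNotHandleContextLost: true, // we accept losing textures on context loss
};
// `canvas` is assumed to be your app's existing rendering canvas:
// const engine = new BABYLON.Engine(canvas, true, engineOptions);
```

Note that with this option set, the app is responsible for restoring its own textures if the WebGL context is ever lost.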
Thanks. So other than disposing of the old texture, is there anything we can do to improve performance (memory or render time) in this scenario? How much of a perf difference would you expect between doing mesh.material.diffuseTexture = new BABYLON.Texture(dataUrl) and using a texture subclass where we replace the internal texture with a RawTexture on every update (every few frames)?
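For context, the first approach looks roughly like this sketch (swapDiffuseTexture is just an illustrative name; the texture factory is injected so the snippet stays self-contained, but in the real app it would be something like `() => new BABYLON.Texture(dataUrl, scene)`):

```javascript
// Sketch: swap in a freshly created texture and dispose the previous one,
// so the old GPU texture does not linger after it is no longer referenced.
function swapDiffuseTexture(material, makeTexture) {
    const old = material.diffuseTexture;
    material.diffuseTexture = makeTexture();
    if (old) {
        old.dispose(); // release the previous texture's GPU resources
    }
    return material.diffuseTexture;
}
```

The RawTexture variant would instead keep one texture alive and overwrite its pixel data each update, which is where the copy cost mentioned below comes in.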
This could impact perf a lot, since the content would need to be copied into the raw texture more often.
I appreciate the responses, but I’m still not understanding the answer to my question.
I am building the raw texture every frame from the raw bitmap. Are you saying that involves a lot of data copying on the WebGL side? Should I use a fragment shader and a render target texture, as you did for animated GIFs, to avoid this?
It always depends on the exact use case, and the result will be linked to both the frequency of updates and the size of the texture, which are among the most impactful factors. Could you describe how frequently you need to update? And how many variations of the picture you have? Also, are they all known upfront?
The scenario could be:
- Texture update (dynamic or not)
- GIF Texture (all frames are known beforehand and the content is usually small)
- Video Texture (all frames are known but cannot all fit in video memory)
- Streaming Texture (the frames are not known in advance and cannot all fit in memory)
Each of those benefits from a different usage pattern, so depending on your exact situation the best solution might be different.
Our scenario would best be characterized as Streaming Texture. We typically stream at a maximum of around 18 FPS. Each frame is unique (hence the desire to not cache textures). Texture sizes can range from small (120x120) to full screen resolution.
A raw texture sounds OK in this case, especially if you can keep the "frames" in browser memory, for instance, so a frame is always "hot" when you need it, and already in the format WebGL expects, to reduce the overall upload/decode cost.
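One way to read that suggestion, sketched with hypothetical names (FrameQueue is illustrative, not a Babylon.js API): convert each incoming frame to the RGBA layout WebGL expects as soon as it arrives, keep a small bounded queue of ready frames, and reuse a single RawTexture via its update() call rather than creating a new texture per frame.

```javascript
// Sketch: keep frames "hot" as pre-converted RGBA buffers and bound memory
// for a stream whose frames cannot all fit at once.
class FrameQueue {
    constructor(capacity) {
        this.capacity = capacity;
        this.frames = []; // Uint8Array RGBA buffers, already in WebGL's layout
    }
    push(rgba) {
        if (this.frames.length >= this.capacity) {
            this.frames.shift(); // drop the oldest frame to stay within budget
        }
        this.frames.push(rgba);
    }
    next() {
        return this.frames.shift(); // oldest ready frame, or undefined
    }
}

// In the render loop (assuming one RawTexture created at the stream size):
// const tex = new BABYLON.RawTexture(null, width, height,
//     BABYLON.Engine.TEXTUREFORMAT_RGBA, scene, false, false,
//     BABYLON.Texture.NEAREST_SAMPLINGMODE);
// const frame = queue.next();
// if (frame) tex.update(frame); // upload in place, no new texture allocation
```

Since each frame is unique and never reused, nothing is kept beyond the queue, which matches the "no caching" goal from the original question.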
Thanks. When using a raw texture, would you expect any perf benefit to using a render target and shader versus just swapping out the internal texture?
Using a render target with a shader means your data is already cached in a texture, which I believe is not the case here.