[I’m new to mipmapping and there seems to be almost no documentation on how Babylon generates mipmaps, how many, and what the costs are. But I gathered from the source code that it just uses an underlying WebGL call to do it.]
The generated mipmaps are pretty blurry compared to resizing with an image editor; I will probably create a playground tomorrow to show this.
My questions (I tried to answer the first 2 myself):
1/ Is there a way to use a different scaling algorithm for the mipmap generation? (Seems like no, based on the WebGL call.)
2/ What are the costs of the default mipmapping feature? (I suppose it’s just a one-time cost at generation time; I already checked that having multiple levels in view does not increase the amount of draw calls.)
3/ Is there a way to supply custom mipmaps, but keep the nice feature that selecting which one to sample from is still done by the engine?
1/ No, because we call a WebGL function to generate the mipmaps, as you noticed.
2/ As you say, generation is a one-off cost, so it shouldn’t matter. However, if you generate custom render target textures and enable mipmaps for these textures, mipmaps will be generated every time (so, probably every frame)! This may not be a problem (mipmap generation should be quite fast), but it’s still extra GPU time that you can save if you don’t enable mipmaps.
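For intuition on what that one-off cost buys, here is a small sketch of how big a full mip chain is. This is plain JavaScript with no Babylon APIs; the level-count formula matches what WebGL produces for a full chain, and the "roughly 1/3 extra memory" figure falls out of the geometric series, not out of anything Babylon-specific:

```javascript
// Number of mip levels in a full chain for a w x h texture:
// floor(log2(max(w, h))) + 1, since each level halves both dimensions down to 1x1.
function mipLevelCount(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// Total texel count of the whole chain, to estimate the extra memory.
function mipChainTexels(width, height) {
  let total = 0;
  let w = width, h = height;
  for (let level = 0; level < mipLevelCount(width, height); level++) {
    total += w * h;
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
  }
  return total;
}

console.log(mipLevelCount(256, 256)); // 9 levels for a 256x256 texture
console.log(mipChainTexels(256, 256) / (256 * 256)); // ≈ 1.33: roughly 1/3 extra memory
```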
3/ I don’t think there’s currently a way to provide custom mipmaps (@sebavan?), as no one has asked for it. But if someone is willing to create a PR for it, I think we’re okay with that!
Yup, correct: you can currently only do it with a .dds or an .env file.
There is nevertheless a function on the engine called `_uploadImageToTexture` that allows uploading the correct mips, but it would need a bit of wrapping to be exposed nicely.
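A sketch of what such a wrapper might look like, purely as an illustration: `_uploadImageToTexture` is internal (as noted above), and the exact parameter order used here — `(texture, image, faceIndex, lod)` — is an assumption, so check the engine source before relying on it. The idea is simply to upload one pre-scaled image per mip level instead of calling the automatic generation:

```javascript
// Hypothetical wrapper around the internal engine._uploadImageToTexture
// mentioned above: given one pre-scaled image per mip level, upload
// levels 0, 1, 2, ... yourself instead of letting WebGL generate them.
// (The signature of _uploadImageToTexture is an assumption here.)
function uploadCustomMips(engine, texture, mipImages, faceIndex = 0) {
  mipImages.forEach((image, lod) => {
    engine._uploadImageToTexture(texture, image, faceIndex, lod);
  });
}
```

In real code you would also have to create the texture with automatic mipmap generation disabled, so your uploaded levels aren’t overwritten.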
I also saw here that WebGPU might not support this `generateMipmap` function, so Babylon may need to implement it itself for that reason; maybe that is relevant to this discussion:
The .env texture, for instance, stores the mipmaps in a really special way to simulate roughness in PBR materials, so we need to be able to load them as we see fit.
For WebGPU, @Evgeni_Popov implemented a function to generate the mipmaps.
It is safely usable on the engine, but it is not exposed at the moment because it is so niche: it is usually associated with a higher-level feature like env maps, dds, compressed textures… What is your use case?
The use case is static textures in a 2D view that I want to look as good as possible at different sizes (in a single draw call). I could make an implementation myself, but I would like to reuse as much engine code as possible and would be afraid of hurting performance.
Actually, it’s specific to WebGPU and simply performs one rendering per mip level, each one half the size of the previous. It uses bilinear filtering to compute mipmap X from mipmap X-1, so I suspect the result is more or less what you get in WebGL when you call the existing mipmap generation function.
In your case, you will probably want to apply a different algorithm when going from mipmap X-1 to mipmap X.
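As a sketch of that idea: computing mip X from mip X-1 is just a downsample-by-two, and the filter used at that step is exactly the knob you would turn. Below is a minimal box-filter version for a single-channel image in plain JavaScript (a box filter is roughly what bilinear sampling at texel centers gives you here); a sharper kernel such as Lanczos would replace the marked 2x2 average:

```javascript
// Compute mip level X (w/2 x h/2) from mip level X-1 (w x h), single channel.
// Assumes even dimensions, as in a power-of-two mip chain above 1x1.
function downsampleByTwo(src, w, h) {
  const dw = Math.max(1, w >> 1);
  const dh = Math.max(1, h >> 1);
  const dst = new Float32Array(dw * dh);
  for (let y = 0; y < dh; y++) {
    for (let x = 0; x < dw; x++) {
      // Box filter: average a 2x2 block of input texels. Swapping this for
      // a wider/sharper kernel is where a custom algorithm would plug in.
      const a = src[(2 * y) * w + 2 * x];
      const b = src[(2 * y) * w + 2 * x + 1];
      const c = src[(2 * y + 1) * w + 2 * x];
      const d = src[(2 * y + 1) * w + 2 * x + 1];
      dst[y * dw + x] = (a + b + c + d) / 4;
    }
  }
  return dst;
}

console.log(downsampleByTwo(new Float32Array([0, 2, 4, 6]), 2, 2)); // [3]
```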