@TooCalm, the approach I would take would be similar… multiple noise textures that have their UVs animated in both offset and scale at different speeds and along different vectors. I would likely bake one texture with four masks, one per channel, and multiply or add them against one another so that you get an organic shape that changes over time. You would be sampling the same texture four times in the shader, but channel packing saves on download size and memory compared to loading four separate images, so it’s a tradeoff.
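To make the channel-packed idea concrete, here’s a rough sketch of the shader side. Everything here is illustrative, not drop-in code: the `noisePacked.png` path is hypothetical, and the scales, scroll speeds, and multiply/add weights are placeholders you’d tune to taste.

```ts
import { Effect, Mesh, Scene, ShaderMaterial, Texture } from "@babylonjs/core";

// Hypothetical packed texture: R/G/B/A each hold one grayscale noise mask.
Effect.ShadersStore["cloudMaskVertexShader"] = `
precision highp float;
attribute vec3 position;
attribute vec2 uv;
uniform mat4 worldViewProjection;
varying vec2 vUV;
void main(void) {
    vUV = uv;
    gl_Position = worldViewProjection * vec4(position, 1.0);
}`;

Effect.ShadersStore["cloudMaskFragmentShader"] = `
precision highp float;
varying vec2 vUV;
uniform sampler2D noisePacked; // one texture, four masks channel-packed
uniform float time;
void main(void) {
    // Sample the same texture four times at different scales and offsets,
    // each channel scrolling at its own speed and direction.
    float a = texture2D(noisePacked, vUV * 1.0 + vec2( 0.010,  0.003) * time).r;
    float b = texture2D(noisePacked, vUV * 2.0 + vec2(-0.007,  0.011) * time).g;
    float c = texture2D(noisePacked, vUV * 0.5 + vec2( 0.004, -0.009) * time).b;
    float d = texture2D(noisePacked, vUV * 3.0 + vec2(-0.012, -0.002) * time).a;

    // Multiply/add the masks so the combined shape never visibly repeats.
    float clouds = clamp(a * b + c * d * 0.5, 0.0, 1.0);
    gl_FragColor = vec4(vec3(clouds), 1.0);
}`;

export function makeCloudMaskMaterial(scene: Scene, mesh: Mesh): ShaderMaterial {
    const mat = new ShaderMaterial("cloudMask", scene, "cloudMask", {
        attributes: ["position", "uv"],
        uniforms: ["worldViewProjection", "time"],
        samplers: ["noisePacked"],
    });
    mat.setTexture("noisePacked", new Texture("textures/noisePacked.png", scene)); // placeholder path
    let t = 0;
    scene.onBeforeRenderObservable.add(() => {
        t += scene.getEngine().getDeltaTime() * 0.001;
        mat.setFloat("time", t);
    });
    mesh.material = mat;
    return mat;
}
```

The exact channels, speeds, and blend math are where the art direction happens; the point is that it’s four cheap samples of one texture rather than four separate textures.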
The reason I would bake it ahead of time is that texturing applications like Substance Designer can use several costly processes that can’t run in real time to make realistic textures, and you get a lot of control to create cloud masses that feel right. This is the planet I made for the Space Pirates demo for the Babylon.js 5.0 release. The entire texture set is procedurally generated, so all land masses, colors, clouds, polar caps, etc. can be regenerated just by changing a few parameters, and a new texture set can be saved out.
Generating this in real time would be possible if you do it as a texture-generation step at load, but it will take time. Trying to do it in the shader at 60 fps is unrealistic. For example, generating one iteration of a cloud noise in that Designer graph takes nearly 18 ms:
I have nearly a dozen of these nodes in my graph, and that’s just generating the noise; it doesn’t include any of the modifying, combining, warping, or other operations I apply once they are generated. That leaves us with a few options.
If you want to generate the noise procedurally and then combine multiple noises at runtime so that you always have different masks to work with, I would build them as procedural textures using Node Material. That way you have control over how the noise is built, but you only generate it once. You can then pass the resulting texture to the shader that combines and animates the masks, so the noise is generated once at launch rather than every frame. I would only regenerate the noise per frame if the noise pattern itself needs to change in real time. In this sense, it’s a matter of understanding the diminishing returns.
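A minimal sketch of the “generate once at launch” flow, assuming you’ve saved your noise graph as a Node Material Editor snippet (the snippet ID and the 512 texture size below are placeholders; you could also load a saved .json with `NodeMaterial.ParseFromFileAsync` instead):

```ts
import { NodeMaterial, Scene, ShaderMaterial } from "@babylonjs/core";

// "YOUR_NOISE_SNIPPET" is a placeholder Node Material Editor snippet ID.
export async function bakeNoiseOnce(scene: Scene, cloudMat: ShaderMaterial): Promise<void> {
    const noiseGraph = await NodeMaterial.ParseFromSnippetAsync("YOUR_NOISE_SNIPPET", scene);

    // Render the node graph into a texture a single time at launch.
    const baked = noiseGraph.createProceduralTexture(512, scene);
    if (!baked) {
        return;
    }
    baked.refreshRate = 0; // 0 = render once, then reuse the result every frame

    // Hand the baked noise to the shader that scrolls and combines the masks.
    cloudMat.setTexture("noisePacked", baked);
}
```

The `refreshRate = 0` line is what keeps this a one-time cost; if you ever do need the pattern itself to evolve, you can bump the refresh rate up and pay the per-frame price knowingly.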
If you are regenerating dynamic noise per frame, how visible are your cloud shadows? Are they subtle enough that your users won’t notice the change in the mask shape? If you spend compute resources per frame generating these shadows, what do you need to trade for it? What devices do you expect to hit your experience… only desktop machines, or mobile and low-end devices as well? Every cost has an impact on your frame rate, and not all devices will be equal due to their graphics card (or lack thereof).
If your experience centers around the clouds, I can see allocating more resources to them. However, if you have an entire environment of assets to look at, a simpler approach to the clouds may be enough to make your scene shine without taking too many resources.