I searched YouTube but couldn’t find many videos showing real-life examples of using Spector to identify and fix performance issues. One area I can think of is using it to reason about why the number of draw calls is so high.
Is there anything out there that can help with beginner-level performance improvement using Spector?
@sebavan and @evidanary, I think there is likely a very common need for help understanding scene optimization and how to go about it in Babylon. Though the scope of something like that could be enormous. Even just documentation around Spector could be massive due to the expectation of understanding the WebGL stack to begin with. Documentation for the tool itself is fairly simple, but understanding how to use it - and more importantly WHY to use it - could be a book in itself.
At the most basic level, understanding where your draw calls are coming from is important. It’s great that the inspector information panel can tell you how many draw calls you have. If you understand how the choices you made while building your scene impact draw calls, you can likely guess how many draw calls you will have. It’s when the number you expect and the number you see are vastly different that you need to find the source of the problem. Spector can point you in the right direction because it lays out all of the draw calls for you. This helps you identify places where you thought meshes were merged but they aren’t, where meshes should be sharing materials but are not, or where transparency is forcing extra draw calls that you don’t expect.
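That “expected draw call” count can be sketched as a back-of-envelope tally. This is a hypothetical helper, not a Babylon.js API; the `materialCount`/`transparent` fields and the one-extra-call assumption for a blended pass are purely illustrative:

```javascript
// Hypothetical estimator: one draw call per (mesh, sub-material) pair,
// plus one assumed extra call for a blended pass on transparent meshes.
// Compare the result against what the inspector actually reports.
function estimateDrawCalls(meshes) {
  let calls = 0;
  for (const m of meshes) {
    calls += m.materialCount ?? 1; // multi-material meshes draw once per sub-material
    if (m.transparent) calls += 1; // illustrative: an extra blended pass
  }
  return calls;
}

const expected = estimateDrawCalls([
  { materialCount: 1 },                    // merged static geometry
  { materialCount: 3 },                    // mesh with 3 sub-materials
  { materialCount: 1, transparent: true }, // transparent mesh
]);
console.log(expected); // 6
```

If Spector shows far more calls than a tally like this predicts, stepping through the capture shows exactly which unexpected calls point at a missed merge or material-sharing opportunity.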
But beyond the surface, debugging shaders, post-process effects, offscreen canvases, or any number of other things that can impact performance would be hard to distill into a short doc section. I think it’s a part of the docs that is missing, but I will have to put some thought into what to cover, how to break it down, and how deep to go.
Today I ran into an issue where the draw call count was decent (~10, and AFAIU <100 is preferred, although I don’t know if 100 is OK for mobile devices), but the GPU frame time (which I’m guessing is the only relevant frame time for WebGL) was high. It turned out to be caused by setting the realTimeFilteringQuality of a PBRMaterial to HIGH. That was an easy fix as I was sort of aware of it. But for future GPU frame time slowness, what would you recommend I look for in the Spector.js window to locate the culprit?
You gave very relevant examples for identifying where you thought things were merged and they aren’t, where meshes should be sharing materials and they are not, or where transparency is forcing extra draw calls that you don’t expect - these will work great for figuring out why draw calls are high. Is there anything similar on the GPU frame time front?
Feel free to just list the pointers - I’m happy to go do the background WebGL research.
I’m going to ping @sebavan or @Evgeni_Popov back here as they would be able to give you some ideas as to what to watch for in that case.
Unfortunately, a lot of optimization comes down to the tradeoffs you decide on when you are making a scene. Realtime filtering of HDR images is always going to have a cost to it and that cost can be significant even in a modest scene. I think we called it “realtime filtering” to be scary and make people think, “oh, you are filtering this image 60 times a second… that’s a lot”.
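As a concrete illustration of that cost knob, here is roughly what dialing realtime filtering down looks like. This is a sketch using a plain stub object in place of a real BABYLON.PBRMaterial; the numeric values mirror Babylon.js’s Constants.TEXTURE_FILTERING_QUALITY_LOW/MEDIUM/HIGH (8/16/64 in recent versions, but verify against your build):

```javascript
// Stub standing in for a BABYLON.PBRMaterial, for illustration only.
// The quality value roughly tracks how much filtering work the shader
// does per fragment, every frame.
const TEXTURE_FILTERING_QUALITY = { LOW: 8, MEDIUM: 16, HIGH: 64 };

const material = {
  realTimeFiltering: true,
  realTimeFilteringQuality: TEXTURE_FILTERING_QUALITY.HIGH,
};

// Dropping HIGH -> LOW cuts the per-fragment filtering work substantially,
// which is why this setting shows up so clearly in GPU frame time.
material.realTimeFilteringQuality = TEXTURE_FILTERING_QUALITY.LOW;
console.log(material.realTimeFilteringQuality); // 8
```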
For the most part I approach a scene with a counter for how many times I want to increase costs, realizing that there are only so many things we can do in a frame before we start dropping frames. For example, do I need shadows? Then I have to draw that shadow texture every frame. Do I want those shadows to be soft? Now I have to tap that texture multiple times every frame, depending on the soft shadow calculations. Do I need physics to handle mesh collisions? Do I need ray casts to know if I’m hitting a mesh? Do I need post-process effects? What about lots of independent meshes that are not static? Basically, I always want to bake down everything I can so that the more expensive features I do need still fit within the frame budget.
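That “counter” can literally be a frame-budget tally. A minimal sketch, assuming a 60 fps target; the per-feature costs below are made-up placeholders, and real numbers come from profiling each feature in isolation:

```javascript
// Hypothetical frame-budget tally in milliseconds; the costs are
// invented for illustration, not measured values.
const FRAME_BUDGET_MS = 1000 / 60; // ~16.7 ms per frame at 60 fps

const features = [
  { name: "base scene",      cost: 6.0 },
  { name: "shadow map",      cost: 3.0 }, // shadow texture drawn every frame
  { name: "soft shadows",    cost: 2.5 }, // extra texture taps per fragment
  { name: "post-processing", cost: 4.0 },
];

const total = features.reduce((ms, f) => ms + f.cost, 0);
console.log(`${total.toFixed(1)} ms of ${FRAME_BUDGET_MS.toFixed(1)} ms budget`);
console.log(total <= FRAME_BUDGET_MS ? "within budget" : "over budget");
```

Baking lighting or merging static meshes frees room in a tally like this for the expensive features you actually need at runtime.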
I also try to test things out in a prototype in isolation so I can get an idea of what will be added. For example, we did a shader for a starfield in the Space Pirates demo - which you can find the code for in this repo - and while it was exactly what we wanted in terms of a procedural starfield that was different with every play, the shader was too expensive and the return on that expense was too small to be meaningful for the user. So we landed on a single texture for the starfield, which removed the shader from the budget so we could afford other things in the experience. I created a standalone scene for the shader approach so I could see how much time was being used just for this asset. We pivoted quickly once we started bringing the elements together because I already knew this method was heavy.
I think Spector is a great tool to break down what is happening in each frame, but I also think digging in and prototyping features in isolation is needed to understand the impact on your frame budget. I hope this perspective helps in some way.
For me, Spector is not a tool for debugging performance problems, but for debugging rendering problems. Spector doesn’t provide timings, for example. While it can also help you debug performance problems (by looking at the draw calls, you can see if you’re generating too many renderings, for example), that’s not its primary purpose IMO. PIX can be used instead (Windows only), if you want to see the precise times of the different graphics commands.
PIX could be useful in this respect. You can see a screenshot of its output in this post: