In my experience, per-line timings are simply unreliable and can’t be trusted (or maybe we just don’t interpret them correctly, I don’t know).
Your first screenshot:
You said the wrong timing for
mesh._internalAbstractMeshDataInfo._currentLODIsUpToDate = false is a sampling error, but I don’t see how Chrome could make such a sampling error… When it samples the running code, that line is either executing or it isn’t, so its time should either be counted or not.
In this screenshot, this line takes 3x more time to execute than
mesh.computeWorldMatrix, whereas in your latest screenshot it is
computeWorldMatrix that takes 3.6x more time to execute… The same goes for
totalVertices.addCount, which is faster than
computeWorldMatrix in this snapshot but 2.5x slower in the first one.
The test you should perform is to compare the fps with and without the relevant code commented out (thanks to @Pryme8 we can now easily run these tests) and see if you can detect a significant difference (which is not easy either, see my other post above). Note that you should use the code with the
if (!this.disableTotalVerticesPerfCounter) guard around the
_totalVertices update, because we won’t be able to simply remove this line.
I agree we should change the order of the
if (!mesh.isReady() || !mesh.isEnabled() || mesh.scaling.hasAZeroComponent) test, though. I will make a PR for that.
As for the mesh.isReady function, it has to check a number of things before it can return
false; there’s really no way around it…
In my experience, performance profiling of js code is always difficult, if not impossible, even when you try to get rid of everything that could interfere with the results (close all running apps on your computer, use an incognito window in your browser, run your tests several times and take the mean, …). I spent hours trying to do this, and often the end result was that… there were no results; I couldn’t produce reproducible test cases!
In the present case, as we know what to test/change in the code base, we can run a real-world scenario (PG, project) and measure the impact: that’s probably the best thing to do.