Does anyone know of a benchmark that shows frame rate at various poly counts across various devices and RAM configurations? I’m wondering how to optimize toward an ideal count, but I’m not sure at what point it gets slow on which devices.
For our app we created a synthetic benchmark that simply duplicates a mesh of fixed polycount at fixed intervals, monitoring min, max & average FPS, draw calls and RAM at each iteration, then outputting data to CSV.
We find it’s good for comparative device benchmarking and optimisation, but there are so many factors that contribute to performance and user responsiveness.
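For anyone wanting to try something similar, here is a minimal, framework-agnostic sketch of the per-iteration bookkeeping such a benchmark needs (the class name, CSV columns, and sampling approach are my own assumptions, not the actual implementation described above): feed it one frame time per rendered frame during an iteration, then emit a CSV row of min/max/average FPS for that mesh count.

```javascript
// Sketch of per-iteration stats for a duplicate-a-mesh-and-measure benchmark.
// IterationStats is a hypothetical helper, not from any specific framework.
class IterationStats {
  constructor(meshCount) {
    this.meshCount = meshCount;   // how many duplicated meshes this iteration
    this.frameTimesMs = [];       // raw frame times sampled during the run
  }
  recordFrame(frameTimeMs) {
    this.frameTimesMs.push(frameTimeMs);
  }
  toCsvRow() {
    const fps = this.frameTimesMs.map((t) => 1000 / t);
    const min = Math.min(...fps).toFixed(1);
    const max = Math.max(...fps).toFixed(1);
    const avg = (fps.reduce((a, b) => a + b, 0) / fps.length).toFixed(1);
    return `${this.meshCount},${min},${max},${avg}`;
  }
}

// Example: one iteration with 10 duplicated meshes and three sampled frames.
const stats = new IterationStats(10);
[16, 20, 16].forEach((t) => stats.recordFrame(t));
console.log("meshes,minFps,maxFps,avgFps");
console.log(stats.toCsvRow()); // → 10,50.0,62.5,58.3
```

In a real run you would also record draw calls and memory per iteration (e.g. via the engine's instrumentation APIs) and append each row to a file instead of logging it.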
I wonder if @PatrickRyan has done one ?
@sebavan and @bigrig, I have not done a generic polycount vs. performance study because it is really hard to do in isolation from a set of constraints and goals. One experience may have a specific need that isn’t present in another, and trying to nail down a generic comparison may end up applying to neither.
I agree with @inteja’s approach of testing assets that share the characteristics you assume your final assets will have, before you even start on those final assets. For example, if you know you need to ship an asset made up of a lot of meshes because you need a lot of material breaks, you will incur the cost of the extra draw calls. If, on the other hand, your assets could be delivered as one mesh and material, you would save those draw calls and maybe even gain extra headroom for a higher poly count.
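As a toy model of that tradeoff (my own illustrative numbers, not from any real scene): each mesh costs roughly one draw call per material section it uses, so the same geometry delivered as many single-material meshes is far more expensive than one merged mesh carrying a multi-material.

```javascript
// Rough draw-call estimate: one draw call per material section per mesh.
// This is a simplification; real engines add passes for shadows, etc.
function estimateDrawCalls(meshes) {
  // meshes: array of { materials: number } (material sections per mesh)
  return meshes.reduce((total, m) => total + m.materials, 0);
}

// 50 separate one-material meshes vs. one merged mesh with a
// multi-material of 4 sub-materials (one draw call per section).
const separate = Array.from({ length: 50 }, () => ({ materials: 1 }));
const merged = [{ materials: 4 }];
console.log(estimateDrawCalls(separate)); // 50
console.log(estimateDrawCalls(merged));   // 4
```

The point is that the draw-call budget, not just the triangle budget, often decides what poly count you can afford.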
There are also other factors - like post-processes or the number of assets that need to be drawn on screen simultaneously - that could affect your performance budget and potentially make you want to cut back on resources elsewhere.
If I were to approach this and just wanted straight perf vs. poly count, I would set up a single sphere at different resolutions and then test each one on a feature phone, an iOS device, an Android device, a mid-range laptop, a mid-range desktop, and a high-end desktop. You may be able to see some trends and map a curve per device.
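To plan that resolution sweep, it helps to know the triangle count each sphere setting will produce. A quick sketch, assuming a standard UV sphere (two pole fans plus rows of quads split into triangles; exact counts vary by engine and builder options):

```javascript
// Triangle count of a standard UV sphere: 2 fans of `slices` triangles at
// the poles, plus (stacks - 2) rows of `slices` quads = 2 triangles each.
// Total simplifies to 2 * slices * (stacks - 1).
function uvSphereTriangles(slices, stacks) {
  return 2 * slices * (stacks - 1);
}

// Sweep a few resolutions to pick the poly counts worth testing per device.
for (const n of [8, 16, 32, 64, 128]) {
  console.log(`${n}x${n} sphere: ${uvSphereTriangles(n, n)} triangles`);
}
```

Running each of those spheres on each target device and logging FPS gives you the raw points for the per-device curve mentioned above.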
That said, I did accidentally run a stress test on my desktop PC as we were getting ready for release that may interest you. It happened while I was creating the asset we used to show off decal maps, and the following was my response to the team about the inadvertent stress test:
So I will admit I was a little surprised when I did this. I exported a displaced mesh with max settings from Substance Painter just to see what I would get - there was no data as to what the triangle count would be when tessellating. It turned out to be a little over 3.5 million triangles, which I knew was laughably large, and the binary for the glTF was 355 MB. I figured, what the hell, I want to see what the sandbox does with it.
Thanks to my connection, I dropped in the file and it rendered almost immediately (under 2 seconds), at a solid 60 fps with an absolute FPS around 200. Even with 167 meshes, 166 draw calls, and 10.7 MILLION vertices, we still have an inter-frame time of ~12 ms. Spinning the arc-rotate camera is very smooth, and the absolute FPS and inter-frame time actually improve while rotating the camera.
All that to say… nice work everyone. This is something I thought could choke the engine due to the asset not being optimized.