I’m making hardware suggestions for consumers who use Babylon.js to render CPU-heavy scenes in real time: thousands of meshes and animation channels, with bones, mirrors, shadows, and post-processes enabled.
Choosing a GPU is simple: just buy the latest, highest-end one they can afford, since newer cards almost always come with more FLOPS, fill rate, and VRAM. For CPUs, things are quite different.
Browsers and the WebGL APIs were born single-threaded, so single-core performance is key. Modern CPUs do come with advanced ISA extensions like SVE and AVX-512, which can greatly lift benchmark results, but those are rarely used by browsers.
There are other things to consider too, like cache efficiency and memory latency: AMD’s 3D V-Cache CPUs have extra L3 cache (advertised to perform better in games), and 13th/14th-gen Intel parts have lower memory latency than the latest generations, yet neither shows better scores on typical single-core benchmarks like Cinebench.
On the other hand, gaming tests with native, modern graphics APIs also show a difference: multi-threaded rendering, GPU culling, bindless resources, and multi-draw indirect reached production roughly 10 years ago with D3D12, making native games a completely different workload from WebGL.
It’s known that manual tuning, like disabling SMT/Hyper-Threading, E-cores, or extra CCDs, can give better single-core performance and lower memory latency, and overclocking can push that further.
Another thing to choose is the OS: Linux is cheaper but was known for suboptimal graphics performance for years, and the latest Windows 11 is said to be much slower than Windows 10.
So here is the question: how do you choose optimal hardware for CPU-heavy scenes without actually owning and benchmarking it? Which benchmark scores matter most? Or are there web- or Babylon.js-based benchmarks available for different hardware?
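For what it’s worth, one crude way to compare machines without a full Babylon.js scene is a tiny single-core microbenchmark approximating the per-frame matrix work (this is a hypothetical sketch I’m assuming roughly tracks that workload, not an official Babylon.js benchmark; it runs in Node or a browser console):

```javascript
// Hypothetical single-core microbenchmark: many 4x4 matrix multiplies per
// simulated "frame", a rough proxy for the bone/world-matrix updates that
// dominate CPU time in heavy scenes. Not an official Babylon.js benchmark.
function multiply4x4(a, b, out) {
  for (let r = 0; r < 4; r++) {
    for (let c = 0; c < 4; c++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[r * 4 + k] * b[k * 4 + c];
      out[r * 4 + c] = sum;
    }
  }
}

// Identity matrices, so the result is easy to sanity-check.
const a = new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]);
const b = Float32Array.from(a);
const out = new Float32Array(16);

const MULTIPLIES = 100000; // pretend: one multiply per animated bone/mesh
const t0 = performance.now(); // global in modern Node and in browsers
for (let i = 0; i < MULTIPLIES; i++) multiply4x4(a, b, out);
const ms = performance.now() - t0;
console.log(`simulated frame: ${ms.toFixed(2)} ms for ${MULTIPLIES} multiplies`);
```

Comparing this number across machines should track single-core JS throughput far better than a multi-core Cinebench score does, though it still ignores driver overhead and GC pauses.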
Also, should overclocking, whether of the CPU, bus, cache, or memory, be used in production? Will it make the system unstable like it did many years ago?
For OS and drivers: which OS performs better for Babylon.js? Should I stay with the OEM stock driver, or get the latest one from the internet?
I have no idea, but if people know, I’m interested to learn about it. On my side, I always recommend the latest AMD CPU, but that’s not really based on any science
Hey!
I’ll give my point of view, but please note that since I’m a Linux + AMD user, it’s not 100% impartial… Take it as is ^^
To me that’s not true, or at least it’s outdated. Indeed, 10 years ago Linux lacked up-to-date GPU drivers. But now the most recent drivers are aligned with heavy GPU usage on Linux, I would say mostly due to the AI world and the fact that a lot of researchers in this field use Linux… which (AI) is, by the way, why I switched myself ^^
Also, I’ve done deep usage and design of GPU ray-tracing render farms for Houdini, for very CPU-demanding scenes… The choice was undoubtedly made in favor of Linux because of its superior CPU-side performance compared to Windows (reduced system overhead, better I/O, etc.).
For the CPU choice, AMD is known to work better with Linux while Intel works better with Windows, but maybe that’s not up to date either
Bonus: last week I needed a file from my Windows PC, turned it on, took the file, tried to turn it off, but had the choice between “Update and shut down” and “Update and restart”, and I cried for my fellow Windows-user friends
… It had been a long time since something other than me got to decide what my work tool should do
Glad to hear that Linux graphics have gotten better these years. Is it stable enough for 24/7 rendering? Also, for browsers on Linux, are their graphics implementations still a step behind the Windows ones, or better now?
Well, updates keep the system safe (and controlled by Microsoft); if something goes wrong, use advanced startup and roll back there. There are also scripts all over the internet to “disable” updates, either via group policy, by disabling system services, or by blocking Windows Update IPs in the firewall.
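For reference, the group-policy route those scripts take usually boils down to one registry value (a sketch of the standard Windows Update policy key; apply at your own risk, and note it leaves the machine unpatched):

```
Windows Registry Editor Version 5.00

; Disables automatic updates via the Windows Update group policy.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"NoAutoUpdate"=dword:00000001
```

Deleting the value (or setting it to 0) restores the default automatic-update behavior.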
I only use Google Chrome, and the only additional step compared to Windows is specific to dedicated WebGPU usage: the official version is not (yet) considered stable there, so you would need to set your launch command like so:
google-chrome-unstable --enable-unsafe-webgpu --enable-features=Vulkan
But on my side I haven’t (yet) encountered any stability issues (though that’s statistically not significant, since I use stable + WebGL2 most of the time)