This may be an off-topic question, but has anyone attempted server-side rendering for Babylon applications? The goal is to render a Babylon canvas in WebGL and take screenshots of the output.
So far the hypothetical options are:
Experiment with node-canvas (Babylon would have to run in Node, which I doubt it does)
Run a headless browser (the problem is screen rendering and WebGL on a display-less server; in the past I’ve attempted using X for virtual buffering/rendering)
Use the client to take screenshots and send base64 data (this would only work for some things but not all; a rough sketch follows this list)
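For that third option, something along these lines is what I have in mind. Babylon’s Tools.CreateScreenshot hands you a base64 data URL; the upload endpoint here is made up, swap in whatever your server exposes:

```ts
import { Tools, Engine, Camera } from "@babylonjs/core";

// Capture the current view as a base64 PNG and POST it to the backend.
// "/api/thumbnails" is a hypothetical endpoint, not a real one.
function uploadThumbnail(engine: Engine, camera: Camera): void {
  Tools.CreateScreenshot(engine, camera, { width: 512, height: 512 }, (dataUrl) => {
    fetch("/api/thumbnails", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ image: dataUrl }),
    });
  });
}
```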
I haven’t looked into it, but I was also curious about Babylon Native and how it ports to embedded devices. I’m wondering if there is a method in that process that could be used for SSR-like purposes.
The use cases for this include producing higher-resolution images for later download, e.g. an architecture firm designing 3D models and then needing flattened JPGs/PDFs. In my case it will be used for capturing thumbnails of a product that a customer dynamically creates, as well as supporting a fallback for browsers that don’t support WebGL.
Nope, I’m shocked that there’s even a doc for that. I thought for sure this would be an obscure request.
Ah yes, the doc even says “Configuring to use the GPU”. I really wanted to avoid that. I once had to solve this problem when I wrote a scraper: the headless driver wasn’t rendering a pixel that required display drivers (one of the anti-scraper tricks they deploy).
As a result I had to learn how to use the X Window System and buffer the request to at least get a render happening. The package I used was Xvfb; the write-up that got me there was “How To Run Your Tests Headlessly with Xvfb”.
I’m wondering if it would work for this. Using a Windows box is certainly NOT a viable solution, lol.
I’d suggest using Puppeteer (like the doc suggests) but, if possible, take your screenshots at build time rather than runtime. It can take about a second to open the tab, load Babylon, fetch the scene, etc., which is probably too long to expect a person to wait, and build time is much cheaper than runtime.
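A minimal sketch of the build-time capture, assuming your page exposes the scene at a known URL and sets its own readiness flag (window.sceneReady is a made-up name; you’d set it from scene.executeWhenReady):

```ts
import puppeteer from "puppeteer";

// Build-time capture: open the page, wait for the scene, screenshot, close.
async function captureScene(url: string, outPath: string): Promise<void> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 1920, height: 1080 });
    await page.goto(url, { waitUntil: "networkidle0" });
    // Hypothetical flag your app sets once Babylon has finished loading.
    await page.waitForFunction("window.sceneReady === true");
    await page.screenshot({ path: outPath });
  } finally {
    await browser.close();
  }
}
```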
From there, if your goal is a screenshot of a model rather than a scene, I’d run the captures through ImageMagick with a filter to highlight the edges a bit and then trim the background out. The lighting can make it hard to see the content when it’s a small screenshot.
I do this for generating thumbnails of items for a game, so I don’t have to manually add a thumbnail whenever I add an item.
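Roughly what that post-processing step looks like as a Node wrapper around ImageMagick’s convert; the filter values here are illustrative and worth tuning by eye, not exact settings:

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Sharpen edges slightly, then trim the near-uniform background and resize.
// Uses ImageMagick 6's `convert`; on ImageMagick 7 call `magick` instead.
async function makeThumbnail(input: string, output: string): Promise<void> {
  await run("convert", [
    input,
    "-unsharp", "0x1",    // mild edge emphasis so small thumbnails read better
    "-fuzz", "5%",        // treat near-background colours as background
    "-trim", "+repage",   // crop to the content and reset the canvas offset
    "-resize", "256x256", // final thumbnail size
    output,
  ]);
}
```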
I’m trying to do something similar. We have a Puppeteer instance running on an EC2 instance with a GPU. I’ve added a virtual framebuffer via Xvfb, and that has enabled running Puppeteer in headed mode, but I’m not seeing any speed improvement compared to running headless. Running a Windows VM isn’t really an option we’re considering, so if possible we want to use the GPU that comes with our EC2 instance to speed up the renders.
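For reference, this is roughly how we’re launching it; the flag set is a guess at what should coax Chromium onto the hardware GPU rather than something we’ve confirmed works (loading chrome://gpu afterwards shows what it actually picked):

```ts
import puppeteer, { Browser } from "puppeteer";

// Headed launch, intended to run under xvfb-run on the GPU instance.
async function launchWithGpu(): Promise<Browser> {
  return puppeteer.launch({
    headless: false,
    args: [
      "--use-gl=egl",            // request EGL rather than desktop GL
      "--ignore-gpu-blocklist",  // server GPUs are often blocklisted by default
      "--enable-gpu-rasterization",
    ],
  });
}
```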
I don’t think there’s any way to get this performant unless you pick a completely different stack. At minimum, make sure you give Node more memory (e.g. via --max-old-space-size) and focus on optimising the fundamentals of your deployment.
I’m not sure if you can run this headless? If you can, then obviously scrap Xvfb. When I was experimenting with this I wasn’t able to go headless, but my environment was more difficult than an EC2 instance with a GPU.
It definitely works in headless mode. We were anticipating a speed improvement by switching to a headed Puppeteer instance because on our local machines it made a big difference.