Ideas on beefy offline render?

I put a lot of work into a Babylon scene for a presentation, but I'm now stuck: I want to render with combinations of resolutions, render features, and frame rates that just aren't practical with browser rendering, even with tricks like the scene optimizer, an offscreen canvas, or a beefy machine with an expensive GPU.

I also need to get the output into MP4 files, possibly at 4K 60 FPS, but I can't even get decent WebM canvas recordings at low FPS and plain HD resolution prior to ffmpeg conversion to MP4: the quality is grainy, recordings randomly cut off too early, audio and video are misaligned, and there are a bunch of other issues when recording in a browser. I see a need to think outside the browser box.

I've seen various write-ups and docs on server-side rendering, such as with Puppeteer, and even some attempts to render heavy frames and then assemble them into a video at the desired speed, but I don't see a clear scene-to-recording recipe for my use case. The render doesn't need to be interactive or done in real time. It can take hours to render a 3-minute video on a server with a lot of CPU cores for all I care. I just need the resulting MP4 of a heavy render to be at the desired FPS and resolution.

Best ideas to try?

I have a similar need and am basically in the same boat as you. I think you're on the right track with Puppeteer. Have you looked at this doc?

I would try to run through the sample they have there as a primer.

Steps to get this to work on a remote cloud server would be:

  1. Figure out what kind of GPU machine you want to deploy in the cloud, e.g. G4 instances from AWS:
    Amazon EC2 G4 Instances — Amazon Web Services (AWS)
    These are the same class of GPU instances that render-streaming services like Stadia ran on.

  2. Configure and deploy your renderer to the instance (you can use something like Docker for more automated deployment and scaling). The catch is that headless servers need some tricky configuration before hardware-accelerated graphics will work. There is some pretty good documentation here:
    Building a GPU workstation for visual effects with AWS | AWS Compute Blog

  3. Send your server the required events to initiate the rendering (path to scene file, desired start and end frames, desired resolution, output format, etc.). Let's assume the renderer exports a sequence of PNGs (via Puppeteer); see the sketch after this list.

  4. Once the frames are rendered, you can convert the resulting sequence to an MP4 (on the same instance if you wish):
    Using ffmpeg to convert a set of images into a video
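For step 3, a rough sketch of what I mean (`window.totalFrames` and `window.seekToFrame` are hypothetical hooks your scene page would need to expose so the animation can be stepped deterministically):

```ts
// Hedged sketch for steps 3-4: drive the page frame by frame and dump PNGs.
// window.totalFrames / window.seekToFrame(i) are hypothetical hooks your
// scene page would need to expose to step the animation deterministically.
import puppeteer from "puppeteer";

async function dumpFrames(sceneUrl: string): Promise<void> {
  const browser = await puppeteer.launch({
    headless: true,
    args: ["--use-gl=egl"], // ask Chromium for GPU-backed GL where available
  });
  const page = await browser.newPage();
  await page.setViewport({ width: 3840, height: 2160 });
  await page.goto(sceneUrl, { waitUntil: "networkidle0" });

  const total: number = await page.evaluate(() => (window as any).totalFrames);
  for (let i = 0; i < total; i++) {
    // Ask the page to render exactly frame i, then capture the viewport.
    await page.evaluate((n) => (window as any).seekToFrame(n), i);
    await page.screenshot({
      path: `frames/frame_${String(i).padStart(6, "0")}.png`, // mkdir frames/ first
    });
  }
  await browser.close();
}
```

Step 4 is then a one-liner on the same box, something like `ffmpeg -framerate 60 -i frames/frame_%06d.png -c:v libx264 -pix_fmt yuv420p out.mp4`.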

Admittedly, I have not tried these steps myself, but I'm about to embark on this journey. Let us know your findings, and if I have any success, I will do the same.

Regards,
Anupam

3 Likes

Yeah, lots of uncertainty without a known working reference. For horsepower, rather than jury-rigging GPU access or messing with cloud instances, I was just going to throw unused cores from my Linux hypervisor farm at it. I can feed it 16+ Xeon cores easily, maybe a lot more if I move some VMs around. I'm not worried if it runs slowly as long as it produces the final MP4 at high resolution and full FPS. I just don't know how to get the Puppeteer build hinted at in the docs to pump out a working per-frame dump of images, then stitch all those thousands of images together into a single MP4 along with the audio. It felt good to finish the scene animation, and then I hit this wall getting it into a recorded video. Maybe somebody has worked this out before so I can avoid reinventing the wheel. A journey ahead indeed…

Are you attempting to record specific user actions or a series of animated events?

Perhaps you could automate sessions of OBS (Open Broadcaster Software) Studio to do screen recordings.

https://obsproject.com/

Here's an example on Windows of auto-starting a streaming session; I'm sure one could start a recording the same way.

Any kind of browser rendering, regardless of whether it gets recorded via canvas or an external utility, is still going to be an issue. I need to figure out how to get off-box rendering to a video with audio. It looks like rendering frames server-side with Puppeteer might work, and ffmpeg would suffice for image assembly, but all my various voiceover audio tracks would be lost and would have to be rerecorded as a single long monologue that likely won't line up with the video. Even if I threw out a lot of my detail work and tried for higher FPS at higher resolution, I don't think a browser is going to pull this off. Does anyone know if it's possible to use one of these render-on-a-server recipes but save to video with audio, instead of image renders and audio loss?

Yes, this can be done, but it’s not trivial. The best solution depends heavily on how complex your content is.

Puppeteer is a good solution to automate the browser, and you'll need some way to spin up processes and queue them, as well as a pipeline to process the data. There are several ways to save the video: you can dump PNGs for every frame, record the video straight from the browser, or even stream it in real time. They all have gotchas for a production-quality implementation.
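For example, recording straight from the browser is only a few lines; a minimal sketch, with the canvas id illustrative:

```ts
// Minimal sketch of the "record straight from the browser" option, assuming
// a Babylon scene is already rendering into `canvas`.
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;

const stream = canvas.captureStream(60); // capture at ~60 FPS
const recorder = new MediaRecorder(stream, {
  mimeType: "video/webm;codecs=vp9",
  videoBitsPerSecond: 40_000_000, // high bitrate to reduce graininess
});

const chunks: Blob[] = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  // Download the WebM; convert to MP4 afterwards with ffmpeg.
  const url = URL.createObjectURL(new Blob(chunks, { type: "video/webm" }));
  const a = document.createElement("a");
  a.href = url;
  a.download = "capture.webm";
  a.click();
};

recorder.start();
// ... let the animation play, then call recorder.stop() ...
```

The gotcha with this one: MediaRecorder samples in real time, so when rendering can't keep up, frames get dropped, which is likely the graininess and misalignment you saw.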

If you are just adding audio later (such as a soundtrack) and you don't have strict timing requirements, it's not difficult in any of these cases. If audio is linked to events, as in a game, things are harder. If you can't render your scene in real time, things get more complicated, and I'd go with a separate process to render the audio.

I’ve done similar things before. It can be done but it’s not a pleasant or quick implementation. You can ping me if you want more details.

1 Like

Here is something that may help

I have weeks of work invested in a Babylon scene, so I wasn't thinking of trying to ramp up and pivot to Three. I haven't yet found a good way to do this offline/server render that keeps the presentation audio tracks intact and aligned, nor is the render-and-reassembly process trivial, for what seems like a simple requirement: render outside a browser at an arbitrary resolution and FPS. The best solution I've found is that I know somebody with a seriously overkill gaming machine, including a $3K NVIDIA board, who offered to do the render for me at FHD and a full 30+ FPS in a browser. It seems like a kludge to throw brute force at something that should just be a software feature, but I might have to go that route. It would be nice if Babylon could do an offline render like this and even record virtually losslessly to MP4/H.264/AAC. Wish-list item…

@Cedric, could Babylon Native help with this?

Yes, Babylon Native can help. It's possible to render offline and save successive frames into an .mp4, and saving individual frames as .png is possible as well.

@JasonS can you elaborate on the BabylonJS features you are using in your scenes? Can you share some scripts/data so I can test it?

1 Like

Interesting idea. I didn't know Babylon Native could do this kind of offline render or save straight to MP4. I wonder if it could keep the presentation audio intact as well. I thought that was all still at the alpha/vaporware stage. I'm not doing anything I'd consider taxing to the renderer, like complex shadows, fog, or extensive texture work, and I certainly don't have 100K-strong clouds of clones or sprites in motion. It's a proprietary presentation, almost like slides with audio, and I probably don't have more than a few thousand mesh objects total across all the scenes combined. I get close to 30 FPS at times approaching full HD on a powerful gaming laptop, but this browser-based WebGL rendering really drags, and the quality of the WebM video recorder supported in most browsers is terrible. I think I need to look into Babylon Native. I'll go read up on it today.

2 Likes

@JasonS Babylon Native doesn’t support sound (playback or record). This might be a concern for you.

1 Like

Ouch. That is a show-stopper that would likely result in a mess of inserting audio tracks in some video editor, like I'd have to do with offline server rendering of individual frames. Maybe Native is not where I need to render this. Is it unsupported as in on the roadmap but not yet implemented, or just not in the cards?

For Plan A, I can try a brute-force beefy machine with a browser render, shooting for just FHD 1920x1080 at 30 FPS with an external video-recording utility outside the browser; Plan B is server-side frame renders with an ffmpeg splice and a video editor to add audio. It just feels like, with something as cool as Babylon, there should be a better way to do this.

1 Like

Jason, sorry for being unclear. I linked the Three example just to show a viable approach for streaming and capturing pixel data from a canvas over WebSockets. GitHub - Jam3/gl-pixel-stream: streaming gl.readPixels from an FBO is the core piece, but the one I linked has Electron, budo, and the sockets all set up.
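The core idea, minus the library, is just a readback from the WebGL context after each frame; a minimal sketch (plain WebGL, not gl-pixel-stream's actual API):

```ts
// Read the framebuffer back with gl.readPixels right after the frame
// renders (or create the engine with preserveDrawingBuffer: true).
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const gl = canvas.getContext("webgl2")!; // returns the context Babylon created
const w = gl.drawingBufferWidth;
const h = gl.drawingBufferHeight;

const pixels = new Uint8Array(w * h * 4);
gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// `pixels` is bottom-up RGBA: flip the rows, then ship each frame over a
// WebSocket or pipe it into an encoder process.
```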

1 Like

OK. I'm not sure how to put the pieces together for recording a framebuffer pixel stream from the canvas. Is there a working example of doing something like this with Babylon to look at?

Blake sent me an idea to use a loop to render each frame client-side, so maybe I won't have to settle for low res and 30 FPS, but that still goes down the path of assembling a video from thousands of frames with something like ffmpeg and then figuring out how to add the per-slide audio tracks, which might have to happen in a video editor. I don't really care how long the render takes (flashbacks of rendering network-navigation visualizations in POV-Ray two decades ago). I might just take up the offer to browser-render in real time on an overkill machine first, to see how that goes before embarking on what seems like a bigger adventure down the keyframe image-assembly path.

1 Like

Building an MP4 from the frames with ffmpeg is very easy. You'll need considerable disk space, but that's the easiest part of your problem. If you handled POV-Ray 20 years ago, this is trivial. If I were you, that's what I'd do: control the delta time in your render loop to guarantee the frame rate you want instead of following the wall clock, output the images at any resolution you want (since the render can take as long as it needs), and then merge it all with ffmpeg. You can write a tiny web service to save the images from the browser, or run the whole thing in Electron.
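A minimal sketch of that fixed-timestep idea with Babylon, assuming your animations are keyed at 60 frames per second; `uploadFrame` is a hypothetical helper for whatever save service you write:

```ts
import { Engine, Scene, Camera, Tools } from "@babylonjs/core";

// `engine`, `scene`, `camera` come from your existing setup; `uploadFrame`
// is a hypothetical helper that POSTs each PNG to a tiny save service.
declare const engine: Engine;
declare const scene: Scene;
declare const camera: Camera;
declare function uploadFrame(index: number, dataUrl: string): Promise<void>;

const FPS = 60;
const TOTAL_FRAMES = FPS * 180; // 3-minute presentation

async function renderOffline(): Promise<void> {
  engine.stopRenderLoop(); // take over from the realtime loop

  for (let i = 0; i < TOTAL_FRAMES; i++) {
    // Step every animation group to this exact frame, so playback speed is
    // decoupled from however long the frame takes to render.
    // (Assumes animations are keyed at 60 frames per second.)
    for (const group of scene.animationGroups) {
      group.goToFrame(i);
    }
    scene.render();

    // Render a 4K still through a render target and hand it off.
    const dataUrl = await Tools.CreateScreenshotUsingRenderTargetAsync(
      engine, camera, { width: 3840, height: 2160 });
    await uploadFrame(i, dataUrl);
  }
}
```

Stitching is then one command, something like `ffmpeg -framerate 60 -i frame_%06d.png -c:v libx264 -crf 12 -pix_fmt yuv420p out.mp4`.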

Regarding audio, here's what I'd do: save the audio events from your 3D app into an array. This should be an easy change in your software, and you'll end up with an array like `[{audio: "ping.wav", start: 0.16}, {audio: "ping.wav", start: 0.66}]` and so on. You can then mix those clips in with ffmpeg; it's probably easier than adding them in a video editor.
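A hedged sketch of that last step, building the ffmpeg arguments from the event array (file names are illustrative; adelay shifts each clip to its start time, amix layers them, and the video stream is copied untouched):

```ts
import { spawn } from "node:child_process";

type AudioEvent = { audio: string; start: number }; // start in seconds

function buildFfmpegArgs(video: string, events: AudioEvent[], out: string): string[] {
  const inputs = events.flatMap((e) => ["-i", e.audio]);
  const delays = events
    .map((e, i) => {
      const ms = Math.round(e.start * 1000);
      return `[${i + 1}:a]adelay=${ms}|${ms}[d${i}]`; // delay both channels
    })
    .join(";");
  const mix =
    events.map((_, i) => `[d${i}]`).join("") +
    `amix=inputs=${events.length}[aout]`;
  return [
    "-i", video, ...inputs,
    "-filter_complex", `${delays};${mix}`,
    "-map", "0:v", "-map", "[aout]",
    "-c:v", "copy", out,
  ];
}

// e.g.:
// spawn("ffmpeg", buildFfmpegArgs("video.mp4",
//   [{ audio: "ping.wav", start: 0.16 }, { audio: "ping.wav", start: 0.66 }],
//   "final.mp4"), { stdio: "inherit" });
```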

I'd be careful with the overkill-machine approach. You haven't mentioned what your bottleneck is, but it's probably the CPU, and I don't think an overkill machine will improve it much. Have you read about optimizing your scene?

4 Likes

Here is an example of a real-time browser render of a live concert inside our glTF model, with multiple video and hi-res audio streams.

We had no problems recording 4K video on a beefy machine or with an external video-capture device.

While there is definitely a great need for offline rendering from Babylon to video, there are still no ready-to-use solutions (even just for hi-res video with no sound). It would be nice to have such an extension!
There is a very interesting article about doing this with Three.js: Break Free of the Realtime Jail: Audio-modulated HD Video With Three.js - DZone

4 Likes

This could be a great way to do it: GitHub - w3c/webcodecs: WebCodecs is a flexible web API for encoding and decoding audio and video.
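A rough sketch of how it could fit (Chromium-only at the time of writing; the canvas id and bitrate are illustrative, and you'd still need a muxer library to wrap the encoded chunks into an MP4 container):

```ts
// VideoEncoder turns canvas frames into H.264 chunks; an MP4 muxer is
// still needed to produce the final container.
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;

const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // Hand each EncodedVideoChunk to an MP4 muxer, or buffer and save later.
  },
  error: (e) => console.error(e),
});

encoder.configure({
  codec: "avc1.640033", // H.264 High profile; check per-browser support
  width: 3840,
  height: 2160,
  framerate: 60,
  bitrate: 40_000_000,
});

// Call after scene.render() for each fixed-timestep frame:
function encodeFrame(frameIndex: number): void {
  const frame = new VideoFrame(canvas, {
    timestamp: (frameIndex * 1_000_000) / 60, // microseconds
  });
  encoder.encode(frame, { keyFrame: frameIndex % 120 === 0 });
  frame.close();
}
```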

1 Like

OK, folks! I finally managed to pull this off. The end result, which looks intact when I proof it, is here: What are NetCuro Managed Network Services (MNS)? - YouTube (a draft explainer presentation for a new start-up about to launch; the scene was weeks of in-my-spare-time work)

Here's my how-to contribution to the Babylon community, with step-by-step instructions! To avoid the pain of per-frame renders, splicing, and trying to align numerous voiceover tracks from the Google TTS API, I went with the brute-force real-time render. One of our other network engineers has an absolutely wicked gaming rig with a many-core AMD Ryzen and an RTX 3090 (I'm also an NVIDIA fanboy). We used the free OBS and ffmpeg (close all apps, then in an admin PowerShell run `choco install -y obs-studio ffmpeg` for you Chocolatey people) to do a high-performance, browser-canvas-only render direct to 4K MP4/H.264/AAC at 60+ FPS, then made downgraded versions with ffmpeg, like an FHD/30FPS version for YouTube. If you have a lesser GPU for mere mortals, you just have to lower your resolution and FPS targets to your performance reality.

First run OBS with nothing else eating resources, accept the first-time defaults, and open it full size on a big monitor. Then:

  1. File > Settings > Output:
    Recording Quality: Indistinguishable
    Recording Format: mp4
    Encoder: try Hardware (QSV) first; if that doesn't work for you, try Software (x264)
  2. Video: set Base and Output resolution both to 3840x2160 (most high-end GPUs should at least manage FHD 1920x1080) and set FPS to your requirement, e.g. 60 or 30. Click OK to accept the settings.
  3. Click the little speaker next to the gear on Mic/Aux at the bottom to mute it and exclude it as an audio source.
  4. Note the Start Recording button on the Controls bar (usually on the right), but don't start yet; just be ready to hit it. It's also the button you hit to stop the recording.
  5. Under Sources at the bottom, hit the "+" and add Browser > Create New > OK, width 3840, height 2160 (lower to suit your horsepower), and enter the URL of your animated scene, all ready to render.
  6. Get ready to be quick on start/stop. Hit Refresh below the browser window, then click Start Recording the moment the orbiting sprite and Babylon logo leave, and keep recording until the scene animation ends. OBS will crank for a short time, then save out the resulting MP4 file. It may take a couple of attempts to get the timing right and a clean, unlagged render; see how yours comes out. In the lower left you may get error messages during the recording; take note of them but keep going.

After verifying the golden 4K/60FPS master worked great, I made downgraded versions with ffmpeg, like this example:

```
.\ffmpeg.exe -i vid4k.mp4 -s 1920x1080 -c:a copy -r 30 vid1-fhd-30fps.mp4
```

That's how I got this real-time render to perform. Your mileage may vary. Objects in the mirror may be closer than they appear. Ask your doctor if it's right for you. Disclaimer… Disclaimer…

6 Likes

I'm aware this issue has already been resolved; I just wanted to throw in this library, which probably would have come in handy: GitHub - spite/ccapture.js: A library to capture canvas-based animations at a fixed framerate
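A minimal sketch with a Babylon render loop (ccapture.js hooks the browser clock so the animation advances a fixed 1/60 s per captured frame, no matter how long each frame takes to render; `animationDone` is a hypothetical end-of-scene flag):

```ts
declare const CCapture: any; // loaded via <script>, no bundled typings
declare const engine: import("@babylonjs/core").Engine;
declare const scene: import("@babylonjs/core").Scene;
declare let animationDone: boolean; // hypothetical end-of-scene flag

const canvas = engine.getRenderingCanvas()!;
const capturer = new CCapture({ format: "webm", framerate: 60 });

capturer.start();
engine.runRenderLoop(() => {
  scene.render();
  capturer.capture(canvas); // grab the canvas for this fixed timestep
  if (animationDone) {
    engine.stopRenderLoop();
    capturer.stop();
    capturer.save(); // prompts a download of the captured video
  }
});
```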

1 Like