Hi, in our company we are working on a WebXR project that runs on mobile to show people how to prepare cocktails. We want to adapt the experience to run on HMDs, but to do that we need real-time fluid rendering, and we have two alternatives: building the project in Unity3D or testing with Babylon.js. Has anyone tested whether fluid rendering works in WebXR?
Hello and welcome to the community!
Babylon.js does have some fluid rendering capabilities built in, and yes, they work in WebXR. Your specific use case will determine whether it meets your needs, but it can do quite a bit.
This is the overview: Fluid Rendering Demos | Babylon.js Documentation
If you have specific questions or need specific features that you don't know how to implement, let us know!
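To give a feel for the API surface, here is a minimal sketch of how the fluid renderer is typically wired up (based on the documented entry points; the canvas id, texture path, and scene setup are placeholders for illustration):

```ts
import { Engine, Scene, ArcRotateCamera, Vector3, ParticleSystem, Texture } from "@babylonjs/core";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement; // placeholder id
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

const camera = new ArcRotateCamera("cam", 0, Math.PI / 3, 10, Vector3.Zero(), scene);
camera.attachControl(canvas, true);

// Any particle system can serve as the fluid's particle source.
const ps = new ParticleSystem("liquid", 2000, scene);
ps.particleTexture = new Texture("textures/flare.png", scene); // placeholder texture
ps.emitter = Vector3.Zero();
ps.start();

// Enable the fluid renderer and register the particle system with it;
// the particles are then drawn as a continuous fluid surface.
const fluidRenderer = scene.enableFluidRenderer();
fluidRenderer?.addParticleSystem(ps);

engine.runRenderLoop(() => scene.render());
```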
@Evgeni_Popov is the amazing mind behind the fluid renderer, so he can talk about its capabilities more extensively.
Note that it is really fluid rendering, not fluid simulation. Fluid simulation is used in some of the demos on the page linked by @Calsa for demonstration purposes, but it is not integrated into Babylon.
Feel free to ask any questions you want regarding the rendering part!
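To make the rendering/simulation split concrete, a rough sketch (this assumes the addCustomParticles entry point shown in the docs, so check the exact signature against your Babylon version; myFluidSimulationStep and scene are hypothetical stand-ins for your own code):

```ts
// You own the simulation: a flat array of particle positions that
// your own code updates every frame.
const numParticles = 1000;
const positions = new Float32Array(numParticles * 3);

// Babylon only owns the rendering of those positions as a fluid surface.
const fluidRenderer = scene.enableFluidRenderer();
fluidRenderer?.addCustomParticles({ position: positions }, numParticles);

scene.onBeforeRenderObservable.add(() => {
    // Hypothetical: your SPH solver, a baked animation, a physics engine, etc.
    myFluidSimulationStep(positions);
});
```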
Okay, and would the fluid rendering work on VR headsets, @Evgeni_Popov?
I have no idea, as I don't have any devices with those capabilities, but I would guess it should work?
Maybe @RaananW will be able to give some more info when he is back.
I haven't tested it, but I don't see why it wouldn't work. I'll try it later.
The fluid renderer only partly works in WebXR. There is an issue when using the fluid renderer with a camera that is not using the engine's render width and height (because of the initialize function of the fluid rendering target renderer).
@Evgeni_Popov - any tips or tricks to get the fluid renderer to work on two rig cameras?
Is there a way to get the width/height used by the rig cameras?
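One possible way, for what it's worth (a sketch; xrCamera stands for the WebXRCamera you get from the XR experience helper): each rig camera exposes a relative viewport, which Viewport.toGlobal can map to pixels against the current render size. The values are only meaningful while the XR framebuffer is bound, i.e. inside the render loop.

```ts
// Log each rig camera's render size in pixels (call from the render loop).
for (const rigCam of xrCamera.rigCameras) {
    const vp = rigCam.viewport.toGlobal(engine.getRenderWidth(), engine.getRenderHeight());
    console.log(`${rigCam.name}: ${vp.width} x ${vp.height}`);
}
```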
Given what you say, I assume the problem is in FluidRenderingTargetRenderer._initialize:

```ts
const depthWidth = this._depthMapSize ?? this._engine.getRenderWidth();
const depthHeight =
    this._depthMapSize !== null
        ? Math.round((this._depthMapSize * this._engine.getRenderHeight()) / this._engine.getRenderWidth())
        : this._engine.getRenderHeight();
```
So we would need to replace the calls to getRenderWidth() / getRenderHeight() with the width/height used by the rig camera?
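Hypothetically, such a change inside FluidRenderingTargetRenderer._initialize could look like this (a sketch, not a tested patch; it assumes this._camera is already set when _initialize runs, and reuses Viewport.toGlobal to get the rig camera's pixel size):

```ts
// Derive the base size from the camera's viewport when one is set,
// instead of always using the engine's render size.
const renderWidth = this._camera
    ? this._camera.viewport.toGlobal(this._engine.getRenderWidth(), this._engine.getRenderHeight()).width
    : this._engine.getRenderWidth();
const renderHeight = this._camera
    ? this._camera.viewport.toGlobal(this._engine.getRenderWidth(), this._engine.getRenderHeight()).height
    : this._engine.getRenderHeight();

const depthWidth = this._depthMapSize ?? renderWidth;
const depthHeight =
    this._depthMapSize !== null
        ? Math.round((this._depthMapSize * renderHeight) / renderWidth)
        : renderHeight;
```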
Come to think of it, the engine's render width and height should be correct if this is executed during the camera's render loop. But the result still looks very stretched, as if it is using the canvas's width instead of the camera's. I will need to debug why.
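A quick way to sanity-check that hypothesis (a sketch using the standard scene observable) is to log the engine's render size while each rig camera is being rendered:

```ts
// Fires once per camera per frame, right before that camera renders.
scene.onBeforeCameraRenderObservable.add((camera) => {
    console.log(camera.name, engine.getRenderWidth(), engine.getRenderHeight());
});
```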
Also a quick question: do you think it would make sense to support rig cameras directly? I mean that when passing a camera with rig elements, it would actually use the rig cameras and not the parent camera.
Hum, I can see I pass null instead of the camera when creating the render post process…

Can you try to pass this._camera instead and see if that helps? It's line 708 in Rendering/fluidRenderer/fluidRenderingTargetRenderer.ts.
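For reference, the kind of change being suggested (a simplified, hypothetical excerpt; the post process name, shader name, and the uniforms/samplers/samplingMode arguments are placeholders, as the real PostProcess constructor call in fluidRenderingTargetRenderer.ts has a longer argument list):

```ts
// The camera argument used to be `null`; passing the owning camera instead
// should make the post process follow that camera's viewport.
this._renderPostProcess = new PostProcess(
    "FluidRendering",        // post process name (placeholder)
    "fluidRenderingRender",  // fragment shader name (placeholder)
    uniforms,                // stand-in for the real uniform list
    samplers,                // stand-in for the real sampler list
    1,                       // options: same size as the target
    this._camera,            // was null
    samplingMode,
    engine
);
```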