I wonder what an implementation in Babylon would look like :O
it'll prolly need to run in a worker, which means data communication could be a bottleneck
Everything considered, it sounds like an amazing concept :P
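On the worker-communication worry: the copy cost can be avoided with transferables, handing ownership of the underlying ArrayBuffer to the other side instead of structured-cloning it each update. A minimal sketch, assuming a made-up 1-byte-per-voxel layout (not a Babylon API) and using a MessageChannel to stand in for the main-thread/worker channel:

```javascript
// Sketch: hand a voxel-occupancy buffer across a message channel without copying.
// The flat 1-byte-per-voxel layout is an assumption for illustration.
function makeGrid(nx, ny, nz) {
  return { dims: [nx, ny, nz], data: new Uint8Array(nx * ny * nz) };
}

function sendVoxels(port, grid) {
  port.postMessage(
    { type: "voxels", dims: grid.dims, data: grid.data },
    [grid.data.buffer] // transfer list: moves the buffer, no structured-clone copy
  );
}

const { port1, port2 } = new MessageChannel();
const grid = makeGrid(32, 16, 32);
sendVoxels(port1, grid);
// The sender's view is detached after the transfer: byteLength drops to 0.
console.log(grid.data.byteLength); // 0
port1.close();
port2.close();
```

The trade-off: the sending side loses access to the buffer, so you either double-buffer or have the worker transfer it back when done.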
Aren’t there already 7.1 (or higher) headsets available that would help the audio be more immersive? (I mean, the barrier simulation and all would not be covered by the headset alone)
I also wonder how much CPU the rays mentioned would take just to play a sound
Great concept. Don’t know how feasible it would be to make this work alongside the BJS render pipeline
I mean the barriers simulation and all would not be covered by the headset alone
that’s exactly what makes this so cool tho!
I also wonder how much CPU the so mentioned rays would take just to play a sound
the video mentioned it was done on a separate thread using a low-res voxel substitute of the scene, so it’s non-blocking and efficient; and according to other folks in the comments who implemented similar stuff, it doesn’t cost enough to become a problem!
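For a sense of scale: marching a ray through a low-res occupancy grid is a classic 3D DDA (Amanatides–Woo) walk, a handful of additions and comparisons per voxel. A rough standalone sketch of counting solid voxels between a source and the listener; the grid encoding (flat Uint8Array, 1 = solid) is an assumption, not from the video:

```javascript
// Sketch: 3D DDA walk through a low-res occupancy grid, counting the
// solid voxels crossed between a sound source and the listener.
function countOccluders(occ, [nx, ny, nz], from, to) {
  const dir = [0, 1, 2].map(i => to[i] - from[i]);
  const len = Math.hypot(...dir);
  if (len === 0) return 0;
  const d = dir.map(c => c / len);                 // unit direction
  const voxel = from.map(Math.floor);              // current voxel coords
  const step = d.map(c => (c > 0 ? 1 : -1));       // per-axis march direction
  const tDelta = d.map(c => (c === 0 ? Infinity : 1 / Math.abs(c)));
  const tMax = [0, 1, 2].map(i => {                // distance to first boundary
    if (d[i] === 0) return Infinity;
    const frac = d[i] > 0 ? Math.floor(from[i]) + 1 - from[i]
                          : from[i] - Math.floor(from[i]);
    return frac / Math.abs(d[i]);
  });
  const solid = (x, y, z) =>
    x >= 0 && y >= 0 && z >= 0 && x < nx && y < ny && z < nz &&
    occ[x + nx * (y + ny * z)] === 1;
  let hits = solid(...voxel) ? 1 : 0;
  for (;;) {
    const axis = tMax[0] <= tMax[1] && tMax[0] <= tMax[2] ? 0
               : tMax[1] <= tMax[2] ? 1 : 2;
    if (tMax[axis] > len) break;                   // reached the listener
    voxel[axis] += step[axis];
    tMax[axis] += tDelta[axis];
    if (solid(...voxel)) hits++;
  }
  return hits;
}

// Tiny demo: one "wall" voxel at x = 2 on a 4x1x1 grid.
const occ = new Uint8Array(4);
occ[2] = 1;
console.log(countOccluders(occ, [4, 1, 1], [0.5, 0.5, 0.5], [3.5, 0.5, 0.5])); // 1
```

Even a few hundred such rays per frame against a coarse grid is trivial next to rendering, which matches what the commenters reported.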
Great concept. Don’t know how feasible it would be to make this work along side BJS render pipeline
yea : O
Would it be sensible to borrow GI techniques like RSM (reflective shadow maps) to accelerate the computation?
I don’t know :O but it’s an interesting thing to try
light is a wave, and sound is a wave! so it makes sense that GI for light = GI for sound?
lol it’d be so sick if we could “render” sound like that xD