Raytraced audio sounds pretty cool

I wonder what an implementation in Babylon would look like : O

it’ll prolly need to run in a worker, which means data communication could become a bottleneck
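just to sketch what I mean (the worker file name, message shape and field names here are all made up), transferring the voxel buffer instead of copying it is probably what would keep that traffic cheap:

```ts
// Hypothetical main-thread side of an audio raytracing worker.
// The file name, message shape and field names are all assumptions.
const audioWorker = new Worker("audioRaytracer.worker.js");

// Send a low-res voxel grid of the scene to the worker. Putting the buffer
// in the transfer list moves it instead of copying it, which is what keeps
// the main<->worker traffic from becoming the bottleneck.
function sendVoxelGrid(voxels: Uint8Array, listenerPos: Float32Array): void {
  const buffer = voxels.buffer as ArrayBuffer;
  audioWorker.postMessage(
    { type: "updateScene", voxels: buffer, listener: listenerPos },
    [buffer]
  );
}

// Results coming back are tiny (a few numbers per sound source),
// so receiving them is cheap compared to sending geometry.
audioWorker.onmessage = (e: MessageEvent) => {
  if (e.data.type === "audioParams") {
    console.log("per-source params", e.data.params); // e.g. { volumeL, volumeR, delay }
  }
};
```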

All things considered, it sounds like an amazing concept : P

Aren’t there already 7.1 or higher headsets available that would help the audio be more immersive? (I mean the barriers simulation and all would not be covered by the headset alone)
I also wonder how much CPU those rays would take just to play a sound :thinking:

Great concept. Don’t know how feasible it would be to make this work alongside the BJS render pipeline :heart_eyes:

I mean the barriers simulation and all would not be covered by the headset alone

that’s exactly what makes this so cool tho! :smiley:

I also wonder how much CPU those rays would take just to play a sound

the video mentioned it was being done in a separate thread on a low-res voxel substitute of the scene, so it was non-blocking and efficient, and according to some other folks in the comments who implemented similar stuff, it doesn’t take enough CPU to become a problem!

e.g.: PolyrayGameEngine/src/polyray/audio/DCDBREffect.java at master · GiveJavaAChance/PolyrayGameEngine · GitHub
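just to make the “low-res voxel substitute” part concrete in babylon terms, here’s a very rough sketch (the helper name and the AABB-only voxelization are my own assumptions, not how the engine above actually does it):

```ts
import { Scene, Vector3 } from "@babylonjs/core";

// Hypothetical helper: bake the scene down to a coarse occupancy grid using
// only world-space bounding boxes (very approximate), so the audio raytracer
// never has to touch real mesh geometry.
function voxelizeScene(scene: Scene, min: Vector3, max: Vector3, res: number): Uint8Array {
  const grid = new Uint8Array(res * res * res);
  const size = max.subtract(min);
  for (const mesh of scene.meshes) {
    const bb = mesh.getBoundingInfo().boundingBox;
    // Convert the mesh's world-space AABB into grid coordinates.
    const lo = bb.minimumWorld.subtract(min).divide(size).scale(res);
    const hi = bb.maximumWorld.subtract(min).divide(size).scale(res);
    for (let z = Math.max(0, Math.floor(lo.z)); z < Math.min(res, Math.ceil(hi.z)); z++)
      for (let y = Math.max(0, Math.floor(lo.y)); y < Math.min(res, Math.ceil(hi.y)); y++)
        for (let x = Math.max(0, Math.floor(lo.x)); x < Math.min(res, Math.ceil(hi.x)); x++)
          grid[x + y * res + z * res * res] = 1; // 1 = solid voxel that blocks/reflects sound
  }
  return grid;
}
```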

Great concept. Don’t know how feasible it would be to make this work alongside the BJS render pipeline

yea : O

Would it be sensible to borrow GI techniques like RSM (reflective shadow maps) to accelerate the computation?

I don’t know : O but it’s an interesting thing to try

light is a wave? and sound is a wave! so it makes sense that GI for light = GI for sound?

lol it’d be so sick if we could “render” sound like that xD

The DCDBR algorithm simply takes in raw audio + raytracing samples to produce the resulting audio, so you could technically bake the raytracing samples and use them for constant effects that don’t need dynamic changes. Furthermore, since the algorithm doesn’t care where the raytracing data came from (as long as it outputs the samples in the correct format, i.e. R/L volume and a delay), you could implement any raytracing technique you want and just feed the samples into the DCDBR. You can also chain multiple DCDBRs to achieve better performance for the same result.
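For the web folks here, a hypothetical TypeScript mirror of that interface might look something like this (only the R/L volume + delay sample format comes from the description above; all the names are made up):

```ts
// Hypothetical TypeScript mirror of the interface described above.
// Only the sample format (R/L volume + a delay) comes from the post;
// the names and shapes here are made up for illustration.
interface RaySample {
  volumeL: number; // left-channel gain contributed by this path
  volumeR: number; // right-channel gain contributed by this path
  delay: number;   // arrival delay of this path, in seconds
}

interface AudioEffect {
  // stereo audio in, processed stereo audio out
  process(input: Float32Array[], samples: RaySample[]): Float32Array[];
}

// Because the effect only cares about the samples, they can come from a live
// raytracer, a baked table, or any other source that produces this format.
function renderWithBakedSamples(effect: AudioEffect, dry: Float32Array[], baked: RaySample[]) {
  return effect.process(dry, baked);
}
```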

Here’s something you can do: one DCDBR takes the raw audio and applies a speaker effect (yes, it can do that too!), and then another DCDBR takes that audio and applies room acoustics to it (using raytracing or just baked samples).
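As a rough sketch of that chain, using the hypothetical interface above:

```ts
// Chaining sketch built on the hypothetical AudioEffect interface above:
// the first pass colours the source like a speaker, the second adds the room.
function speakerThenRoom(
  speaker: AudioEffect,
  room: AudioEffect,
  dry: Float32Array[],
  speakerSamples: RaySample[], // could be baked once for the "speaker" response
  roomSamples: RaySample[]     // raytraced live, or baked for static rooms
): Float32Array[] {
  const throughSpeaker = speaker.process(dry, speakerSamples);
  return room.process(throughSpeaker, roomSamples);
}
```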

I recently uploaded a video to my yt showcasing the game engine, and part of that showcases the DCDBR, albeit poorly done, since you can hear clicking due to not accounting for the Doppler effect, and the raytracer is slow, but it should still give you some idea of what it can do!

I’ll also be uploading another video in the near future about the DCDBR in detail, showcasing what it can/can’t do and potential optimizations, and hopefully giving ideas to those who know more about audio programming so they can take this further!

I implemented a budget version of this some time ago. Instead of raytracing I used pathfinding, and by getting the shortest path, I could calculate the perceived origin of the sound as well as its volume. Theoretically you could even use different materials, such as metal or rock, to alter the final sound.
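Roughly something like this, using Babylon's Vector3 for convenience (the names and the falloff curve are just placeholders, not my original code):

```ts
import { Vector3 } from "@babylonjs/core";

// Sketch of the pathfinding approach described above (names and the falloff
// curve are assumptions). Given the shortest path of waypoints from the
// listener to the sound source, the sound appears to come from the first
// corner of the path, and its volume falls off with the total path length.
function perceiveAlongPath(path: Vector3[]): { direction: Vector3; volume: number } {
  if (path.length < 2) return { direction: Vector3.Zero(), volume: 1 };
  let length = 0;
  for (let i = 1; i < path.length; i++) {
    length += Vector3.Distance(path[i - 1], path[i]);
  }
  const direction = path[1].subtract(path[0]).normalize(); // toward the first waypoint
  const volume = 1 / (1 + 0.05 * length * length);         // arbitrary inverse-square-ish falloff
  return { direction, volume };
}
```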

I abandoned it due to the very manual nature of plotting points around a game level. A navmesh might have been beneficial, but you still have to specify materials, outdoors vs indoors, corridors, windows, doors, etc.

Could be fun to implement the raytraced version, though

I recently uploaded a video to my yt showcasing the game engine

can’t just say that and not link the video xD

DCDBR actually sounds so hype tho :open_mouth:
I’ve rarely ever done anything w/ spatial sound, I’ll try my hand @ a smol web version maybe
I’ll be neck deep in your repo trynna understand stuff for quite a while ig :smiley:

I’ll also be uploading another video in the near future about the DCDBR in detail

can’t wait!
amazing work w/ polyray!