Passing an ArrayBuffer to the Sound constructor erases its data (after playback)

Heyo! Not sure if this is a bug or “working as intended”, but it definitely surprised me and made me go on a bug hunt throughout my own code, and then the BJS code afterwards.


Because BJS seems to request files from disk (via XHR) every time a new Sound is created with a file URL parameter, I decided to use my own caching system. This loads the file once and stores it as an ArrayBuffer, which I then passed to the Sound constructor to avoid repeated and unnecessary disk I/O.
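For context, here is a minimal sketch of the kind of cache I mean (the `ResourceCache` name and the injected `load` function are illustrative, not BJS API; in my app `load` would be something like `fetch(url).then(r => r.arrayBuffer())`):

```javascript
// Minimal one-copy resource cache: each URL is loaded at most once and the
// same ArrayBuffer (promise) is handed to every caller until evicted.
class ResourceCache {
  constructor(load) {
    this.load = load;             // async (url) => ArrayBuffer
    this.entries = new Map();     // url -> Promise<ArrayBuffer>
  }
  get(url) {
    if (!this.entries.has(url)) {
      this.entries.set(url, this.load(url));
    }
    return this.entries.get(url);
  }
  evict(url) {
    this.entries.delete(url);
  }
}
```

The point of the design is that every Sound created from the same file shares one in-memory buffer instead of triggering another XHR.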

The first time, it plays the audio as expected, but the second time it does nothing. Looking into this, I found that the ArrayBuffer had been emptied, so there was no data left to play on the second attempt. I’ve traced it back to this call, which appears to be a WebAudio builtin.

Once my buffer reaches that call it is decoded and then emptied, which is unfortunate since my caching system doesn’t expect any user of the resources to irrevocably consume them (all operations should be idempotent to ensure the same resource buffer can be re-used until evicted from the resource cache).

Is this just a limitation of WebAudio and I have to copy the buffer before passing it to BJS or should this be something BJS takes precautions against? Either way, it likely should be documented. If it already is, my apologies, but I didn’t find anything.

As always, thank you for your time and your amazing work!

Actually, the sound should still be playable, because the AudioBuffer created from the data is the one Babylon keeps in its cache. Is that not what you experienced?

It would be great if you could provide a PG.

Apologies, maybe I was unclear. I mean that I can’t create new Sounds from the same buffer, because the first constructor has consumed it.

So if I have multiple sounds using my resource cache the second attempt of creating one will fail, because the resource I have cached now has an empty ArrayBuffer as its data instead of the contents of the file it originally read from disk.

It’s a bit difficult to explain in a PG, as it relies on the interaction of BJS and my application, but I tried here:

For now I’ve worked around this issue by copying the buffer before passing it to BJS, but that means when many sounds are played a lot of copies need to be made, and I would expect a single copy stored in memory to be sufficient (which is taken from my resource cache).
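Concretely, the workaround is just a defensive copy before each Sound is constructed. A sketch (the `copyForPlayback` name is mine; in my app the returned copy is what gets handed to the Sound constructor instead of the cached buffer):

```javascript
// Defensive copy: ArrayBuffer.prototype.slice clones the bytes, so only the
// copy is consumed/detached downstream and the cached original stays intact.
function copyForPlayback(cachedBuffer) {
  return cachedBuffer.slice(0);
}
```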

Sounds are generally very short-lived in my application but Resources are not [necessarily], so the idea was to simply pass the same buffer to each Sound constructor when a sound effect is to be played, while loading only one copy from disk (and only keeping one copy in memory, as well).

I would advise keeping your Sound in your cache instead of the buffer, since the buffer path is still slow (it needs to decode in order to recreate an AudioBuffer). Is that something you could do?

This way you’d get the best of both: a nice cache without a lot of data redundancy, and a sound playable at will.

That seems like it might work for music, but there are also “identical” sound effects that will need to be played simultaneously, potentially in large numbers. These would have to be separate BJS Sounds by necessity, all using the same audio files (and therefore cached Resource data), but with each instance having a different playback state, position or being attached to different meshes/sprites in the scene.

If I understand you correctly they would require separate decoded AudioBuffers no matter what I do? But even so, do they need separate (un-decoded) ArrayBuffers also? Seems like there’s some duplication and buffer copying operations either way, but if I put Sounds into the cache then there can only be one active sound playing at a time.

Perhaps I could cache the processed AudioBuffer separately and somehow use it to instantiate new sounds, circumventing both the ArrayBuffer-to-AudioBuffer decoding step and the disk-to-ArrayBuffer step? I’m not sure if Sounds require their own buffer to handle playback or if a pointer to the cached (and decoded) AudioBuffer would suffice. From the code, it looks like this isn’t possible, but I could be wrong.

Yup, exactly that. Let me dig into the API; here’s what you could do on your side:

Engine.audioEngine.audioContext.decodeAudioData(audioData, (audioBuffer) => {
    // Cache audioBuffer for later use
});

// Somewhere else, once the buffer is ready
const sound = new Sound("game", null, scene);
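Building on that, the decode-once part could be sketched generically like this (`makeDecodedCache` and the injected `decode` are illustrative names, with `decode` standing in for the promise form of `audioContext.decodeAudioData`; none of this is BJS API):

```javascript
// Decode each resource at most once and cache the decoded result; every
// later Sound created for the same key reuses the cached decode.
function makeDecodedCache(decode) {
  const decoded = new Map(); // key -> Promise<decoded result>
  return (key, rawBuffer) => {
    if (!decoded.has(key)) {
      decoded.set(key, decode(rawBuffer));
    }
    return decoded.get(key);
  };
}
```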

Actually, @DarraghBurke raised a great point offline :slight_smile:

If your setup is fully identical, you could simply keep one instance of the sound in cache and clone it when needed with sound.clone(); this will share the audioBuffer internally.

Also creating a pool of those to prevent extra Garbage Collection would be cool :slight_smile:
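For the pooling suggestion, a tiny generic pool could look like this (sketch only; in practice `create` would wrap `sound.clone()` on the cached instance):

```javascript
// Tiny object pool: reuse released instances instead of allocating new ones,
// which avoids GC churn from many short-lived sounds.
class Pool {
  constructor(create) {
    this.create = create; // () => new instance, e.g. () => cached.clone()
    this.free = [];
  }
  acquire() {
    return this.free.length > 0 ? this.free.pop() : this.create();
  }
  release(obj) {
    this.free.push(obj);
  }
}
```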


Thanks! I have some ideas for how this could be optimized now.

However, I stopped to think and realized that long before buffer copying (measured at approximately 0.1 to 0.25 ms per Sound, per frame) becomes a problem, the noise from having too many sound effects playing at once would likely be unbearable.

I may have to revisit this depending on how many sounds need to be played, but for now I’ll leave it as-is and see whether the overhead can simply be ignored in practice.
