Audio Engine v2 Feedback and Discussion Thread

Hey all, we’re getting serious about updating the audio engine for the next release (see blog post) and we want your feedback to make it the best it can be!

To get the discussion rolling, here are some key things we want to know:

  • What pain-points are you experiencing when working with the current audio engine?
  • Which of the existing features in the current audio engine are most important to you?
  • What new features would you like to see (or hear :slight_smile: ) in the updated audio engine?

Feel free to answer any or all of these questions and ask your own if you want. It’s an open discussion.


I’ll start by saying that while the current audio engine gets the job done, it leaves a lot to be desired. It is very basic and needs to be updated to the latest and greatest WebAudio API and JavaScript standards.

There are also a few pain-points I see people running into a lot, like:

  1. The unmute button and why it doesn’t seem to work right in some cases.
  2. The skipCodecCheck option is non-obvious. Why doesn’t the engine just assume it’s being given a valid audio file instead?
  3. Why does getting and setting some Sound and SoundTrack properties not work correctly when things are not done in exactly the right order?

There are more issues I’m aware of, but those are the ones I see most often.
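To make #3 concrete, the ordering trap looks roughly like this with the current engine (a sketch from memory, so exact behavior may vary by version):

```javascript
// Pain point 3: spatial setters are easy to call in the wrong order.
const music = new BABYLON.Sound("music", "music.mp3", scene, null, {
    autoplay: true,
    loop: true,
    // spatialSound: true, // forget this and the setPosition below is a no-op
});
music.setPosition(new BABYLON.Vector3(10, 0, 0)); // silently ignored

// Enabling spatialization first, then positioning, works as expected:
music.updateOptions({ spatialSound: true, maxDistance: 25 });
music.setPosition(new BABYLON.Vector3(10, 0, 0));
```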


So have at it! Don’t hold back! There’s no judgement here. All feedback is important and appreciated. Make your voice heard and help us make the audio engine as good as it can be!

9 Likes

@labris @aWeirdo @Pryme8 @gabrieljbaker @Dok11 @trusktr @Dad72

Don’t miss this!

I’ve seen you all asking sound questions and commenting on audio-related posts before, so any insights you have on this would be much appreciated.

4 Likes

Looking forward to it!

I’ll keep the topic in mind if something comes up :smile:

Edit;
The only request I had seems to already exist as soundTracks.
I never used it; I assumed it was for queuing sounds to play one after another :sweat_smile:
The joys of being multilingual

1 Like

Cool. Yeah, I don’t know why they’re called sound tracks. They’re really just buses, but you can’t send them to each other, so they’re pretty limited as buses.

Queues, I’m thinking, should be called “Sequence Containers”, and they’re definitely on the list of new features, along with other sound container types like “Randomize”, “Switch”, and “Blend” containers.
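For what it’s worth, you can fake a sequence container today by chaining `onEndedObservable`; something like this sketch (no such container class exists yet):

```javascript
// Rough emulation of a "Sequence Container" with the current API.
function playInSequence(sounds) {
    sounds.forEach((sound, i) => {
        const next = sounds[i + 1];
        if (next) {
            // When one sound ends, start the next one.
            sound.onEndedObservable.addOnce(() => next.play());
        }
    });
    sounds[0].play();
}
```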

The first thing I’m thinking of is good debugging functionality, especially for spatial sound.

1 Like

Sounds great @docEdub. I look forward to seeing how this develops. Are there any considerations being given to adding basic audio effects? I know it’s possible to route stuff around using web audio nodes, but it seems a little clunky. In-built effects would be far more accessible.
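For context, this is the kind of manual plumbing I mean (a sketch; it assumes the audio engine exposes `masterGain` and `audioContext`, and the exact wiring may differ between versions):

```javascript
// Manually inserting a low-pass filter into the master output chain.
const audioEngine = BABYLON.Engine.audioEngine;
const ctx = audioEngine.audioContext;

const lowpass = ctx.createBiquadFilter();
lowpass.type = "lowpass";
lowpass.frequency.value = 800;

// Rewire: master gain -> filter -> destination.
audioEngine.masterGain.disconnect();
audioEngine.masterGain.connect(lowpass);
lowpass.connect(ctx.destination);
```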

2 Likes

@docEdub
I think @Rory makes a good point about providing a few integrated effects chains. :smiley:

1 Like

Weirdness

There is something about the sound that I find strange: the ‘scene’ parameter in the constructor. It is useless, because if we have several scenes and we change scenes, the music from scene 1 also plays in scene 2.
Maybe the sound should really be assigned to a scene, so each scene could have its own music and, when we change scenes, that scene’s sound would play automatically instead of the previous scene’s.
Currently we have to delete or stop the sound of scene 1 to be able to play the sound of scene 2, otherwise both sounds play at the same time.
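To show what I mean, this is the kind of cleanup we have to do by hand today (a sketch, assuming the sounds were created on each scene’s default sound track):

```javascript
// Manual workaround when switching from scene1 to scene2: stop everything
// still playing on the outgoing scene's main sound track.
scene1.mainSoundTrack.soundCollection.forEach((sound) => sound.stop());
// ...then activate scene2 and start its own music.
```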


Functionality

Recording sound:
An important feature I would like to see is being able to record spatialized real-time audio when speaking into a mic. I’m thinking of spatialized voice chat. It’s something modern and would be of great use in networked games, conference rooms, and the like.

See this link for an example of what it produces:

SpatialAudioDemo-Spatialized.mp4


Improvement

It would be nice to have 2 types of parsers by default (horizontal and vertical…). It’s not the most important thing, but why not.

Otherwise I like all the features that currently exist. I hope they remain available.


Important current features

The most important features for me are spatialized sound (3D spatial sound), Sound Tracks, and attaching a sound to a mesh.

2 Likes

Absolutely! This would be a super-useful feature. We could add debug viewers for this similar to the skeleton viewer for bones which can be toggled on/off in code and/or the inspector.

1 Like

Yes! This is on my list of low-hanging features I’d like to see implemented for the next release. The WebAudio API already has effects nodes available. We just need to expose them in a way that’s easy to use with the audio engine.

Down the road I’d like to make it easier to add custom effects and procedural sound sources, too, which can be done with the WebAudio API’s AudioWorkletNode but is not as straightforward for users as it could be.
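For anyone curious, here’s the raw Web Audio version of a procedural source (standard API, nothing Babylon-specific), which shows why it isn’t beginner-friendly:

```javascript
// noise-processor.js - runs on the audio thread.
class NoiseProcessor extends AudioWorkletProcessor {
    process(inputs, outputs) {
        for (const channel of outputs[0]) {
            for (let i = 0; i < channel.length; i++) {
                channel[i] = Math.random() * 2 - 1; // white noise
            }
        }
        return true; // keep the processor alive
    }
}
registerProcessor("noise-processor", NoiseProcessor);

// Main thread (inside an async function): load the module and wire it up.
await audioContext.audioWorklet.addModule("noise-processor.js");
const noise = new AudioWorkletNode(audioContext, "noise-processor");
noise.connect(audioContext.destination);
```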

1 Like

This drives me crazy, too, and is one of the main motivators for making a new audio engine instead of trying to build on the old one. The new design will have a much clearer separation between the audio engine and the graphics engine.


Yes! This would be a great feature to have available, along with automatically switching to spatialization methods that use less CPU when needed.

I’m not sure what “parsers” means. Are you referring to splitting up the spatialization into horizontal and vertical components, and then enabling or disabling them individually instead of grouping them together?

All of the current features will exist in the new audio engine. The names of classes may change to more commonly used audio terms, but all existing functionalities will be there, and the docs will include info on how to update your code to use the new system.

This is important to us, too! …and we may have some opportunities to improve performance when multiple spatialized sound sources are used.

One of the features I’d like to add is the ability to smoothly transition from one spatial sound method to another based on distance from the listener. So, for example, the HRTF spatial panning method that works well at longer distances can be smoothly transitioned to a simple left/right panning method (or even mono) as the distance decreases. Would that be useful for you?
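In plain Web Audio terms the idea is something like this (purely illustrative, nothing like it exists in the engine yet; `source` stands for any node feeding the chain):

```javascript
// Crossfade between an HRTF panner and a cheap stereo panner by distance.
const hrtf = new PannerNode(audioContext, { panningModel: "HRTF" });
const cheap = new StereoPannerNode(audioContext);
const hrtfGain = new GainNode(audioContext);
const cheapGain = new GainNode(audioContext);
source.connect(hrtf).connect(hrtfGain).connect(audioContext.destination);
source.connect(cheap).connect(cheapGain).connect(audioContext.destination);

function updateSpatialMix(distance, nearRadius = 2) {
    // Close to the listener: mostly cheap panning. Far away: mostly HRTF.
    const t = Math.min(distance / nearRadius, 1);
    hrtfGain.gain.value = t;
    cheapGain.gain.value = 1 - t;
}
```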

For the analyser, I was talking about this:

https://doc.babylonjs.com/features/featuresDeepDive/audio/playingSoundsMusic#using-the-analyser
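The basic usage from that page looks like this:

```javascript
const analyser = new BABYLON.Analyser(scene);
BABYLON.Engine.audioEngine.connectToAnalyser(analyser);
analyser.drawDebugCanvas();
```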


This would be a sort of LOD for sound. Yes, I think it could be very useful and more coherent. I imagine 3D sound also costs a bit more in terms of performance.

Thank you for your answers. I think that with your intentions, the new audio engine is going in a very good direction.

2 Likes

One case, for example:

[image: a waterfall mesh]

Every point of this mesh should have the same sound volume, because the whole object produces the same sound effect.
It could probably be solved with custom code that moves the sound object to the point of the mesh closest to the listener, but it would be helpful if attaching worked for the whole mesh rather than for one point (its center of mass).
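The custom code I have in mind would look roughly like this (approximating the closest point with the mesh’s bounding box; `waterfallMesh` and `waterfallSound` are placeholder names):

```javascript
// Each frame, move the emitter to the point of the mesh's bounding box
// closest to the listener, so the volume stays constant along the mesh.
scene.onBeforeRenderObservable.add(() => {
    const listener = scene.activeCamera.position;
    const box = waterfallMesh.getBoundingInfo().boundingBox;
    const closest = BABYLON.Vector3.Clamp(listener, box.minimumWorld, box.maximumWorld);
    waterfallSound.setPosition(closest);
});
```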

And as for the idea of sound effects, that’s what I really want to have out of the box.

1 Like

In this case I would just add 2 or 3 sounds playing at the same time, or with a slight delay, to give a waterfall effect.
A sound attached to a mesh is especially useful for meshes that move, like the sounds of moving animals.
I’m not sure a sound can work across an entire mesh.
I would like to make this kind of waterfall too. Did you build it?

It depends on the situation. For example, in one scene you might control a human player character (one scale), and in another an RC toy car (another scale); each scale needs a different number of sound emitters.
But you can imagine other examples where multiple sound instances would not be acceptable: the sea, an electrified metal plate, hmm… a wall of speakers =)

Yes, it’s true. We’ll have to see if it’s possible. It could indeed be useful.

A simple debug mesh for spatial sounds would be quite nice. I find myself wrapping sounds inside sphere meshes a lot so I can quickly find them in a scene, but it would be nice to have this enabled out of the box.
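Something like this is what I end up writing by hand (a sketch; `emitterPosition` and `range` stand in for the sound’s position and max distance):

```javascript
// Wireframe sphere sized to the sound's range so the emitter is visible.
const debugSphere = BABYLON.MeshBuilder.CreateSphere(
    "soundDebug",
    { diameter: 2 * range, segments: 8 },
    scene
);
debugSphere.position.copyFrom(emitterPosition);
const debugMat = new BABYLON.StandardMaterial("soundDebugMat", scene);
debugMat.wireframe = true;
debugSphere.material = debugMat;
sound.attachToMesh(debugSphere); // keeps the sound and its marker together
```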

4 Likes

I’m fairly new to the audio engine, and so far, beyond what has already been said above, I don’t have much to add.

I am, however, facing an issue in my latest scene that’s related to video, that is, video plus audio. The fact is that standalone audio uses the audio engine whereas video (+ audio) doesn’t. This creates a discrepancy and an issue, because the audio from a video and the audio from the engine cannot be handled the same way. Limitations include: 1) the audio from a video does not show on the analyser; 2) many properties cannot be used the same way, e.g. the volume of an engine sound can be pushed above 1, while the audio from the video stream is limited to 0–1.

Since audio from a video is still audio, it would make sense to me if we could somehow create an audio object from the video stream (maybe as an option/parameter). This would let us work around the Safari limitation on creating audio from a stream and would let us control all audio in the scene from the audio engine.
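In raw Web Audio terms, I imagine something like this (a sketch; `videoTexture.video` is the HTMLVideoElement behind a Babylon VideoTexture, and Safari’s handling of media-element sources may still differ):

```javascript
// Route a video element's audio into the engine's master chain so it can
// be controlled (and boosted above 1) like any other engine sound.
const ctx = BABYLON.Engine.audioEngine.audioContext;
const videoSource = ctx.createMediaElementSource(videoTexture.video);
const videoGain = ctx.createGain();
videoGain.gain.value = 1.5; // above 1, which the raw video element can't do
videoSource.connect(videoGain);
videoGain.connect(BABYLON.Engine.audioEngine.masterGain);
```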

I know it’s a bit outside the current scope, but I thought it might be the time to ask… maybe :face_with_hand_over_mouth:

1 Like

Thanks, great feedback! Streaming media into the audio engine is a frequently requested feature, and it’s totally doable.

1 Like

I’m going to need this, too :slight_smile: …will probably be added sooner rather than later so I can debug the spatial audio features more easily.