Custom Audio Context

Is there a way to provide the sound engine with an AudioContext that has already been created elsewhere on our webpage before Babylon loads, to avoid creating a second context?

Similarly, is there a way to specify a custom MediaStreamAudioDestinationNode as the destination for the audio engine? We are trying to use this node to grab a media stream that can be sent back out via WebRTC.
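For reference, the plain Web Audio / WebRTC plumbing we have in mind looks roughly like this (a minimal sketch; the peer connection setup is elided):

const audioContext = new AudioContext();

// A MediaStreamAudioDestinationNode exposes whatever is routed into it
// as a MediaStream that WebRTC can consume.
const rtcDestination = audioContext.createMediaStreamDestination();

// Send the captured stream out over a peer connection.
const pc = new RTCPeerConnection();
for (const track of rtcDestination.stream.getAudioTracks()) {
    pc.addTrack(track, rtcDestination.stream);
}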

Thanks!

Hey, this is not yet possible, but I feel like it could be useful. Would you be interested in doing a PR to update that behavior?


Yeah, I would be happy to. It should not be too tricky, because it only impacts the initialization of the audio graph and the final destination everything gets routed to; everything in between on the audio graph should be able to function exactly as is.
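Something along these lines is what I have in mind (option names are placeholders, not the final API):

// Placeholder option names, not the final Babylon API.
interface AudioEngineOptions {
    audioContext?: AudioContext;   // reuse a context created elsewhere on the page
    audioDestination?: AudioNode;  // e.g. a MediaStreamAudioDestinationNode
}

function initAudioGraph(options: AudioEngineOptions = {}) {
    const ctx = options.audioContext ?? new AudioContext();
    const destination = options.audioDestination ?? ctx.destination;

    // The rest of the graph hangs off a master gain node exactly as before;
    // only its final connection changes.
    const masterGain = ctx.createGain();
    masterGain.connect(destination);
    return { ctx, masterGain };
}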

One other thing that might also be a neat feature: in the past, we have added the ability to inject custom Web Audio processing functions into some of our audio managers… Something like:

var music = new BABYLON.Sound("Music", "music.wav", scene, null, {
    loop: true,
    autoplay: true,
    customProcessing: myCustomProcessor
});

where:
function myCustomProcessor(sourceNode, destinationNode) {
    // custom Web Audio API processing injected into the graph
}

This adds the ability to take full advantage of the Web Audio API to create sound effects while still using the core sound engine.


I would not recommend using a ScriptProcessorNode to implement your custom processing feature, as it has been deprecated for almost 7 years now.

Maybe something can be done with an AudioWorkletNode, but not quite an injection, since it requires a separate file.
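For reference, the AudioWorklet flow looks like this (module and processor names are made up); the processor has to live in its own file, which is why a plain callback injection does not map onto it:

// Main thread (inside an async function):
await audioContext.audioWorklet.addModule("my-effect-processor.js");
const workletNode = new AudioWorkletNode(audioContext, "my-effect");
sourceNode.connect(workletNode).connect(audioContext.destination);

// my-effect-processor.js -- must be a separate file loaded via addModule():
class MyEffectProcessor extends AudioWorkletProcessor {
    process(inputs, outputs) {
        // Pass-through: copy each input channel to the matching output channel.
        const input = inputs[0];
        const output = outputs[0];
        for (let ch = 0; ch < input.length; ch++) {
            output[ch].set(input[ch]);
        }
        return true; // keep the processor alive
    }
}
registerProcessor("my-effect", MyEffectProcessor);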


Processor was probably a very poor choice of words on my part there… I should have said something more generic for injecting non-deprecated API nodes into the graph (convolution, filters, etc.).

Another option there would be to just have a sound constructor that takes an audio node as the source.
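Something like this (purely illustrative; this overload does not exist today):

// Purely illustrative: a Sound built directly from an existing audio node.
const filter = audioContext.createBiquadFilter();
const sound = new BABYLON.Sound("Filtered", filter, scene, null, { autoplay: true });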

Would be interesting to have a look at an example 🙂

Our working examples are in apps that require user accounts to log in… but the basic workflow looks like this:

private playSoundEffect(src: string) {
    const audioEl = document.createElement("audio");
    audioEl.src = src;
    // ...
    myAudioManager.addAudioElement(audioEl, 1, this.myCustomAudioEffect); // gain of 1
    audioEl.play();
}

private myCustomAudioEffect(sourceNode: AudioNode, destinationNode: AudioNode) {
    const ctx = sourceNode.context;
    // Whatever works... e.g. a band-pass filter to make it sound like it's on
    // an intercom, or a waveshaper to make it sound like an alien.
    const myCustomEffectNode = ctx.createBiquadFilter();
    myCustomEffectNode.type = "bandpass";
    const myCustomEffectNode2 = ctx.createWaveShaper();
    sourceNode.connect(myCustomEffectNode);
    myCustomEffectNode.connect(myCustomEffectNode2);
    myCustomEffectNode2.connect(destinationNode);
}


// In the audio manager

public addAudioElement(
    audioEl: HTMLAudioElement,
    gain: number,
    customAudioEffect?: (source: AudioNode, destination: AudioNode) => void
) {
    const audioSourceNode = this.audioContext.createMediaElementSource(audioEl);
    const gainNode = this.audioContext.createGain();
    gainNode.gain.value = gain; // apply the requested gain
    gainNode.connect(this.audioDestinationNode);

    if (customAudioEffect === undefined) {
        audioSourceNode.connect(gainNode);
    } else {
        // Let the caller splice its own effect nodes between source and gain.
        customAudioEffect(audioSourceNode, gainNode);
    }
}

OK. Just thinking that you might not even need specific code for a MediaStreamAudioDestinationNode if the sourceNode arg for that connection feature can be null.
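i.e. something like this (illustrative only):

// Illustrative only: with a nullable source, the same hook can be used just to
// tap the engine's output, e.g. for WebRTC.
function tapOutput(sourceNode: AudioNode | null, destinationNode: AudioNode) {
    const ctx = destinationNode.context as AudioContext;
    const rtcOut = ctx.createMediaStreamDestination();
    destinationNode.connect(rtcOut); // engine output now also feeds rtcOut.stream
}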

@Deltakosh I created the PR for this feature.


I like it! Just a few things to fix and we can merge.