Sync code execution with sound (markers)

Dears,
I’m new to the audio component in BJS, so forgive my ignorance. :face_with_hand_over_mouth:
I’m facing a use case where I need to perfectly sync code execution (through a timer, marker, or metadata) with the audio soundtrack. What’s the best approach for this?
The case is simple: we are featuring greasedLines through a circular audio analyzer. I want to inject changes at specific times (audio time). I thought of creating a timer and eventually using NME with a timer, but before going further I thought I would just ask… Hope your experience will enlighten me (once again :smiley:) and meanwhile, have ALL a great day :sunglasses:

To perfectly sync code execution with the audio soundtrack in Babylon.js, you have a few options:

  1. Timer Approach: Use setInterval or setTimeout to create a timer that triggers specific code at certain intervals. You can tie the timer to the audio playback time, for example by checking the current audio time every few milliseconds (or every frame) and executing code when it reaches your desired time points (a rough sketch follows this list).
  2. Audio Metadata Approach: If your audio file contains metadata, you can extract that information and use it to trigger specific code execution. For example, if your audio file has markers or cue points defined, you can listen for events triggered by these markers and execute code accordingly.
  3. Audio Analyzer Approach: If you’re using an audio analyzer, you can use the frequency data it provides to determine when specific events should be triggered. By analyzing the intensity or patterns of the audio frequencies, you can identify key moments in the audio and sync your code execution accordingly (see the analyzer sketch at the end of this answer).
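
For approach #1, here is a minimal sketch of what it could look like (assumptions: a recent Babylon.js version where BABYLON.Sound exposes a `currentTime` getter, an existing `scene`, and a placeholder file name and trigger time). The idea is to read the audio playback clock every frame instead of trusting a wall-clock timer:

```ts
// Sketch only: poll the sound's playback time each frame.
// If your Babylon.js version has no Sound.currentTime, the elapsed time can be
// derived from BABYLON.Engine.audioEngine.audioContext.currentTime instead.
const music = new BABYLON.Sound("music", "music.mp3", scene, null, { autoplay: true });

let nextTrigger = 12.5; // placeholder time point, in seconds
scene.onBeforeRenderObservable.add(() => {
    if (music.isPlaying && music.currentTime >= nextTrigger) {
        // inject the greased-line change here
        nextTrigger = Number.POSITIVE_INFINITY; // fire only once
    }
});
```

Because the playback time is re-read on every frame, a slow start or a dropped frame only delays the trigger by a frame; the rest of the sequence does not drift.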

Regardless of the approach you choose, you should be able to achieve the desired synchronization by tying your code execution to the audio playback time or analysis results. Just make sure to handle any potential delays or inaccuracies that might occur due to processing or network latency.
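
For approach #3, the usual Babylon.js analyser pattern could serve as a starting point (the FFT size, the bin index, the threshold, and what you do on a “beat” are all placeholders):

```ts
// Sketch only: read the frequency data every frame and react when a band crosses a threshold.
const analyser = new BABYLON.Analyser(scene);
BABYLON.Engine.audioEngine!.connectToAnalyser(analyser); // the audio engine must be enabled
analyser.FFT_SIZE = 64;   // 32 frequency bins
analyser.SMOOTHING = 0.8;

scene.onBeforeRenderObservable.add(() => {
    const frequencies = analyser.getByteFrequencyData(); // Uint8Array, one 0..255 value per bin
    if (frequencies[0] > 200) {
        // strong low-frequency content ("beat"): drive the greased lines here
    }
});
```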



Thanks, I will check it out tomorrow.


Hope you are well. Thx a lot for your time and reply. Your answer pretty much confirms my thinking.

Method #1 is the method I used for sketching / unleashing creativity :wink: The problem is that this method is not very reliable. Basically, after starting the timer there’s no control over whatever latency or unexpected event could occur. And even if I could record the ‘anomaly’ or ‘latency’, I wouldn’t know how to fix it on the fly, prevent a sequence from triggering, and so on.

Method #2: I have been told (by @roland :wink:) that it is old school :stuck_out_tongue_winking_eye: Might be, but it has proven reliable. Here it’s clearly the audio that triggers the event at a given time. The only thing is, I had no clue how to do this in BJS.
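
For reference, here is a hedged sketch of how that marker idea could look in BJS, reusing the `music` sound, `scene`, and the per-frame `currentTime` check from the answer above (the marker times and actions are made up for the example):

```ts
// Sketch only: a marker table compared against the audio clock; each marker fires once.
type Marker = { time: number; fire: () => void };

const markers: Marker[] = [
    { time:  4.0, fire: () => console.log("start the circular analyzer visuals") },
    { time: 17.5, fire: () => console.log("change the greased-line colors") },
    { time: 42.0, fire: () => console.log("trigger the finale sequence") },
];
let nextMarker = 0;

scene.onBeforeRenderObservable.add(() => {
    while (nextMarker < markers.length &&
           music.isPlaying &&
           music.currentTime >= markers[nextMarker].time) {
        markers[nextMarker].fire();
        nextMarker++;
    }
});
```

Since the markers are compared against the audio clock itself, it really is the audio that triggers the events, much like old-school cue points, only the cue list lives in your code instead of inside the file.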

Method #3 sounds sexy (from an ENG perspective :wink:) but I have the feeling it could add a level of undesired complexity. First, I don’t know enough about the analyzer, and then, even if I were able to isolate a given frequency to use as ‘the beat’, I’m not making something linear. During the track, events don’t necessarily happen on a beat or frequency; they differ according to, say, ‘a context’.
So I think I will leave this method for someone smarter than me to experiment with, and for a different use case. But it’s certainly an interesting idea :smiling_face_with_three_hearts: