I wouldn’t have the dev skills for that. But, you know what? I actually thought of this. I mean, how awesome would it be if we were able to generate real-time visuals from an audio (or video) stream (whether from a library or generated by the AI)? How many applications for ‘entertainment’, pre-, on- or post-? If you are, as you say, an audio addict, this is clearly the kind of project I would luv to participate in (and have someone like you onboard)… At this time, it remains no more than a rough idea… and an idea goes like a ‘fart’ in the wind: you can barely sense it, and next thing, the smell has gone.
That is absolutely true!
Here is my humble return toast - MetaDojo Feature Playground
Click to start the music.
Tiger animation - numeric keys 1-7
Move - WASD + Z + Space
@labris you’re hiding so much cool stuff of yours from us!
Today I came across your demo site: https://babylonpress.org/
I was playing with the Color Speech Recognition Game like a child!
You may play it in the secret room as well - https://babylonpress.org/test/room11/
What is funny is that if you open the EyeDropper Babylon.js Utility, you can use the EyeDropper from a WordPress site inside a completely different installation.
Or play Tic-Tac-Toe like here
WOW. I really enjoyed this!! It takes me back to audio visualization for music players like Winamp. I can spend hours watching them and being hypnotized. I feel like this is something mere mortals can attempt to work on with something like Babylon.js, which is one thing I like so much about open source. The GreasedLine is so cool.
Thank you for sharing all the source code inline in the HTML - the hand-keyed animations look tedious, but the result is impressive. Cheers
It looks better if you omit `includeInner: false` when calling `BABYLON.GreasedLineTools.GetPointsFromText`.
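For reference, here’s roughly how that call looks, a minimal sketch for a Babylon.js scene (the font URL is the sample typeface.json from the Babylon.js asset library, and the text and sizes are just illustrative). With `includeInner` left at its default of `true`, the inner contours (the holes in letters like “R”) are drawn too:

```javascript
// Load a typeface.json font (sample font from the Babylon.js assets).
const fontData = await (
  await fetch("https://assets.babylonjs.com/fonts/Droid Sans_Regular.json")
).json();

// Omit includeInner (defaults to true) so inner contours, e.g. the
// hole in the "R", are included in the generated point set.
const points = BABYLON.GreasedLineTools.GetPointsFromText(
  "MetaDojo", // text to convert to line points
  16,         // text size
  4,          // curve resolution
  fontData    // parsed typeface.json data
);

// Build the line mesh from the generated points.
const textLines = BABYLON.CreateGreasedLine("textLines", { points });
```

This runs in the browser inside an async context with an active Babylon.js scene.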
I was wonderin y the “R” had a circle missin, I thought the font was stylistic lmao
Here is my humble return toast
this felt like a fever dream gawd dang : O
and it is. Clearly nothing I would recommend. But I had to go this way since I essentially wanted to deliver. Without turning BJS into a video or audio editor, I believe there are a number of very welcome features we could implement to ease this process and let us create more ‘multimedia’ experiences. I’m sure there would be some amazing applications for it… but at this moment, as you say, it’s kind of ‘tedious’ (and not very sexy in terms of code)
How hard could this be…
Start with a list of desired audio features: which visual effects do you want to display, and to which audio features should they react?
In parallel, search for analysable audio features. What can we extract at all? This may also give new ideas for visual effects. Knowing some music theory probably helps.
Task 2 also informs the processing: can it be done live, with a delay, or via a build step?
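As a toy example of step 2, here’s a minimal sketch in plain JavaScript (the function name and thresholds are illustrative assumptions, not an existing Babylon.js API). It extracts one simple audio feature, short-term RMS energy, from raw PCM samples and flags “beats” where the energy jumps above a running average. The same function could run live on incoming buffers or offline as a build step over a whole track:

```javascript
// Compute per-window RMS energy from PCM samples and flag windows
// whose energy spikes above an exponential moving average.
function detectBeats(samples, windowSize = 1024, threshold = 1.5) {
  const energies = [];
  for (let i = 0; i + windowSize <= samples.length; i += windowSize) {
    let sum = 0;
    for (let j = i; j < i + windowSize; j++) sum += samples[j] * samples[j];
    energies.push(Math.sqrt(sum / windowSize)); // RMS of this window
  }

  const beats = [];
  let avg = energies[0] ?? 0;
  for (let w = 1; w < energies.length; w++) {
    if (energies[w] > threshold * avg) beats.push(w); // energy spike => "beat"
    avg = 0.9 * avg + 0.1 * energies[w]; // running average of recent energy
  }
  return { energies, beats };
}

// Usage: a quiet signal with one loud burst in the third window.
const quiet = new Array(4096).fill(0.01);
for (let i = 2048; i < 2048 + 1024; i++) quiet[i] = 0.8;
console.log(detectBeats(quiet).beats); // → [ 2 ]
```

A beat index like this could then drive whatever visual trigger you like (a GreasedLine pulse, an animation start, a light flash).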
If it were EZ, I would do it myself. More seriously, there’s already existing tech to trigger light shows and events from an audio track. It’s gone pretty far already, but (as far as I know) nothing with real-time 3D (unless I missed it). So, yes, I suppose there would still be pre-programmed triggers of some sort (not just entirely analyzed by the AI)… but then, maybe a random (with conditions) run of visuals and animations… something around these parts
Awesome stuff, congrats
Really made my morning, coming off a week of fall vacation into a boring grey Monday
And great music too
@mawa still gaining likes
These are shared. I’d say most of them belong to you
Welcome. I already have lots of badges I do not really deserve. I won’t be missing this one (out of 43)