Who will dare to switch to another tab while watching your demo?
I've watched it countless times and the timing was always accurate, without a flaw.
Only one issue popped up: sometimes, when the AudioContext is acquired automatically, the muted icon didn't disappear. I fixed it by hiding the icon when the start button is clicked.
I can give away a little secret here: I've started working on line joins and caps. GRL will support miter, bevel, and round joins, and round, square, and butt caps.
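For anyone curious about what's behind a miter join, here is a minimal sketch of the standard math (a hypothetical helper, not GRL's actual API): the miter length, in half line widths, is `1 / sin(theta / 2)` where `theta` is the angle between the two segments, and renderers fall back to a bevel when it exceeds a miter limit (Canvas 2D defaults to 10).

```javascript
// Hypothetical helper, not GRL's actual API: decide how the join at p1
// between segments p0->p1 and p1->p2 should be rendered.
function normalize(v) {
  const len = Math.hypot(v[0], v[1]);
  return [v[0] / len, v[1] / len];
}

// Miter length expressed as a ratio of the half line width: 1 / sin(theta / 2).
function miterLengthRatio(p0, p1, p2) {
  const d1 = normalize([p1[0] - p0[0], p1[1] - p0[1]]);
  const d2 = normalize([p2[0] - p1[0], p2[1] - p1[1]]);
  // theta is the angle between the segments: PI for a straight line,
  // approaching 0 for a hairpin turn (where the miter spike explodes).
  const dot = Math.max(-1, Math.min(1, d1[0] * d2[0] + d1[1] * d2[1]));
  const theta = Math.PI - Math.acos(dot);
  return 1 / Math.sin(theta / 2);
}

// Fall back to a bevel join when the miter spike would get too long.
function chooseJoin(p0, p1, p2, miterLimit = 10) {
  return miterLengthRatio(p0, p1, p2) > miterLimit ? "bevel" : "miter";
}
```

A straight line gives a ratio of 1 (no spike), a right angle gives √2, and a hairpin turn blows past the limit and bevels.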
One of my favorite features is the animatable dashes:
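As a rough sketch of the idea behind animatable dashes (hypothetical helper names, not GRL's actual API): the line parameter `t` in `[0, 1]` is split into `dashCount` dash+gap cells, `dashRatio` is the visible fraction of each cell, and animating an offset each frame makes the dashes flow along the line.

```javascript
// Hypothetical helper, not GRL's actual API: is the point at parameter t
// (0..1 along the line) on a visible dash?
function isOnDash(t, dashCount, dashRatio, dashOffset = 0) {
  // Position within the current dash+gap cell, shifted by the offset.
  const cell = ((t + dashOffset) * dashCount) % 1;
  const pos = cell < 0 ? cell + 1 : cell; // keep the modulo positive
  return pos < dashRatio;
}
```

Incrementing `dashOffset` a little on every frame (e.g. in a render loop) is what produces the marching-ants effect.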
Woohoo! There it goes. Share more of these and we'll make a dedicated 'unleashed creativity PG with greasedLine' topic on the forum. We'll see if we can hit the same number of views as the 'original' (buried and then revived) 'Examples from the Playground' topic.
I am. I'm also mostly aware of all the features I have NOT implemented.
So, what's the context here? Did you switch tabs / lose the context? That's the only case identified on my side (before delivering to @roland… only so that he can be made responsible for all the gaps, of course). Smart move, isn't it?
Maybe there is some confusion here: there is no issue on my side.
My question was:
I was just wondering whether you manually modelled the shapes (heart, fire) and triggered each one at a specific timestamp, at the exact moment the word is pronounced, or whether you somehow used some kind of AI (transcription + shape generator, etc.).
It's all manual: old-school designed and timed from markers (markers outside BJS, of course, since this feature is still pending). If we had it, we would be able to trigger events from the audio stream itself and so avoid all the problems when losing context or going async.
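The marker-driven approach can be sketched like this (an assumed design, not an existing BJS feature): a tiny track that fires callbacks once their timestamp has passed on an injected clock. In a real page the clock would be `() => audioContext.currentTime`, which keeps events in sync with the audio stream; here it is injected so the logic stands on its own.

```javascript
// Assumed design, not a BJS feature: fire events from timed markers
// against an external clock (e.g. the audio clock).
class MarkerTrack {
  constructor(clock) {
    this.clock = clock; // () => seconds elapsed
    this.markers = []; // { time, fire }, kept sorted by time
  }

  add(time, fire) {
    this.markers.push({ time, fire });
    this.markers.sort((a, b) => a.time - b.time);
  }

  // Call once per frame: fires every marker whose timestamp has passed,
  // even if the tab was throttled and several markers are overdue.
  update() {
    const now = this.clock();
    while (this.markers.length && this.markers[0].time <= now) {
      this.markers.shift().fire();
    }
  }
}
```

Usage would look like `track.add(2.5, showHeart)` and `track.update()` in the render loop; because the clock is the audio stream's own time, a throttled tab catches up instead of drifting.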
Well, apparently there is. You just shared the video. Obviously, this is not part of the experience.
Just do it
But be prepared to have your topic heavily flooded by all the old Python2Babylon projects I might recode using GL.
I have tons of fun Python projects using only a few OpenCV lines… So much stuff to recode in the Playground with GL!