Thank you @JohnK. : ) I will make that improvement.

Below is a short screen capture running in the browser…

We animate from a MASSIVE SCREENPLAY (finished in 2010-2015).

This first episode was chosen because we thought it would be easy. It wasn’t! But…

We would love to see anyone do something similar - easily.

We think CINEMATICS might help teens learn to code BABYLON, movies OR short cartoons.

Ah jeez, there I go again…

The little grey squares are how we edit the ANMPATH in BABYLON (at runtime).
Then we save the POS/ROT/META as an ANMZ object, for reRENDER (at runtime) from a SCRIPT, through the FRAMESTACKs, separated into SEQs. A Life-cycle Example. All Great Patterns!

Recorded with OBS, via the @Vinc3r and @PichouPichou chat.
Thanks for that link! It was an integral piece of a fully-open-source pipeline to produce 3D movies.
We detail that “3DPipeline” (for some to attempt and share improvements). If you like. Of course.

@Deltakosh -> FIRST LOOK. For Tic-Tok 2020. :partying_face:
UPDATE: the full-length short is in its long_list_improvements_phase NOW. Very exciting.
We are making many CONCEPTS to share back. Posted here as a rough draft, to be polished around Xmas.


Thank you. Creative~fuel

Just wondering, what happened to this?


Thank you for asking @Null - you bring this back from the dead.

The answer is: I cross-posted over to @labris’ work on DUDE-STORY.

LINK: DUDE STORY - Watch Tower - Episode 001

It contains an important addition: CINEMATIC-NAMING-CONVENTIONS.

We found great benefit in creating a LEXICON for describing Cinematic-Animations.

EXAMPLE 1: We call ALL animations -> ANMs.

TLDR; Purpose of the text below is to capture some of the many NAMING-CONVENTIONS, a concept we found essential in PROGRAMMATIC-CINEMATOGRAPHY.

Off the top of mind…

The number 1 (surprising) takeaway recently:


using Theater and Movie metaphors in short capitalized bits.

We arrive at a hierarchy of:

CURTAIN, SHOW, [Manifest, Module/Asset], EPIC, SEQ, FRAME, ANM.

We think of it as an extension (or extrapolation) off of BABYLON.scene.
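A minimal sketch of how that hierarchy might look as nested runtime objects. Everything beyond the thread’s own terms (field names, the `nx` namespace shape) is hypothetical:

```javascript
// SHOW -> EPIC -> SEQ -> FRAME -> ANM, as plain nested objects in a namespace.
// All field names here are illustrative guesses, not the actual format.
var nx = {};
nx.show = {
  curtain: { FIDUR: 1000, BLACKDUR: 500, FODUR: 1000 }, // curtain timings (ms)
  epics: [
    {
      name: 'EPIC1',
      seqs: [
        {
          name: 'introSEQ',
          frames: [
            { on: 0, anms: ['camPanANM'] },                          // FRAME holding one ANM
            { on: 0, anms: ['heroWalkANM'], meta: { trigger: 'door2SEQ' } } // META can TRIGGER
          ]
        }
      ]
    }
  ]
};
// At runtime, a FRAME is "armed" by flipping its flag:
nx.show.epics[0].seqs[0].frames[1].on = 1;
```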

Cinematography gets confusing quickly. As seen below… the UPPERCASE letters are how we simplify.

OVERVIEW: we needed a STANDARD way to say:


That language (we found) is essential to programming PRECISION … cinematography.

For the old guys… the concept is similar to … UML.

  • Simple conventions above are CAMs and FOC and HERO. We call all cameras CAMs.

Simplification Example of CAMS:

initFreeCAM(); initFollowCAM();

  • And frequent use of Focus Targets is simplified to FOC (short for focal point).

FOC was the confusing ANM that inspired the language.

We needed PRECISION ways to define and track and modify ANMs.

And it gets better!

The language enabled us to go into difficult territory and label extremely complex things… simply.

The most interesting… POSROT dynamic PATHS.

Objects we call POSROTS and POSANMs. Trust me, there is a lot to it… check it out!

  • POSROTPATHS - are JSON objects of position and rotation.

They can be very long, and ANM on SPEED. We’ve advanced them in many ways. All of which came as a surprise… every time (details below). In short, the following was unpredictable territory for us.
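To make that concrete, a hypothetical POSROTPATH shape, with SPEED advancing an index instead of a timeline. The real object in the project may differ; this is just the described idea (position/rotation records, truncated precision, optional META per point):

```javascript
// Hypothetical POSROTPATH: an array of pos/rot records with truncated values,
// plus optional META on a point (field names are assumptions, not the real format).
var carPATH = [
  { pos: { x: 0.000, y: 1.250, z: 0.000 }, rot: { x: 0, y: 0.000, z: 0 } },
  { pos: { x: 2.431, y: 1.250, z: 0.512 }, rot: { x: 0, y: 0.262, z: 0 } },
  { pos: { x: 4.970, y: 1.312, z: 1.733 }, rot: { x: 0, y: 0.524, z: 0 },
    meta: { trigger: 'hornANM' } }                   // META can TRIGGER another ANM
];

// SPEED (not DUR) drives playback: advance the index each (damped) tick.
var idx = 0, SPEED = 1;
function stepPATH(){
  idx = Math.min(idx + SPEED, carPATH.length - 1);   // clamp at the last point
  return carPATH[idx];
}
```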

POSROTS are VISUALIZED (VIS) (with colored line) and EDITED (box) by a single line of code

Anything can be edited like this:

namespace.edit.masterEditor(anyMeshOrPath); [inspired by gizmos]

Then at runtime, we follow a simple workflow to create many ANMs:

edit and PUBLISH POSROTPATH to console, then copy buffer.

paste the edited POSROTPATH into the code.

And comment out //masterEditor(path);

That is the workflow we use.
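A sketch of the PUBLISH step in that workflow - serialize the edited path as a paste-ready line of code, logged to the console. Function and namespace names (`publishPOSROTPATH`, `nx.pathz`) are invented for illustration:

```javascript
// Emit a paste-ready assignment to the console: copy it from the devtools
// console buffer, paste it into source, then comment out //masterEditor(path);
function publishPOSROTPATH(name, path){
  var code = 'nx.pathz.' + name + ' = ' + JSON.stringify(path) + ';';
  console.log(code); // PUBLISH: copy this line back into the code
  return code;
}

// Example: publish a tiny one-point path.
var published = publishPOSROTPATH('carPATH', [{ pos: { x: 1, y: 2, z: 3 }, rot: { y: 0 } }]);
```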

We extended this to work on RIBBON, PATH, and MASTER (position and rotation of a mesh).

  • The surprises:
  1. We had to “decompose path” because of too many points.
  2. We had to truncate the long precision numbers for shorter paths.
  3. Also, sometimes we want to trim out ROTs for straight ANMs, etc.
  4. Some FRAMES need a TRIGGER. So we have an easy way to trigger… any single frame.
  5. Meta objects on any ANM FRAME can TRIGGER any other ANM.

It is … fun.

PRINCIPLE: (everything precise and lightweight)

  • We also make ZONEs.

We make dynamic ZONES with a ZoneFactory.

In GameMode - ZONES often TRIGGER MOVIEs.


GAME-TO-MOVIE transition… (G2M)
And MOVIE-TO-GAME transitions switch back and forth.

There is also a few others. : )

The surprise with ZONEs?

Loading and unloading. Done per EPIC.

Principle: we don’t want a single zone taking up loop-space that isn’t being used.

So we have a ZONE manifest concept. Unload everything, then load the manifest, each EPIC.

We call it EPICINIT() and EPICEND()… init/clean up ZONES, HEROs, PROPs, etc.

Other surprises…


For ZONE-TRIGGERS (and FRAMES) - there is an important simplification concept: ONE-TIME.

PRINCIPLE: ONE-TIME, never fire any ANM twice.

We do this with a single line of code (simple state flag) on the (frame or) trigger object itself.

if (!thisFrame.init){ thisFrame.init = 1; startANM(); } //ONE-TIME

For sequencing animations… it is used often.

  • TIME. We call it DUR or SPEED.

PRINCIPLE: Simplifying time… is good.

We didn’t want every ANM to be based on DUR.
Instead, we emphasize: SPEED, TRIGGER, DONE, then as a last resort - DUR.

Because there are often multiple TIMEs, each easily confused (hard to name - magic tokens).
And we strive for PRECISION.

EXAMPLE: Curtain Fade out time, Curtain Black time, Curtain fade in time.

curtainFIDUR(), curtainBLACKDUR(), and curtainFODUR().

  • Second example of avoiding time. Most HERO SEQs use TRIGGERs.
    Surprise there… new movements usually occur - after HERO is DONE… talking.

So we see often, SEQ-ANM TRIGGERS on TXT.DONE. Not time.

  • And relative SPEEDs.

Slow Mo is cool…
… for that we prefer reduced SPEED (not reduced DUR).
Subtle concept that simplifies animations.



Loops that LOCKOUT are an easy way to STOP TIME.
LoopLockout - have many ANMs in a LOOP, and easily stop all of them like this…


if(!movieMode){return} //LOCKOUT

  • and 60 FPS is ensured.

TLDR? Yep. But PROGRAMMATIC-CINEMATOGRAPHY is a passion. So, for that person: I hope you try. I want to help you advance faster…


Programmatic-Cinematography is enhanced by a good NAMING-CONVENTION.

Interactive Movies in BABYLON are a certainty.


Can’t call it FILM.

Start with YOUR STORY. Then follow your JOURNEY.

Thank you for interest in BABYLON.cinematics (concept).




A simple NAMING-CONVENTION is the transition into programmatic~cinematography.

Because the names become functions in a namespace.


So many more, we try:




We work every weekend on these CONCEPTS.

Here is the cutting-edge:

With Irony, for 3D, same concepts - no film.

: )



If you are NOT watching the FORTNITE LIVE EVENT, Season 10…

Here is what happened:

The map just exploded knocking all the players out into space.

After a comet with rockets showered down.

And we were all sucked into a vortex.

“That is a fantastic shader!”, I said to my son.

And we are all staring at a BLACK HOLE, spinning in space.

After half an hour, we received “Rare Achievement”.

So we google to find out what is going on…???

Only to find out the servers are DOWN.

The whole family is waiting… for an hour now… for the next cinematic.

We think this is currently the BEST EXAMPLE of the future of media.

Except we tell the story with 3D~Web~Cinematics.

If by chance they choose to zoom through space, into a nebula, then down to a single planet… for the next story. That is the effect we are developing… between many, many, many, exoplanets.

Albeit without great shaders (yet).

Cross your fingers… will update whatever happens.



@labris, since it’s just you and I. We might as well talk. : ) Lol.

The LIVE EVENT - it was a massive FAIL.

The climactic BLACK HOLE… it stagnated for hours.

We grew bored and left. Never to return.

But with minimal effort… it could have been… easily cataclysmic!

One Shader - well within our abilities.
And they chose a spinner…

A FAIL to be sure - but, not without insight.

The WEBCINEMATICS - there is more to it than we give it credit for.

It is an untapped keg. There is something additional here.

What Fortnite is doing… is but a botched beginning!

And they dropped the ball today - with a massive THUD.


We can pipe unlimited visuals through the web - quickly and powerfully.

The only question is - what~to~publish???

Once you know what this is, with crystal~clear~focus… SeizeIt.

Look directly at your NORTH-STAR, and rocket~right~at-it.

So that you can make that vector - real.

Then I cannot pretend that the JOURNEY is easy. It is dreadful.
A dreadful drudgery of toil. EVERY DAY - every Sunday (all year) -
all the while… enlightening… every step.

Because of a deeper reason to choose it.

And I think… with every possibility that I am tragically flawed…
that given one moment to speak…
through such a powerful megaphone of cinematics on the web…
maybe, just maybe, the sense is that…

there is something_important_to_be_said.

This is what the FORTNITE LIVE EVENTS - continue - to miss.

And that, is worth the struggle.

: )


@aFalcon well I could say a lot, but for the next couple of weeks I will have to be short due to heavily loaded projects :slight_smile:

Media is the message; NEW media gives chances to NEW messages.
Or to implement old good messages in a new form.
3D Web is only in its beginning, but it already gives us infinite possibilities to communicate in a new way.
Before there were no tools for 3D experiences on every Web-connected gadget without additional applications.
Our flat screens and displays now can have additional dimension.
And quite soon 3D Web will become holographic, and we shall be able to create 3D models like real sculptors or artists - it will be another creative breakthrough… time for other forms of books, cinema, games etc…


Being quite old myself I sometimes find grasping new ideas quite slow. Given that I know nothing about film directing please forgive the incorrect use of words. Does the following accurately describe what you are developing a new BJS CINEMATIC scripting language for?

  1. You start with a cinematic idea, for example - In a quiet street Jack and Jill sit on a bench talking. As they talk a UFO passes across the sky. While it crosses the sky they look up, remain still for a moment and then run off.

The scene contains props that do not move such as the bench and buildings, with actors that do move, eg Jack, Jill and the UFO

  2. You begin to break this down into directed timed sections for the actors and for the cameras. In my own very simplistic way:

Film sequence takes 20 seconds.

0 secs for 4 secs camera A close up on J & J talking
5 secs to 7 secs camera A pans out
7 secs to 10 secs switch to camera B wide shot
8 secs to 20 secs UFO flies across the sky
9 secs to 10 secs J & J stop talking and stand up
10 secs to 15 secs J & J stand very still
15 secs to 20 secs J & J run
10 secs to 12 secs switch to camera A pans in
12 secs to 20 secs switch to camera B and track J & J
12 secs to 20 secs switch to camera B

  3. Construct props, actors, paths for actors and cameras (and lighting etc) in Babylon.js

  4. Produce a cinematic script in a language you are creating, based on the language of cinematography, that will produce the wanted directions that are in 2.

  5. Your script will be saved in a JSON file, read by Babylon.js, which will produce the movie.
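Purely as illustration - one way the timed breakdown above might look as a JSON cinematic script that round-trips through a file. Every field name here is invented; it is not the actual format under discussion:

```javascript
// Hypothetical JSON cinematic script for the Jack & Jill / UFO sequence.
// "at" is seconds into the 20-second sequence; field names are illustrative only.
var ufoSEQ = {
  name: 'ufoSEQ',
  durTotal: 20,
  frames: [
    { at: 0,  cam: 'A', shot: 'closeup', tgt: 'JandJ' },
    { at: 5,  cam: 'A', shot: 'panOut' },
    { at: 7,  cam: 'B', shot: 'wide' },
    { at: 8,  anm: 'ufoFlyANM', dur: 12 },
    { at: 9,  anm: 'standUpANM', tgt: 'JandJ' },
    { at: 12, cam: 'B', shot: 'track', tgt: 'JandJ' },
    { at: 15, anm: 'runANM', tgt: 'JandJ' }
  ]
};

// Saved to a .json file, then read back by the runtime:
var json = JSON.stringify(ufoSEQ);
var loaded = JSON.parse(json);
```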

If this is what you are doing then WOW what a big task and
a. is it just you?
b. if not how big is your team?
c. Is it a commercial project or will it be open source?



@JohnK and @labris

“3DWeb” and “Programmatic~Cinematography” and “Web Cinema”, and “MOVIE GAME BOOK APP”.

Public Domain words for the methodology. Please use.

Zoom into an “egg-shaped robot” zipping through space, this 2020.

End with an AMAZING-EXO-PLANET … on a single website.

An Epic Web Saga. Monthly(?) Like a comic book, but 3DWeb.

and we think you should build one too.

The answer is yes to comments above.

Yes we plan to OPEN-SOURCE and share~back… how it was done.

It is just a bunch of OBJECTS with FUNCTIONS in a LOOP.

There is a repeatable boilerplate that extends from SCENE. But mostly it is a DESIGN PATTERN.

You could do things… very differently.


Sequence frames (SEQ and FRAME), with ANMS inside, in a LOOP at runtime.

That is the key. Another…

REQUIREMENT: we found the ANMS needed to be highly COMPACTED or ATOMIC (modular), because in practice they tend to move around from FRAME to FRAME and sometimes SEQ to SEQ - before being finalized. So we design for that.

JKing - quite certain BETTER variations exist.

I am a SOLO~ARTIST (with help). Since YOU ask: we see much more for SOLO-ARTISTS with CREATIVE-CONTENT in the GIG-ECONOMY.

The purpose of those CAP WORDS - is to simplify the ANMS. Which we find to be naturally complex and easily confusing. So it is a NAMING-CONVENTION for functions and objects - at its core. To our surprise… it extends.

Inspired by UML. But entirely NECESSARY for ANMS. First to COMMUNICATE, but then…



See how YOUR complex SCENE can be SIMPLIFIED into a single readable sentence?

That is STEP 1.

STEP 2: We use that sentence to create the COMPRESSED ATOMIC ANM (that moves around to any FRAME). To EDIT them easily - that is STEP 3.

We try to limit TIME-TOKENS. Just like MAGIC-TOKENS. We find benefit in a PRINCIPLE, that says: Let ANMS interpolate SIMULTANEOUSLY, using DONE, ZONES, and TRIGGERS where possible.


nx.spaceZOOMSEQ[2] = {on:1};

That FRAME-TRIGGER came from a DONE function of a TXT, where some HERO… finished talking. And inside it is an ANM for something.
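A BABYLON-free sketch of that chain. Only `nx.spaceZOOMSEQ` and the `{on:1}` flag come from the post; the loop, the DONE callback, and the stand-in ANM are assumptions:

```javascript
// TXT.DONE -> FRAME-TRIGGER -> ANM, with the ONE-TIME flag guarding the fire.
var fired = [];
function startZoomANM(){ fired.push('zoom'); }   // stand-in ANM (hypothetical)

var nx = { spaceZOOMSEQ: [ { on: 0 }, { on: 0 }, { on: 0, anm: startZoomANM } ] };

// DONE callback of a TXT: the HERO finished talking, so arm frame [2].
// (The post assigns a fresh {on:1}; here we flip the flag to keep the anm reference.)
function heroTXTDone(){ nx.spaceZOOMSEQ[2].on = 1; }

// Called once per (damped) render tick: fire any armed frame exactly once.
function seqLoop(){
  nx.spaceZOOMSEQ.forEach(function(frame){
    if (frame.on && !frame.init){ frame.init = 1; if (frame.anm) frame.anm(); } // ONE-TIME
  });
}

heroTXTDone();          // HERO done talking...
seqLoop(); seqLoop();   // ...the ANM fires on the next tick, and only once
```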

That’s how we did it.

This is top of mind. Sorry. EXAMPLES later. Must move quickly (7 days a week). Animating eye movements (again).

I try to WRITE BETTER. So thank you for PATIENCE with (long) BRIEFS. They get better. Slowly. They are used to get the mind going - then I crank out tons of BABYLON (and BLENDER) code in between full-stack JS mentor sessions. 3 years soon. Very tired, but… very exciting too. Hard to explain and contain. Thanks.

Someday, as labris says... yes, we push MOVIE GAME BOOK APP to a website.

Then double-back and share~back, how it was done. So that anyone else can do their own.

BECAUSE that is what we would have wanted.

And CLAP when YOU do it BETTER.


We want everyone to have their OWN (successful) “WEBGARDEN”. Why not?

We want to see organic open-source “Web Cinematic Arts”,

BECAUSE we dream in …3DWeb

  • many people making Web Cinematics in BABYLON.

It will happen. : )



Cinematics means CHARACTERS.

I once had a friend who drew a hard line at “character design”. When I pressed him, he wouldn’t budge!

“Oh I don’t do character-design. But I would let you do it for me!”, he would say. : )

I thought about this for the last 5 years, and arrived at a PRINCIPLE:

Your HERO is the center FOCUS of your CINEMATIC - and you would outsource this?

CHARACTER is where I suggest you devote your most intense FOCUS.

Don’t block yourself with artificial barriers.

Instead, if you see an artificial barrier - that is the exact direction to GO BOLDLY.

We rush that direction, every weekend, all year. And that exhaustive vigor is what results in CHARACTER.


hero.lookFactory('lookUp')

hero.lookFactory('lookDwnFwd')

:eagle: : )



We are over the moon about today’s innovation of a SMARTCAM.

It has been in THEWORKS for some time.

Today it finally faded into focus. Opposite of prior designs. Simpler.

nx.initSmartCam = function(){ //SmartCam - keep hero in sight-.
  nx.scene.registerBeforeRender(function activeCamLoop(){ //Check hero TILT to raise CAM-.
    if(++nx.SmartCamLoopDamper%10!=0){return} //DECIDAMPER-.
    if(nx.scene.activeCamera.name != 'FollowCam1'){ nx.scene.unregisterBeforeRender(activeCamLoop); } //Auto Unload Loop-. (left side of this check was cut off in the post; activeCamera name assumed)
    if(nx.SmartCamFaceTime && nx.hero.isIdle()){ //console.log('RUNFACECAM');
    } else if (nx.hero.isIdle() && !nx.SmartCamFaceTime){ //console.log('IDLE');
      nx.SmartCamFaceTime=1; //console.log('STARTTIMER');
      setTimeout(function(){ //console.log('TIMEREND');
        if(nx.hero.isIdle() && nx.SmartCamFaceTime){ //console.log('SETFACECAM');
          //TODO: nx.initFaceCam();
          nx.camz.followCam.radius = 20;//25; //distance from tgt-.
          nx.camz.followCam.heightOffset = 8; //distance above-.
          nx.camz.followCam.rotationOffset = -120; //rotation around origin to FACE-.
          nx.camz.followCam.cameraAcceleration = 0.008; //amount cam moves-.
        }
      }, faceCamDUR); //delay value was cut off in the post; faceCamDUR is a placeholder-.
    }else if(nx.hero.rig.tiltSphere.position.y-nx.hero.rig.downSphere.position.y>5.5){ //console.log('UP');
      nx.camz.followCam.cameraAcceleration = 0.01; //0.007 //amount cam moves, 0 to avoid jank-.
      nx.camz.followCam.rotationOffset = 0;
      nx.camz.followCam.heightOffset = 22; //distance above: ground default-.
      nx.camz.followCam.radius = 44;
    }else if(nx.hero.rig.tiltSphere.position.y-nx.hero.rig.downSphere.position.y<4){ //console.log('DWN');
      nx.camz.followCam.cameraAcceleration = 0.04; //0.007 //amount cam moves, 0 to avoid jank-.
      nx.camz.followCam.rotationOffset = 0;
      nx.camz.followCam.heightOffset = 44; //distance above: ground default-.
      nx.camz.followCam.radius = 30;
    }else{ //console.log('FLAT');
      nx.camz.followCam.cameraAcceleration = 0.009; //0.007 //amount cam moves, 0 to avoid jank-.
      nx.camz.followCam.heightOffset = 12; //distance above: ground default-.
      nx.camz.followCam.rotationOffset = 0;
      nx.camz.followCam.radius = 20;
    }
  });
};


  • a BABYLON.FollowCamera, with a utility-belt:
  • when hero goes up, lowers camera down.
  • when hero goes down, raises camera up.
  • when hero is flat, zooms camera in-.
  • when hero is idle, initFaceCam();
  • dampens frame loops by 10.
  • when cam changes, unregisters loop

CINEMATICS means a large collection of CUSTOM~CAMZ…-.

:eagle: : )


Hi gang. It all sounds like scene.scheduler operations… to me.

Use it just like observers/actionManaging… scheduler.add/remove/register/unregister events, actions, triggers… all with a time factor. scene.timecode generator. :wink:

Camera anims/paths, mesh/light anims… all a different subject, really.

I guess I can’t understand the reason for all these capitalized fancy terms. Cinematics… in a BJS scene, is simply a time thing. Schedule this to start, then that, then wait for THIS actionManager to trigger, or THAT observer event, and let those do their stuff… but meantime… the scheduler is still ticking tocks… and doing new activities, and those could cascade into actions and observations.

The scheduler doesn’t care about callbacks at all… but it COULD… I guess. It could pause event-list execution… waiting for an observer or action/trigger to happen… then resume.

But it’s better to just let scene-time roll-on. It might be fun to use percentages. If your cinematic is scheduled to run 2 minutes total, then the events “on the scheduler”… are time-stamped to happen at some% of 2-minutes-total-run. :slight_smile: Fun.

Event Scheduler 1.0… which is really an empty BJS animation that ONLY contains animationEvents. :slight_smile:


Nice @Wingnut. I liked your early animation experiments. Glad to see you.

That is a good approach.

TLDR; We ended up with something different, and that is ok.

Sounds like a great experiment to try!

For anyone attempting this…

The challenge we had was in ordering many SEQUENTIAL and SIMULTANEOUS animations (ANMS).

And then moving them around without them breaking.


And how a new view on TIME began.


We renamed TIME, to duration (DUR).

Seeing that DUR is also SPEED, we use those two definitions frequently.

To my surprise … that began a replacement of TIME (for us).

We no longer needed TIMELINE or TIMESTAMPS. Just a bunch of sequences with


They are totally fun. Give it a try!

EXAMPLE: ZONES work really well for CAM-ANMS. :slight_smile:

Inspired by CUT~SCENES like in the movies (different in this context).

This approach makes TIME very flexible. It is not perfect. Pros and Cons for every approach.

Two challenges were 1) one-time-switch and 2) Pause.

ONE-TIME-SWITCH - guarantee ANMs are called once. Inspired by the old SINGLETON pattern.

Solution: a one-line-flag on FRAME (with comment) that looks like this:

if( frame[idx] && !frame[idx].init ){ frame[idx].init = 1; startANM(); } //one-time-trigger

ONE-TIME concept turns out to be a very helpful syntax. Used often. Unavoidable.

PAUSE is easier with TIMELINE. So if you need that a lot - probably TIMELINE is better for you. : )

We didn’t need timeline much.

So we use lots of TRIGGERS and ZONES. And sometimes DELAYS, but not often.

ALL OF THE ABOVE is another DESIGN~PRINCIPLE that guides us to try every approach in an experiment, and then to adapt to whatever works best.

It often leads us into the different and unconventional.

And that is ok. :slight_smile:

The importance of the ALL-CAPS naming-convention is to give us CLARITY and PRECISION.

We plan to document more in 2020. Inspired by DESIGN~PATTERNS and UML.

The purpose is to simplify animation complexities.

Probably no one adopts it, but even if it helps one person - that would be great!

It helps my team communicate, and that is the main point.

We would like to see many people build animated sequences of all types.

Learning from different approaches and inspired to try new things.



Nod, thx.

“zones and triggers” being… space-based, not time-based. nod.

Space-zone-testing can be a challenging thing, right? You either use an invisible zone-defining mesh (big invisible sphere surrounding the zone, and you test constantly for intersect)… or Vector3.Distance(), yes? Both require constant testing. mesh.onDistanceRangeObserver.add/remove? heh. (we wish)

onDistanceRangeObserver… hmm. Make it versatile enough to check both within distance range and exceed distance range? (onEnter/onExit).

Ahh, distance-checking (enter/exit zone)… I dunno if we have all the good core-tools for that stuff, yet. But Vector3.Distance() and mesh.intersectsMesh(otherMesh)… don’t use mesh.ellipsoid and ellipsoidOffset, which are the currently-installed zone-defining tools.
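The onEnter/onExit idea can be sketched without engine support: poll the squared distance each (damped) tick and fire once per boundary crossing. This is a hypothetical helper, not a BJS API:

```javascript
// Poll-based "onDistanceRangeObserver": fires onEnter/onExit once per crossing.
// Uses squared distance to avoid the sqrt inside Vector3.Distance-style checks.
function makeRangeObserver(getPos, center, radius, onEnter, onExit){
  var inside = false;
  return function tick(){
    var p = getPos();
    var dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
    var nowInside = (dx*dx + dy*dy + dz*dz) <= radius*radius;
    if (nowInside && !inside){ inside = true;  if (onEnter) onEnter(); }
    else if (!nowInside && inside){ inside = false; if (onExit) onExit(); }
  };
}

// Simulated hero walking through a radius-5 zone at the origin:
var hero = { x: 10, y: 0, z: 0 }, log = [];
var tick = makeRangeObserver(function(){ return hero; }, { x: 0, y: 0, z: 0 }, 5,
  function(){ log.push('enter'); },
  function(){ log.push('exit'); });
[10, 4, 3, 8].forEach(function(x){ hero.x = x; tick(); });
```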

MoveWithCollisions() is the only move-method that tests for mesh-to-mesh intersect on ellipsoids… and its onCollide handler is sort of hard-wired to an action (stop or redirect movement).

And moveWithCollision is impossible to use in standard BJS animation, and we have no stopAnimationOnCollision flag for our animations. So, ellipsoids and ellipsoidOffsets are pretty much worthless for space-zones… so far (other than their current job, which they do perfectly).

As long as you are happy with this mysterious yet-to-be-seen-in-playground thing, I’m happy for ya, aF. If it works good, and is easy to use/understand, and has great docs, it will automatically gain popularity over “time”.

aF… you say “we” and “my team” pretty often. Are you going to tell us who “we” is, and tell about “your” team? How much beer should I buy for the BJS-Cine playground roll-out party? :wink:



Perfection is not required. Just a general ZONE to fire off an animation sequence.

A custom solution from an old pattern (abstract factory) zoneFactory (config).

Mixed with some callbacks, and composable with other factories.

Principle: it is simple.

Colored zone boxes with opacity; attach the returned handle to editMaster(zone), to move them around easily.


//DYNAMIC-ZONEZ: {WHEN:above/below, DIMENSION:x/y/z, VALUE:100, trigger:function}
//TEMPLATE: nx.zonez.camZoneFactory({pos:{x:0,y:0,z:0}, dim:{h:0,w:0,d:0}, alpha:1, hit:function(){ debugger; } });
nx.zonez.camZoneFactory = function( config ){ //USAGE: {pos:{}, dim:{h:0,w:0,d:0}, alpha:1, color:{r:0,g:0,b:0}, hit:function(){} }
  // ZONE BOX DIMS //defaults-.
  if(!config.dim || !config.dim.h || !config.dim.w || !config.dim.d){ config.dim = {h:10,w:10,d:10}; }
  if(!config.pos){ config.pos = {x:0,y:0,z:0}; }
  if(!config.color){ config.color = {r:0,g:0,b:0}; }
  var zBox1 = BABYLON.MeshBuilder.CreateBox("zonebox", {height:config.dim.h, width:config.dim.w, depth:config.dim.d}, nx.scene);
  zBox1.position = new BABYLON.Vector3(config.pos.x, config.pos.y, config.pos.z);
  zBox1.visibility = config.alpha || 0.22;
  zBox1.material = new BABYLON.StandardMaterial("colorbox", nx.scene);
  zBox1.material.diffuseColor = new BABYLON.Color3(0, 0, 0);
  zBox1.material.specularColor = new BABYLON.Color3(0, 0, 0);
  zBox1.material.emissiveColor = new BABYLON.Color3(config.color.r, config.color.g, config.color.b);
  zBox1.dim = computeZoneBox(zBox1, config.dim.h, config.dim.w, config.dim.d);
  if(config.hit){ zBox1.hit = config.hit; }
  zBox1.config = config; //ability for local functions called on config obj of implementation-.
  nx.activeZonez.push(zBox1); //this is how to add a zone to the runtime loop (with damper).
  return zBox1; //helps to save a handle, for reference of the zone existing or not, and for the editor-.
};
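The runtime loop that consumes nx.activeZonez is not shown in the post; here is a guess at its shape, assuming computeZoneBox stores min/max bounds on `dim` (that shape is an assumption), with the ONE-TIME flag guarding the hit:

```javascript
// Hypothetical zone-checking tick: point-in-box test per active zone,
// firing each zone's hit() callback exactly once (ONE-TIME).
function zoneLoopTick(heroPos, activeZonez){
  activeZonez.forEach(function(z){
    var d = z.dim; // assumed {minX,maxX,minY,maxY,minZ,maxZ} from computeZoneBox
    var inside = heroPos.x >= d.minX && heroPos.x <= d.maxX &&
                 heroPos.y >= d.minY && heroPos.y <= d.maxY &&
                 heroPos.z >= d.minZ && heroPos.z <= d.maxZ;
    if (inside && z.hit && !z.init){ z.init = 1; z.hit(); } // ONE-TIME trigger
  });
}

// Fake zone (no BABYLON needed) to show the shape:
var hits = 0;
var zones = [{ dim: {minX:-5, maxX:5, minY:0, maxY:10, minZ:-5, maxZ:5},
               hit: function(){ hits++; } }];
zoneLoopTick({x:0, y:1, z:0}, zones); // enters zone: fires
zoneLoopTick({x:0, y:1, z:0}, zones); // still inside: ONE-TIME blocks refire
```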

Feel free to put in PG. You will see the massive namespace.

The concept is in there, for the person curious about web~cinematics to find.

The one curious person… is who this is for. : )



Another innovation was to see that 60FPS is waay more than needed for most RUNTIMELOOPs.

So we developed the concept of loop~damping: reducing to a minimum frequency.

Here was the result:

if(++nx.SmartCamLoopDamper%10!=0){return} //DECIDAMPER-.
if(++nx.SmartCamLoopDamper%100!=0){return} //CENTIDAMPER-.

DESCRIPTION: once every ten ticks, or 100 ticks… run logic… else escape.
Put dampers at the top of all RUNTIMELOOPS, for many SIMULTANEOUS ANMS to be performant.

Allow it to be RANDOM and it turns out to be fine.
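The damper pattern, isolated and runnable: out of 1000 ticks, the guarded body runs only 100 times, and the skipped ticks cost a single increment-and-modulo.

```javascript
// DECIDAMPER in isolation: run the body once every 10 ticks, else escape early.
var damper = 0, ran = 0;
function dampedLoop(){
  if (++damper % 10 != 0){ return } //DECIDAMPER: skip 9 of every 10 ticks
  ran++;                            // ...the real ANM/zone logic would go here
}
for (var t = 0; t < 1000; t++){ dampedLoop(); }
// ran is now 100
```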




  • Added 6 Surfing tricks: spinL, spinR, rollL, rollR, flipF, flipB (all with smooth landing and no camera glitch).

  • Completed “Surfing Physics” by adding EdgeDetectMode.

  • We can now add lots of Blender ANMS into the various MODES.

  • Jumping through walls is always a problem. Solution to that was a WallRay.

  • We detect how close you are getting and slow approach until BounceBackMode.

  • SmartCams were improved/simplified with clearInterval.

  • SIMPLE-SHADOW - improved this PG to not occlude on the ground.
    Babylon.js Playground

word to the wise - just confirmed that link has a pretty substantial memory leak.

Let it sit for 3 minutes and it drops FPS from 60 to 40.


I needed some part of this… so bringing it back for REFERENCE.

POSROTPATH surprises

  1. You probably want to RECORD A PATH (a few times) to make it easy to create complex PATHS. For this we attached it to our HERO, and RECORD IN THE BACKGROUND. A few nuances in doing that. You can startRECORD and clearRECORD and exportRECORD. Then the concept of PUBLISH, is when we apply that recording to code.
  2. Beyond that, the biggest surprise is not just the capability to VISUALIZE each colored LINE, but also to EDIT each individual point as well. We render little tri-state boxes +10y for each POINT.

Why would anyone need that??? We don’t need that…

Well, we found out the hard way. :slight_smile: We made all these surfing animations - PROUD - and then the fearless leader decided to double the scale of our halfpipe - oops! “Why are all the animation paths off??”, he asked. Word to the wise: if you ever change your racetrack… same problem. EDITING. It is just like WIDGETS - at industrial strength.

Example: it is important to distinguish between EDITS for each POINT(PNT) and EDITS for the entire PATH. For that we called it EDITMASTER and EDITPNT. Two separate editors initialized by a single factory function: editMaster( carPATH ). With the PNT EDITs generated inside it (click handlers, etc). Fully encapsulated. We also extended that factory to MESH and RIBBON… but that is a different story.

  3. When we RECORD a long path - like going around a full lap - it is a LONG ARRAY OF BIG NUMBERS. Too big! In two ways at once…

We noticed the precision was… waaay more than needed. Something like 0.00000000000001. So after some review we found that TRUNCATING to 3 decimal places worked fine for our cartoon. I’d bet 4 would work for you. Look for JANK. So we innovated a way to crop off all those digits in POS, ROT, and other META, to save space in that ANM~REC object. Surprise! Saved a ton of space, but the syntax for cropping numbers is funny! Multiply/Divide. Also our dev editor needed some warnings and line wrapping turned off. We also needed “Decomposition of Path POINTS”, btw.

If anyone knows a better TRUNCATE method I’d love to see it. I’ll post solutions for those curious.
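For the record, the Multiply/Divide crop described above, next to the standard `toFixed` alternative (which returns a string, so convert it back with `Number()`). Both are plain JS; pick whichever reads better:

```javascript
// Crop a float to 3 decimal places, two ways.
function trunc3(n){ return Math.round(n * 1000) / 1000; } // the Multiply/Divide trick
function trunc3b(n){ return Number(n.toFixed(3)); }       // toFixed alternative

// trunc3(1.23456789) and trunc3b(1.23456789) both give 1.235;
// trunc3(0.00000000000001) collapses to 0 - the precision noise disappears.
```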

  4. VISUALIZING, RECORDING, EDITING, and PUBLISHING - the result is a WORKFLOW. So you might want to approach it that way. That’s how we arrived at CINEMATICS. Adopt the full workflow and you can re-use it time and again to RECORD, EDIT, PUBLISH and TRIGGER many ANMS.
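The RECORD half of that workflow might be sketched like this. The function names follow the post’s startRECORD/clearRECORD/exportRECORD; everything inside them is assumed:

```javascript
// Background path recorder: sample the HERO's pos/rot each (damped) tick.
var rec = { on: false, path: [] };

function startRECORD(){ rec.on = true; }
function clearRECORD(){ rec.path = []; }
function exportRECORD(){ return JSON.stringify(rec.path); } // PUBLISH to console/clipboard

// Call from the render loop (behind a damper) while recording is on.
function recordTick(hero){
  if (!rec.on){ return }
  rec.path.push({
    pos: { x: hero.pos.x, y: hero.pos.y, z: hero.pos.z },
    rot: { y: hero.rot.y }
  });
}

// Simulated recording session with a fake hero:
startRECORD();
recordTick({ pos: { x: 1, y: 2, z: 3 }, rot: { y: 0.5 } });
recordTick({ pos: { x: 2, y: 2, z: 3 }, rot: { y: 0.6 } });
```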