REQUIREMENT2: MAKE a JSON Object that runs a movie…


For anyone who wants to try, here is a GETTING STARTED object…

We call it the EPIC~OBJECT, and you can call it whatever you want. : )-.

var aEPIC = {};                                  // the EPIC container object
aEPIC.aSeqIdx = {};
aEPIC.aSeqIdx[10] = {on:1};                      //ANMETHODOLOGY: AUTO-START-FRAMESET - arm Sequence 10-.
nx.runCinematicSequence("FinalSequence");        //COOL PATTERN for FRAME-SET, run the Sequence-.


var aMovie = { 2: {epicID:2, name:"Scene2", initfn:function(){}, runfn:function(){}, endfn:function(){}} };

DESCRIPTION: Three simple Callback-Functions: init, run, and end. That is it! You can add anything else you need.
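To make that concrete, here is a minimal runnable sketch of the three-callback FRAME idea. The `tickFrame` helper and the `t` field are illustrative assumptions, not the project's actual API:

```javascript
// A minimal sketch of the FRAME pattern above (names besides initfn/runfn/endfn are illustrative).
// Each FRAME is keyed by number and carries three callbacks: init, run, and end.
var aMovie = {
  1: {
    epicID: 1,
    name: "Scene1",
    init: false,                             // ONE-TIME flag, set on first tick
    initfn: function () { this.t = 0; },     // one-time setup
    runfn:  function (dt) { this.t += dt; }, // called every tick
    endfn:  function () { /* cleanup */ }
  }
};

// One tick of the movie loop: initialize once, then run.
function tickFrame(frame, dt) {
  if (!frame.init) { frame.init = true; frame.initfn(); } // ONE-TIME init
  frame.runfn(dt);
}

tickFrame(aMovie[1], 16);
```

Calling `tickFrame` repeatedly runs `initfn` exactly once, then `runfn` on every tick.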



Why did we choose number-keys? (more later, if you like).

GOAL: One thousand ANMZ encapsulated (neatly) in aSCRIPT of SEQUENCE and FRAME.

We think that is cool!

:slight_smile: You sure get excited over the HTML canvas element, cameras, and deterministic lockstep.

I think most folks would rather interact with a scene… than simply watch one in movie-like form. That’s a movie I made. :slight_smile: Sort of. Based upon time, and that’s all that “counts”, right?

Hey @JCPalmer… is that butterfly thing you made… still around? Got a fresh URL? That has some… “sequencing” to it, right? I can’t remember the details of that. Might be related to the mysterious POV methods, too? My memory fails me… but it’s always good to hear about related systems, right aF?


REQUIREMENT3: Loop over an array that triggers animations encapsulated in objects.

Memento Pattern (the classic JavaScript Memento design pattern) - with a twist.

SEQUENCE & FRAME … is what goes inside. And still not down to the babylon ANMZ (yet).


A movie FRAME pattern in JS, for short-form, run-time-rendering SEQUENCES in BABYLON.

No example yet. But FRAMES are neat.

REQUIREMENT1: everything directed by the JSONSCRIPT.
REQUIREMENT2: MAKE one JSON Object that runs that movie…
REQUIREMENT3: Loop over the frames and trigger animations encapsulated as sequences.
REQUIREMENT4: And that is where hundreds of BABYLON ANMZ go.
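A hedged sketch of REQUIREMENT3: loop over the frames and fire the animations encapsulated inside them. The frame shape (`on`, `init`, `anm`) is an assumption for illustration, not the real CINEMATICS object layout:

```javascript
// Loop over a SEQ's frames; fire each armed frame's animation exactly once.
function runCinematicSequence(seq) {
  var fired = [];
  Object.keys(seq).forEach(function (key) {
    var frame = seq[key];
    if (frame.on && !frame.init) {  // ONE-TIME guard (see REQUIREMENT6)
      frame.init = true;
      frame.anm();                  // the encapsulated animation
      fired.push(frame.name);
    }
  });
  return fired;
}

var seq = {
  1: { name: "Fly",  on: 1, anm: function () {} },
  2: { name: "Land", on: 0, anm: function () {} }
};
var firstPass = runCinematicSequence(seq); // only "Fly" fires
```

Because each frame carries its own flag, re-running the loop is always safe: nothing fires twice.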

We did that…

Sharing the REQUIREMENT language might be clearer. Idk. It is very exciting.

So you are looking for something like scripted animations, including sound, camera movement, speaking characters, and maybe subtitles? Like in-game cinematics / cut-scenes?

Similar to the following, but maybe with more options and ability to load from a cinematic script file:

Sound! Camera! Action! :movie_camera: :dancer::musical_note:


There is this for the butterfly. This is the result of a process I put inside my animation system, QI. I do not think that is what he wants. Anyway, I have abandoned GIF in favor of high-quality, up-to-4K WEBM (VP8 codec) that gets perfectly synced with the merged sound tracks and converted to H.264 or VP9, using ffmpeg.

I am able to do this primarily because there is a master time control system inside of QI (no longer being published externally). That means it can tell the animations exactly what time they are supposed to “think” it is, and btw animations are ALL time based, not frame based, so 120 fps, 4K frames can be exactly generated. Probably at a rate of about 2-10 frames per second. Here is a scene, which @Wingnut thinks I should know how to embed, but I don’t :grin: (I am a programmer, not a web page cake decorator).

Again, this does not look like what is being asked for. I am actually beginning to think about something similar, but I am not working on that part yet. Even when I do, it will not be nearly as generic / databasey, partly because QI meshes come from a source code generator coming out of Blender. These meshes can request to be generated as a subclass of anything they want, as long as the class existed when the constructor runs.

While there is a QI.Mesh subclass of BABYLON.Mesh, I can also write sub-classes of QI.Mesh which can hold particular animations, abilities & other methods inside them. In a music analogy, I can have drummer, guitarist, and keyboard sub-meshes, then export out of Blender as any one of them. A lot of work, but I can pull off things others would not even attempt.


Yes. We did that. So, we are no longer looking. Finding.


Sorry for my prior dreadful answers. It takes me a LONG time to find better words. : )


QI? Not familiar. Pls share.

REQUIREMENT5: ANMZ are placeholders for many types of animations - BOTH key-frame and interpolation-based, and others. Atomic, mixable, self-descriptive, movable, composable - easily, within the FRAME structure.

REQUIREMENT6: ANMZ need the ability to synchronize - simultaneously and sequentially - AND guarantee one-time initialization, so as to always avoid one ANM running twice (when it shouldn’t)(surprise).

After 2 years - we find a set of PATTERNS that work fantastically - for our requirements.
We share back… revealing results soon.
Who knows: Maybe you like!?!

Follow a Hero Journey in BABYLON

a Babylon5 world in BabylonJS.


RESULTS first and CONCEPTS second? Ah! I am slow.


EXACTLY! Also spot on with that link for “SEQUENCES”. That was a starting point.

pls support and rain :heart: s on BABYLON CINEMATICS? :slight_smile:


There are no likes from me because, as far as I understand it, you and your team have created something called BABYLON.CINEMATICS and you have described methods and requirements. This sounds interesting but I have no real understanding of what you have achieved. Do you at least have a video demonstrating an example of what you have achieved?


Thank you @JohnK. : ) I will make that improvement.

Below is a short screen capture running in the browser…

We animate from a MASSIVE SCREENPLAY (finished in 2010-2015).

This first episode was chosen, because we thought it would be easy. It wasn’t! But…

We would love to see anyone do something similar - easily.

We think CINEMATICS might help teens learn to code BABYLON movies or short cartoons.

Ah jeez, there I go again…

The little grey squares are how we edit the ANMPATH in BABYLON (at runtime).
Then we save the POS/ROT/META as an ANMZ object, for reRENDER (at runtime) from aSCRIPT, through the FRAMESTACKs, separated into SEQs. A life-cycle example. All great patterns!

Recorded with OBS, via the @Vinc3r and @PichouPichou chat.
Thanks for that link! It was an integral piece of a fully-open-source pipeline to produce 3D movies.
We will detail that “3DPipeline” (for others to attempt and share improvements). If you like. Of course.

@Deltakosh -> FIRST LOOK. For TikTok 2020. :partying_face:
UPDATE: the full-length short is in the long_list_improvements_phase NOW. Very exciting.
We are making many CONCEPTS to share back. Here as a rough draft, to be polished around Xmas.


Thank you. Creative~fuel

Just wondering, what happened to this?


Thank you for asking @Null - you bring this back from the dead.

The answer is: I cross-posted over to @labris’ work on DUDE-STORY.

LINK: DUDE STORY - Watch Tower - Episode 001

It contains an important addition: CINEMATIC-NAMING-CONVENTIONS.

We found great benefit in creating a LEXICON for describing Cinematic-Animations.

EXAMPLE 1: We call ALL animations -> ANMs.

TL;DR: the purpose of the text below is to capture some of the many NAMING-CONVENTIONS, a concept we found essential in PROGRAMMATIC-CINEMATOGRAPHY.

Off the top of my mind…

The number 1 (surprising) takeaway recently:


Using Theater and Movie metaphors in short, capitalized bits.

We arrive at a hierarchy of:

CURTAIN, SHOW, [Manifest, Module/Asset], EPIC, SEQ, FRAME, ANM.

We think of it as an extension (or extrapolation) off of BABYLON.scene.

Cinematography gets confusing quickly. As seen below… the UPPERCASE words are how we simplify.

OVERVIEW: we needed a STANDARD way to say:


That language (we found) is essential to program PRECISION … cinematography.

For the old guys… the concept is similar to … UML.

  • Simple conventions above are CAM, FOC, and HERO. We call all cameras CAMs.

Simplification Example of CAMS:

initFreeCAM(); initFollowCAM();

  • And frequent use of Focus Targets is simplified to FOC (short for focal point).

FOC - was the confusing ANM that inspired the language.

We needed PRECISION ways to define and track and modify ANMs.

And it gets better!

The language enabled us to go into difficult territory and label extremely complex things… simply.

The most interesting… POSROT dynamic PATHS.

Objects we call POSROTS and POSANMs. Trust me, there is a lot to it… check it out!

  • POSROTPATHS - are JSON objects of position and rotation.

They can be very long, and ANM on SPEED. We’ve advanced them in many ways. All of which came as a surprise… every time (details below). In short, the following was unpredictable territory for us.

POSROTS are VISUALIZED (VIS) (with a colored line) and EDITED (with boxes) by a single line of code.

Anything can be edited like this:

namespace.edit.masterEditor(anyMeshOrPath); [inspired by gizmo]

Then at runtime, we follow a simple workflow to create many ANMs:

Edit and PUBLISH the POSROTPATH to the console, then copy it from the buffer.

Paste the edited POSROTPATH into the code.

And comment out //masterEditor(path);

That is the workflow we use.
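A sketch of how that PUBLISH step might look, assuming a POSROTPATH is a plain JSON array of position/rotation keyframes (the shape and the `publishPath` helper are illustrative, not the project's actual API):

```javascript
// A POSROTPATH as a plain JSON array of position/rotation keyframes (assumed shape).
var posRotPath = [
  { pos: [0, 0, 0],         rot: [0, 0, 0]   },
  { pos: [1.2345678, 0, 0], rot: [0, 0.5, 0] }
];

// "PUBLISH" the edited path: serialize it and log it so it can be copied
// from the dev-tools console and pasted back into the source
// (after which the //masterEditor(path) line is commented out).
function publishPath(path) {
  var json = JSON.stringify(path);
  console.log(json); // copy this from the console
  return json;
}

var published = publishPath(posRotPath);
```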

We extended this to work on RIBBON, PATH, and MASTER (position and rotation of a mesh).

  • The surprises:
  1. We had to “decompose path” because there were too many points.
  2. We had to truncate the long-precision numbers for shorter paths.
  3. Also, sometimes we want to trim out ROTS for straight ANMs, etc.
  4. Some FRAMES need a TRIGGER. So we have an easy way to trigger… any single frame.
  5. Meta objects on any ANM FRAME can TRIGGER any other ANM.
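Surprises 1 and 2 can be sketched as two tiny helpers: one thins out ("decomposes") a path that has too many points, and one truncates the long-precision numbers. Both helper names are illustrative assumptions:

```javascript
// Keep only every Nth point of an over-dense path.
function decomposePath(path, keepEvery) {
  return path.filter(function (_, i) { return i % keepEvery === 0; });
}

// Round every pos/rot component to a fixed number of decimals,
// so the published JSON stays short.
function truncatePath(path, decimals) {
  var f = Math.pow(10, decimals);
  return path.map(function (p) {
    return {
      pos: p.pos.map(function (v) { return Math.round(v * f) / f; }),
      rot: p.rot.map(function (v) { return Math.round(v * f) / f; })
    };
  });
}
```

Trimming out ROTS for straight ANMs (surprise 3) would be a similar one-line `map` that drops the `rot` field.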

It is … fun.

PRINCIPLE: (everything precise and lightweight)

  • We also make ZONEs.

We make dynamic ZONES with a ZoneFactory.

In GameMode - ZONES often TRIGGER MOVIEs.


GAME-TO-MOVIE transition… (G2M)
And MOVIE-TO-GAME transitions switch back and forth.

There are also a few others. : )

The surprise with ZONEs?

Loading and unloading. Done per EPIC.

Principle: we don’t want a single zone taking up loop-space when it isn’t being used.

So we have a ZONE manifest concept. Unload everything, then load the manifest, each EPIC.

We call them EPICINIT() and EPICEND()… they init / clean up ZONES, HEROs, PROPs, etc.
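A minimal sketch of that per-EPIC manifest life-cycle (all names here, including `epicInit`/`epicEnd` and the zone strings, are assumptions for illustration):

```javascript
// The set of ZONES currently participating in the loop.
var activeZones = [];

// EPICEND: unload everything, so no stale zone takes up loop-space.
function epicEnd() {
  activeZones.length = 0;
}

// EPICINIT: clean slate first, then load only this EPIC's manifest.
function epicInit(manifest) {
  epicEnd();
  manifest.zones.forEach(function (z) { activeZones.push(z); });
}

epicInit({ zones: ["dock", "bridge"] });
```

The same init/end pair would also set up and tear down HEROs, PROPs, etc.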

Other surprises…


For ZONE-TRIGGERS (and FRAMES) - there is an important simplification concept: ONETIME.

PRINCIPLE: ONE-TIME, never fire any ANM twice.

We do this with a single line of code (simple state flag) on the (frame or) trigger object itself.

if (!thisFrame.init) { thisFrame.init = 1; startANM(); } // ONE-TIME …

For sequencing animations… it is used often.

  • TIME. We call it DUR or SPEED.

PRINCIPLE: Simplifying time… is good.

We didn’t want every ANM to be based on DUR.
Instead, we emphasize: SPEED, TRIGGER, DONE - then, if there is no other resort, DUR.

Because there are often multiple TIMEs, each easily confused (hard to name - magic tokens).
And we strive for PRECISION.

EXAMPLE: Curtain Fade out time, Curtain Black time, Curtain fade in time.

curtainFIDUR(), curtainBLACKDUR(), and curtainFODUR().
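As a worked example of the three curtain TIMEs, a sketch with assumed millisecond values (the field names follow the functions above; the numbers are made up):

```javascript
// Illustrative curtain phases, in milliseconds (values are assumptions):
// fade out (FO), hold black (BLACK), fade in (FI).
var CURTAIN = { foDUR: 500, blackDUR: 1000, fiDUR: 500 };

// The whole curtain is the three phases run back to back.
function curtainTotalDUR(c) { return c.foDUR + c.blackDUR + c.fiDUR; }
```

Naming each phase separately keeps the three TIMEs from becoming one confusing magic token.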

  • Second example of avoiding time. Most HERO SEQs use TRIGGERs.
    Surprise there… new movements usually occur - after HERO is DONE… talking.

So we often see SEQ-ANM TRIGGERS on TXT.DONE. Not time.

  • And relative SPEEDs.

Slow Mo is cool…
… for that we prefer reduced SPEED (not reduced DUR).
Subtle concept that simplifies animations.
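The reduced-SPEED idea can be sketched in a few lines: slow motion scales the rate at which an ANM's clock advances, instead of editing every DUR (the `advance` helper is an illustrative assumption):

```javascript
// Advance an ANM's local time by dt scaled by a relative SPEED.
// SPEED 1 = normal; SPEED 0.25 = quarter-speed slow-mo.
function advance(anm, dt, speed) {
  anm.t += dt * speed;
  return anm.t;
}

var slowMoANM = { t: 0 };
advance(slowMoANM, 16, 1);    // one normal-speed tick
advance(slowMoANM, 16, 0.25); // one slow-mo tick
```

Because the DUR never changes, the same ANM object works at any SPEED without re-authoring.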



Loops that LOCKOUT are an easy way to STOP TIME.
LoopLockout: have many ANMS in a LOOP, and easily stop all of them like this…


if (!movieMode) { return; } // LOCKOUT
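Expanded into a runnable sketch, the LoopLockout looks like this; `movieMode` and `ticks` are illustrative stand-ins for the real loop state:

```javascript
var movieMode = true; // single flag shared by every ANM in the loop
var ticks = 0;

function movieLoop() {
  if (!movieMode) { return; } // LOCKOUT: stops time for every ANM at once
  ticks += 1;                 // ...all ANMS would run here...
}

movieLoop();        // runs normally
movieMode = false;
movieLoop();        // locked out: nothing advances
```

One flag flip stops every animation in the loop, with no per-ANM bookkeeping.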

  • and 60 FPS is ensured.

TL;DR? Yep. But PROGRAMMATIC-CINEMATOGRAPHY is a passion. So, for that person: I hope you try. I want to help you advance faster…


Programmatic-Cinematography is enhanced by a good NAMING-CONVENTION.

Interactive Movies in BABYLON are a certainty.


Can’t call it FILM.

Start with YOUR STORY. Then follow your JOURNEY.

Thank you for interest in BABYLON.cinematics (concept).




A simple NAMING-CONVENTION is the transition into programmatic~cinematography.

Because they become functions in a namespace.


So many more, we try:




We work every weekend on these CONCEPTS.

Here is the cutting-edge:

With Irony, for 3D, same concepts - no film.

: )



If you are NOT watching the FORTNITE LIVE EVENT, Season 10…

Here is what happened:

The map just exploded knocking all the players out into space.

After a comet with rockets showered down.

And we were all sucked into a vortex.

“That is a fantastic shader!”, I said to my son.

And we are all staring at a BLACK HOLE, spinning in space.

After half an hour, we received “Rare Achievement”.

So we google to find out what is going on…???

Only to find out the servers are DOWN.

The whole family is waiting… for an hour now… for the next cinematic.

We think this is currently the BEST EXAMPLE of the future of media.

Except we tell the story with 3D~Web~Cinematics.

If by chance they choose to zoom through space, into a nebula, then down to a single planet… for the next story. That is the effect we are developing… between many, many, many, exoplanets.

Albeit without great shaders (yet).

Cross your fingers… will update whatever happens.



@labris, since it’s just you and I. We might as well talk. : ) Lol.

The LIVE EVENT- it was a massive FAIL.

The climactic BLACK HOLE… it stagnated for hours.

We grew bored and left. Never to return.

But with minimal effort… it could have been… easily cataclysmic!

One Shader - well within our abilities.
And they chose a spinner…

A FAIL to be sure - but, not without insight.

The WEBCINEMATICS - there is more to it than we give it credit for.

It is an untapped keg. There is something additional here.

What Fortnite is doing… is but a botched beginning!

And they dropped the ball today - with a massive THUD.


We can pipe unlimited visuals through the web - quickly and powerfully.

The only question is - what~to~publish???

Once you know what this is, with crystal~clear~focus… SeizeIt.

Look directly at your NORTH-STAR, and rocket~right~at-it.

So that you can make that vector - real.

Then I cannot pretend that the JOURNEY is easy. It is dreadful.
A dreadful drudgery of toil. EVERY DAY - every Sunday (all year) -
all the while… enlightening… every step.

Because of a deeper reason to choose it.

And I think… with every possibility that I am tragically flawed…
that given one moment to speak…
through such a powerful megaphone of cinematics on the web…
maybe, just maybe, the sense is that…

there is something_important_to_be_said.

This is what the FORTNITE LIVE EVENTS - continue - to miss.

And that, is worth the struggle.

: )


@aFalcon well I could say a lot but for the next couple of weeks I will have to be short due to heavily loaded projects :slight_smile:

Media is the message; NEW media gives chances to NEW messages.
Or to implement good old messages in a new form.
The 3D Web is only in its beginning, but it already gives us infinite possibilities to communicate in a new way.
Before, there were no tools for 3D experiences on every Web-connected gadget without additional applications.
Our flat screens and displays can now have an additional dimension.
And quite soon the 3D Web will become holographic, and we shall be able to create 3D models like real sculptors or artists - it will be another creative breakthrough… time for other forms of books, cinema, games, etc.


Being quite old myself, I sometimes find grasping new ideas quite slow. Given that I know nothing about film directing, please forgive any incorrect use of words. Does the following accurately describe what you are developing the new BJS CINEMATIC scripting language for?

  1. You start with a cinematic idea, for example - In a quiet street Jack and Jill sit on a bench talking. As they talk a UFO passes across the sky. While it crosses the sky they look up, remain still for a moment and then run off.

The scene contains props that do not move, such as the bench and buildings, along with actors that do move, e.g. Jack, Jill and the UFO.

  2. You begin to break this down into directed, timed sections for the actors and for the cameras. In my own very simplistic way:

The film sequence takes 20 seconds.

0 secs to 4 secs camera A close up on J & J talking
5 secs to 7 secs camera A pans out
7 secs to 10 secs switch to camera B wide shot
8 secs to 20 secs UFO flies across the sky
9 secs to 10 secs J & J stop talking and stand up
10 secs to 15 secs J & J stand very still
15 secs to 20 secs J & J run
10 secs to 12 secs switch to camera A, which pans in
12 secs to 20 secs switch to camera B and track J & J

  3. Construct props, actors, paths for actors and cameras (and lighting etc.) in Babylon.js.

  4. Produce a cinematic script, in a language you are creating based on the language of cinematography, that will produce the wanted directions that are in 2.

  5. Your script will be saved in a JSON file, read by Babylon.js, which will produce the movie.

If this is what you are doing then WOW, what a big task, and
a. is it just you?
b. if not, how big is your team?
c. is it a commercial project or will it be open source?



@JohnK and @labris

“3DWeb” and “Programmatic~Cinematography” and “Web Cinema”, and “MOVIE GAME BOOK APP”.

Public Domain words for the methodology. Please use.

Zoom into an “egg-shaped robot” zipping through space, this 2020.

End with an AMAZING-EXO-PLANET … on a single website.

An Epic Web Saga. Monthly(?) Like a comic book, but 3DWeb.

and we think you should build one too.

The answer is yes to comments above.

Yes we plan to OPEN-SOURCE and share~back… how it was done.

It is just a bunch of OBJECTS with FUNCTIONS in a LOOP.

There is a repeatable boilerplate that extends from SCENE. But mostly it is a DESIGN PATTERN.

You could do things… very differently.


Sequence frames (SEQ and FRAME), with ANMS inside, in a LOOP at runtime.

That is the key. Another…

REQUIREMENT: we found the ANMS needed to be highly COMPACTED or ATOMIC (modular). Because in practice, they tend to move around from FRAME to FRAME and sometimes SEQ to SEQ - before being finalized. So we design for that.
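That requirement can be sketched like so: because an ANM is one self-contained object, moving it from one FRAME (or SEQ) to another during rework is a simple reassignment (the shapes and names here are illustrative assumptions):

```javascript
// One ATOMIC ANM: everything it needs travels with it in a single object.
var fadeANM = { name: "fade", runfn: function () { return "fading"; } };

// Two sequences, each with a FRAME 1.
var SEQ1 = { 1: { anm: fadeANM } };
var SEQ2 = { 1: { anm: null } };

// Move the ANM from SEQ1 FRAME 1 to SEQ2 FRAME 1 - one reassignment, no rework.
SEQ2[1].anm = SEQ1[1].anm;
SEQ1[1].anm = null;
```

Because nothing outside the object is referenced, the moved ANM runs unchanged in its new FRAME.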

JKing - quite certain BETTER variations exist.

I am a SOLO~ARTIST (with help). Since YOU ask: we see much more for SOLO-ARTISTS with CREATIVE-CONTENT in the GIG-ECONOMY.

The purpose of those CAP WORDS - is to simplify the ANMS. Which we find to be naturally complex and easily confusing. So it is a NAMING-CONVENTION for functions and objects - at its core. To our surprise… it extends.

Inspired by UML. But entirely NECESSARY for ANMS. First to COMMUNICATE, but then…



See how YOUR complex SCENE can be SIMPLIFIED into a single readable sentence?

That is STEP 1.

STEP 2: We use that sentence to create the COMPRESSED ATOMIC ANM (that moves around to any FRAME). STEP 3: EDIT them easily.

We try to limit TIME-TOKENS, just like MAGIC-TOKENS. We find benefit in a PRINCIPLE that says: let ANMS interpolate SIMULTANEOUSLY, using DONE, ZONES, and TRIGGERS where possible.


nx.spaceZOOMSEQ[2] = {on:1};

That FRAME-TRIGGER came from the DONE function of a TXT, where some HERO… finished talking. And inside it is an ANM for something.
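A hedged sketch of that life-cycle: a TXT's DONE callback arms the next FRAME-TRIGGER, so the sequence advances when the HERO finishes talking rather than at a fixed time (`nx` and `heroTXT` here are illustrative stand-ins):

```javascript
// The sequence's frames; frame 2 starts disarmed.
var nx = { spaceZOOMSEQ: { 2: { on: 0 } } };

var heroTXT = {
  done: function () {              // called when the HERO finishes talking
    nx.spaceZOOMSEQ[2] = { on: 1 }; // FRAME-TRIGGER: arm the next frame
  }
};

heroTXT.done(); // TXT.DONE fires -> frame 2 is now armed
```

The movie loop then picks up the armed frame on its next pass; no DUR or timer is involved.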

That’s how we did it.

This is top of mind. Sorry. EXAMPLES later. Must move quickly (7 days a week). Animating eye movements (again).

I try to WRITE BETTER. So thank you for PATIENCE with (long) BRIEFS. They get better. Slowly. And they are used to get the mind going - then I crank out tons of BABYLON (and BLENDER) code in between fullstack JS mentor sessions. 3 years soon. Very tired, but… very exciting too. Hard to explain and contain. Thanks.

Someday, as labris says... yes, we push MOVIE GAME BOOK APP to a website.

Then double-back and share~back, how it was done. So that anyone else can do their own.

BECAUSE that is what we would have wanted.

And CLAP when YOU do it BETTER.


We want everyone to have their OWN (successful) “WEBGARDEN”. Why not?

We want to see organic, open-source “Web Cinematic Arts”,

BECAUSE we dream in …3DWeb

  • many people making Web Cinematics in BABYLON.

It will happen. : )