Introducing Polymorph: An Open Discussion

Hey Everyone,

Today we’re very excited to “announce” a new area of exploration and interest for Babylon. We’re calling it Polymorph.

Introducing Polymorph

In short, Polymorph is an extremely preliminary proposal for a family of technologies designed to help solve the “Messy Middle” of asset creation pipelines and workflows.

Polymorph is early…a collection of thoughts. This thread will function as an open discussion about anything related to Polymorph.

If you have ideas, questions, suggestions…we want to hear it all!

Thanks!
@syntheticmagus and @PirateJC

8 Likes

Thank you BABYLON. :slight_smile:

“cusp of a renaissance”

Agree!

So thank you for this thread! And POLYMORPH!


  • Polymorph SCOPE
  • Polymorph ONTOLOGY
  • Polymorph STRATEGY
  • Polymorph ACTION PLAN

Thanks,

:eagle:


6 Likes

QUESTION:

How about: STRATEGY, PATTERNS, and ONTOLOGY.

The comparison to the node material editor really captures the idea behind Polymorph, I think. For inputs and outputs to be compatible, they only need to be of the same type. So how about having community/industry-provided types, built from the bottom up, so that there would be a glTF 2.0 typedef, for example, made up of different sub-types? Then, with a standardized way of serializing types, the serialized data could be passed to binaries, scripts, and web services of all kinds. Bob, using Java, could use Alice’s decimator, which she hosts on Microsoft Azure; Charlie could use Bob’s texture compressor; and Alice could use Charlie’s geometry generation system that he wrote in Python. The morph programs would take in one serialized type and output one, e.g.:

data:khronos/gltf2;base64,TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQ=
data:math/quaternion;base64,eDowLHk6MCx6OjAsdzow
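A hedged sketch of how such typed payloads could be produced and consumed (the helper names below are hypothetical; the `data:`-URI tagging convention is just the suggestion above):

```typescript
// Hypothetical helpers for the typed data-URI convention suggested above.
// Any Morph, in any language, that recognizes the type tag could consume
// the payload; unknown tags can be rejected up front.

function encodeMorphPayload(type: string, data: string): string {
  const base64 = Buffer.from(data, "utf-8").toString("base64");
  return `data:${type};base64,${base64}`;
}

function decodeMorphPayload(uri: string): { type: string; data: string } {
  const match = /^data:([^;]+);base64,(.*)$/.exec(uri);
  if (!match) {
    throw new Error("not a base64 data URI");
  }
  return {
    type: match[1],
    data: Buffer.from(match[2], "base64").toString("utf-8"),
  };
}

// A quaternion Morph would accept only payloads tagged math/quaternion:
const payload = encodeMorphPayload("math/quaternion", "x:0,y:0,z:0,w:0");
const decoded = decodeMorphPayload(payload);
```

Round-tripping `"x:0,y:0,z:0,w:0"` through these helpers reproduces the second example payload above.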

2 Likes

C++

  • C++ can be projected into something that JS can use; the reverse is more difficult. There are scenarios where native applications consume Polymorph (e.g. converting assets from one format to another), and doing it directly in C++ is the most straightforward answer.
  • Low-level manipulation is slow in JS. WebAssembly may be a solution, but it’s probably not as fast as pure native code. We can probably compile to WebAssembly for web use.

Thoughts?

1 Like

@bghgary

  • Well defined PROBLEM-SPACE 1st

  • PROBLEM-SPACE MAP 1st

  • Naming Convention 1st

  • Ontology 1st.

  • Abstraction Sets 1st

  • Design Patterns 1st

Then code. :slight_smile:

1 Like

Hi aFalcon,

I wrote that, so please let me clarify. I did not intend for that to be a negative comment on JavaScript, not at all. In fact, that entire section of the post was dedicated to talking about how less prescriptive paradigms—such as JavaScript’s “loosey-goosey” approach to things like data types, scoping, and encapsulation—can be extremely conducive to innovation and collaboration, sometimes even more so than more prescriptive paradigms. I totally agree with you that Node and many other similar frameworks owe much of their success to the liberty and ease with which they can be used, and that JavaScript has been a major enabler of that. So, in short, my intent wasn’t to say anything bad about JavaScript at all, and in fact I think there’s much to emulate from it in order for us to make Polymorph a success. But more on that in paragraph 3. :smiley:

As @bghgary said, our main motivations for aligning (so far) on C++ were about portability and performance. Speaking specifically about portability, we wanted to allow Polymorph pipelines to be used both as independent utilities and as components of other pieces of software—including direct integration into native-level apps. Many different kinds of operations might be characterized as pipelines; and we didn’t want to force everyone (including native developers) who wanted to use Polymorph for those purposes to bundle a JS runtime along with their software. Like pretty much everything else about Polymorph, I think even this isn’t really set in stone, of course; the conversation is open. :smiley: That’s just the thinking that led us to what we believe so far. The choice of C++ over JavaScript had nothing to do with JavaScript’s “loosey-goosey” nature, and it certainly isn’t trying to pull Babylon away from any of the practices that have made it so awesome.

Speaking of which, if you haven’t already, take a look at the example of using pipeline.h in the appendix of the post. It is, of course, a C++ example; but if you look at it carefully (and especially if you look inside pipeline.h itself), I think you’ll find a very clear and positive influence from JavaScript in what we’re trying to enable with that. JavaScript is excellent at allowing you to shuttle data easily; C++ is excellent at allowing you to handle data safely; and with the mechanisms in pipeline.h, we’ve tried to combine the strengths of both. The pipeline context objects are actually directly inspired by TypeScript’s “duck typing” capability—a feature which exists on top of JavaScript’s “property bag” approach to objects—and the manymap type that underpins the contexts is really just a type-safe version of a JavaScript-style property bag. The reason for it being in C++ is two-fold: first (and, to be honest, less importantly), it allows the code to be genuinely predictable and type-safe at compile time; and second (and more importantly), it allows the pipelines to be used by, and even incorporated into, as many other software solutions as possible, including native binary applications and JavaScript utilities.
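To illustrate the concept, a type-safe property bag can be sketched like this (this is only a sketch of the idea in TypeScript; manymap’s actual C++ code isn’t shown in this thread, and the class and member names below are hypothetical):

```typescript
// A sketch (not Polymorph's actual implementation) of a "type-safe property
// bag": values are stored by string key, as in a JavaScript object, but
// retrieval is checked against an expected constructor, so a consumer cannot
// silently read a value as the wrong type.

class PropertyBag {
  private values = new Map<string, unknown>();

  set(key: string, value: unknown): void {
    this.values.set(key, value);
  }

  // Returns the value only if it is an instance of the expected type.
  get<T>(key: string, expected: new (...args: any[]) => T): T {
    const value = this.values.get(key);
    if (!(value instanceof expected)) {
      throw new Error(`"${key}" is not a ${expected.name}`);
    }
    return value;
  }
}

class Mesh {
  constructor(public vertexCount: number) {}
}

const context = new PropertyBag();
context.set("mesh", new Mesh(35000));
const mesh = context.get("mesh", Mesh); // OK: typed access
// context.get("mesh", Date);           // would throw: wrong type
```

The C++ version can push the same check to compile time, which is part of the appeal described above.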

I hope that helped to flesh out any details I may have left unclear before. Thanks for your insight, input, and passion on this, aFalcon!

2 Likes

Looking…

@syntheticmagus

  • What is the LINE OF CODE of the JS inspired DATA-BUS?
  • And also “propertybag”?

I looked carefully into pipeline.h and only saw old friends: cout and the standard library. : )

We are not focused on the language. We created a prototype using C++. If C++ isn’t the right language, we will switch. The conventions, patterns, and abstractions are much more important than the language. We have reasons as @syntheticmagus and I have pointed out for using C++, but it’s not set in stone.

We will take all feedback into account. You can count on that! :slight_smile:

1 Like

I agree in general, though it’s not super clear what “same type” means. Depending on the implementation/language/strictness/etc., the same type can mean different things. We had some similar discussions with the node material editor.

Yes, this is the goal. We need some kind of standardized conventions between the “Morphs” (i.e. nodes) so that they can communicate. Using glTF is probably not the right choice, though. glTF is intended to be a last-mile format, and using it to exchange information from one Morph to another will probably be inefficient or lossy. We talked about maybe using USD in some way, but we’re not sure if that’s the right choice either.

Yes, ideally, this can all happen in one Polymorph using the same conventions.
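To make the “Morphs communicating through standardized conventions” idea concrete, here is a hedged sketch in TypeScript (the `Morph` shape and the type-tag strings are illustrative only, not a proposed standard):

```typescript
// Sketch: each Morph declares the type tags it consumes and produces, and the
// pipeline only links two Morphs when the tags agree, as in the node material
// editor analogy earlier in the thread.

interface Morph {
  name: string;
  inputType: string;   // e.g. "khronos/gltf2"
  outputType: string;  // e.g. "khronos/gltf2"
  run(input: string): string;
}

// Connecting two Morphs is only legal when the output tag matches the input tag.
function connect(a: Morph, b: Morph): (input: string) => string {
  if (a.outputType !== b.inputType) {
    throw new Error(
      `${a.name} produces ${a.outputType}, but ${b.name} expects ${b.inputType}`
    );
  }
  return (input) => b.run(a.run(input));
}

// Placeholder Morphs; real ones would decimate meshes, compress textures, etc.
const decimator: Morph = {
  name: "decimator",
  inputType: "khronos/gltf2",
  outputType: "khronos/gltf2",
  run: (gltf) => gltf,
};
const compressor: Morph = {
  name: "texture-compressor",
  inputType: "khronos/gltf2",
  outputType: "khronos/gltf2",
  run: (gltf) => gltf,
};

const pipeline = connect(decimator, compressor);
```

Whether the tags end up meaning glTF, USD, or something else, the wiring check stays the same.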

@bghgary

It is hard to understand the SCOPE of POLYMORPH. : )

  • Is there a BULLETED LIST of SOLUTIONS that we would like to be covered? (defined project scope)
  • can we go ahead and distill distinct PROJECT GOALS?
  • will we start gathering Ontology (or not)?

Also, I rewrote the 10 SERIOUS PROPOSALS to be more concise.

Thanks,

like this?

4 Likes

I have no idea what this is!
The article was like Chinese to me.

We did some work at @naker (http://naker.io) on this subject, so I wanted to share with you how we manage the pipeline. You can see the code here: Naker Compression

The idea is to compress the model we receive in our editor, which can be huge (a creation asset), and turn it into a more web-friendly model (a consumption asset => I like those names!)

For that we use imagemin - npm to compress textures; on average we save 80% of the file size without losing quality.
Then we use gltf-pipeline - npm and obj2gltf - npm to end up with a single .glb file, because we think this is easier to manage.
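The chain described above (compress textures, then pack everything into a single .glb) is exactly the kind of pipeline Polymorph talks about. As a minimal sketch, assuming nothing about the real tools’ APIs (the step functions below are no-op stubs, not imagemin or gltf-pipeline calls):

```typescript
// Stub steps standing in for the real tools (imagemin, obj2gltf, gltf-pipeline);
// their actual APIs are not reproduced here. Each step takes an asset buffer
// and returns a transformed one, so the whole creation-asset ->
// consumption-asset conversion is just a composition of steps.

type Step = (asset: Buffer) => Buffer;

const compressTextures: Step = (asset) => asset; // imagemin would do real work here
const packToSingleGlb: Step = (asset) => asset;  // obj2gltf / gltf-pipeline here

// Run the steps left to right over the asset.
function compose(...steps: Step[]): Step {
  return (asset) => steps.reduce((current, step) => step(current), asset);
}

const toWebAsset = compose(compressTextures, packToSingleGlb);
```

A Draco compression step would simply be one more `Step` in the `compose` call.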

We thought about adding a step with model compression using Draco but we didn’t have the time to implement it. We were also very enthusiastic and met with the guys from Slight3D: Sligth3D on Vimeo but the project doesn’t seem to exist anymore which is a shame.

Completely agree with the choice of C++
Hope this will help in your thinking :wink:

2 Likes

But then how is all this gonna work? It’s all fine and dandy talking about it, but how are we going to actually take the first steps?

I get that at this point it’s theory, and that’s the point of the discussion. But isn’t stuff like this decided by market share and acceptance by the user base once there is something to grab onto?

If we are talking about shaping industry standards, then it’s going to take getting the industry on board. Maybe I missed some of the message in this, but it all seems very hypothetical at this point.

2 Likes

DESIGN-PHASE STRATEGY

  • SCOPE
  • ONTOLOGY and MAP
  • PRINCIPLES
  • PATTERNS
  • FACTORY PATTERN (ex)

:eagle:

1 Like

If your intent is to have this workflow run in a browser, then it seems like a ‘step’ in the process could be implemented in C++ or JS, as long as a C++ header was written for any step written in JS.

Also, speed is overrated for a back office tool. I am working on a back office tool which needs to perform a Short-time Fourier transform, allow me to edit the result, then run an inverse to get the modified input back. The biggest problem is integrating it into JS where the rest of the larger tool is. I do not care how long it takes. If it does its job, I can pull off something in real time which I could never do directly.

Another way of saying it: speed really counts in the product, not in a dev tool.

3 Likes

I read through the document posted by @PirateJC and could not help getting the feeling I was listening to a politician - seeing words like “consumption experiences” and “creation assets (CAD models, etc.) and consumption assets (glTF, etc.)”.

Then I read further to “Alice and Bob need mesh decimation” and wondered how that might work. There are a couple of decimators that I use when necessary: Instant Meshes and MeshLab. The latter has all kinds of options that involve two menus and a popup box. I wonder how the proposed decimator will work?

And then there is a chair like this

How would the decimator handle that chair with all those tufted buttons (that I would have created as instances)?

And if you are going beyond just displays of models/items - and creating scenes - there is the issue of scale if the original sources are different. And what file formats will be included for conversion to glTF?

Here is an example of models created with 3D scanning technology - the end result a .stl file.

And here is a simple celebration I made from it for my granddaughter’s birth - 250,000+ vertices reduced to 35,000 vertices using Instant Meshes, then creating a “dirty vertex” texture to emphasize the shadows.

Will such models be loadable and decimatable for Alice and Bob?

If you are going to display items on the web - then build for the web, not slash old catalogue models.

I hope that is not too negative - I just don’t see “one pipeline fits all” - but I maybe very wrong :grin:

cheers, gryff :slight_smile:

3 Likes

@gryff I’m right with you on all of your points, so it can’t be too negative. These are the issues that need to be addressed.

It’s like getting excited and telling everyone you are going to cure cancer: talking about it is all good, but actually executing the process is another thing.

But it all does start with motivation and conversations.

Part of the process is identifying the problem, and we seem to have that covered. I guess now is the time for conversations about solutions to these problems; I just see more questions than answers and kinda feel like others might be in the same boat.

2 Likes

QUESTION:

What is the SCOPE?

UTILITIES:

  • glTF
  • maya
  • unity
  • blender
  • other?

PROCESSES:

  • decimation
  • compression
  • other?

Happy Holidays.

:eagle: : )

1 Like