The comparison to the node material editor really captures the ideal behind Polymorph, I think. For inputs and outputs to be compatible, they only need to be of the same type. So how about having community/industry-provided types, all built from the bottom up, so that there would be, for example, a glTF 2.0 typedef made up of different sub-types? Then, with a standardized way of serializing types as well, the serialized data could be passed to binaries, scripts, and web services of all kinds. Bob, using Java, could use Alice's decimator, which she hosts on Microsoft Azure; Charlie could use Bob's texture compressor; and Alice could use Charlie's geometry generation system that he wrote in Python. The morph programs would take in one serialized type and output one.
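To make the idea concrete, here is a minimal sketch of what a serialized "type envelope" passed between morphs might look like. This is not Polymorph's actual API; the names `Envelope`, `typeId`, `"mesh.v1"`, and `decimatorMorph` are all invented for illustration, and JSON stands in for whatever serialization format would really be standardized.

```typescript
// Hypothetical sketch (not Polymorph API): a serialized "type envelope"
// that a morph written in any language or hosted anywhere could accept
// and produce. One serialized value in, one serialized value out.

interface Envelope { typeId: string; payload: { vertices: number[][] } }

function makeEnvelope(typeId: string, payload: Envelope["payload"]): string {
  return JSON.stringify({ typeId, payload });
}

// Example morph: a decimator stub that checks its input type, does its
// work, and emits a new envelope of the same type.
function decimatorMorph(serialized: string): string {
  const { typeId, payload } = JSON.parse(serialized) as Envelope;
  if (typeId !== "mesh.v1") {
    throw new Error(`expected mesh.v1, got ${typeId}`);
  }
  // Pretend decimation: keep every other vertex.
  const vertices = payload.vertices.filter((_, i) => i % 2 === 0);
  return makeEnvelope("mesh.v1", { vertices });
}

const input = makeEnvelope("mesh.v1", {
  vertices: [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]],
});
const output = decimatorMorph(input);
```

Because the envelope is just bytes with an agreed-upon type tag, Alice's Azure service, Bob's Java binary, and Charlie's Python script could all produce and consume it without sharing a runtime.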
C++ can be projected into something that JS can use; the reverse is more difficult. There are scenarios where native applications consume Polymorph (e.g. converting assets from one format to another), and doing it directly in C++ is the most straightforward answer.
Low-level manipulation is slow in JS. WebAssembly may be a solution, but it's probably not as fast as pure native code. We can probably compile to WebAssembly for web use.
I hope that helped to flesh out any details I may have left unclear before. Thanks for your insight, input, and passion on this, aFalcon!
We are not focused on the language. We created a prototype using C++. If C++ isn’t the right language, we will switch. The conventions, patterns, and abstractions are much more important than the language. We have reasons as @syntheticmagus and I have pointed out for using C++, but it’s not set in stone.
We will take all feedback into account. You can count on that!
I agree in general, though it's not super clear what "same type" means. Depending on the implementation/language/strictness/etc., the same type can mean different things. We had some similar discussions with the node material editor.
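To make the ambiguity concrete, here is a small hypothetical sketch contrasting the two common meanings of "same type": structural compatibility (TypeScript-style, where two identically shaped types are interchangeable) versus nominal identity (C++/Java-style, where they are distinct), which a cross-language pipeline might emulate with a runtime tag. `MeshA`, `MeshB`, and `isTagged` are invented for illustration.

```typescript
// Two interfaces with identical shapes.
interface MeshA { vertices: number[][]; }
interface MeshB { vertices: number[][]; }

const m: MeshA = { vertices: [[0, 0, 0]] };

// Structural typing: MeshA and MeshB are interchangeable because their
// shapes match, so this assignment compiles.
const alsoFine: MeshB = m;

// Nominal typing: in C++ or Java, two identically shaped structs are
// distinct types. A runtime tag is one way to emulate that distinction
// across a serialization boundary.
function isTagged(value: { tag?: string }, tag: string): boolean {
  return value.tag === tag;
}

const tagMatches = isTagged({ tag: "mesh.v1" }, "mesh.v1");   // true
const tagMissing = isTagged({ vertices: [] } as object, "mesh.v1"); // false
```

Which of these notions Polymorph standardizes on changes what "compatible inputs and outputs" means for morph authors.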
Yes, this is the goal. We need some kind of standardized conventions between the "Morphs" (i.e. nodes) so that they can communicate. Using glTF is probably not the right choice, though. glTF is intended to be the last-mile format, and using it to exchange information from one Morph to another will probably be inefficient or lossy. We talked about maybe using USD in some way, but we're not sure if that's the right choice either.
Yes, ideally, this can all happen in one Polymorph using the same conventions.
We did some work at @naker (http://naker.io) on this subject, so I wanted to share how we manage the pipeline. You can see the code here: Naker Compression
The idea is to take the model we receive in our editor, which can be huge (creation asset), and turn it into a more web-friendly model (consumption asset => I like those names!).
For that we use imagemin - npm to compress textures; on average we save 80% of the file size without losing quality.
Then we use gltf-pipeline - npm and obj2gltf - npm to end up with only one glb file, because we think this is easier to manage.
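The shape of the pipeline above can be sketched as composed stages, each taking one asset and returning one. The stage bodies below are stubs (the 80% figure models the texture saving cited above); a real implementation would call the imagemin, obj2gltf, and gltf-pipeline npm packages mentioned, and the `Asset` type is invented for illustration.

```typescript
// Sketch only: each stage is a pure function Asset -> Asset, so the
// whole pipeline is just function composition.

type Asset = { name: string; bytes: number; format: string };

// Stage 1: compress textures (stub; real work would call imagemin).
const compressTextures = (a: Asset): Asset =>
  ({ ...a, bytes: Math.round(a.bytes * 0.2) });

// Stage 2: convert OBJ to glTF (stub; real work would call obj2gltf).
const toGltf = (a: Asset): Asset => ({ ...a, format: "gltf" });

// Stage 3: pack into a single .glb (stub; real work would call gltf-pipeline).
const toGlb = (a: Asset): Asset => ({ ...a, format: "glb" });

const pipeline = (a: Asset): Asset => toGlb(toGltf(compressTextures(a)));

const result = pipeline({ name: "chair", bytes: 10000000, format: "obj" });
```

Keeping each stage as a one-in/one-out function is what makes it easy to later insert an extra step (e.g. Draco compression) without touching the others.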
We thought about adding a model-compression step using Draco, but we didn't have the time to implement it. We were also very enthusiastic and met with the guys from Slight3D (Sligth3D on Vimeo), but the project doesn't seem to exist anymore, which is a shame.
Completely agree with the choice of C++
Hope this will help in your thinking
If your intent is to have this workflow run in a browser, then it seems like a 'step' in the process could be implemented in C++ or JS, as long as at least a C++ header was written for one written in JS.
Also, speed is overrated for a back office tool. I am working on a back office tool which needs to perform a Short-time Fourier transform, allow me to edit the result, then run an inverse to get the modified input back. The biggest problem is integrating it into JS, where the rest of the larger tool is. I do not care how long it takes. If it does its job, I can pull off something in real time which I could never do directly.
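The transform/edit/inverse round trip described above can be sketched with a deliberately naive STFT: non-overlapping rectangular windows and an O(n²) DFT per frame. It is slow, which is exactly the point here, since for a back office tool invertibility matters and speed doesn't. All function names are illustrative, not from any particular library.

```typescript
// Naive STFT round-trip sketch: correctness over speed.

type Complex = { re: number; im: number };

// Forward DFT of one frame (O(n^2), no FFT).
function dft(frame: number[]): Complex[] {
  const n = frame.length;
  return frame.map((_, k) => {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const ang = (-2 * Math.PI * k * t) / n;
      re += frame[t] * Math.cos(ang);
      im += frame[t] * Math.sin(ang);
    }
    return { re, im };
  });
}

// Inverse DFT, returning the real part (input frames are real-valued).
function idft(spec: Complex[]): number[] {
  const n = spec.length;
  return spec.map((_, t) => {
    let re = 0;
    for (let k = 0; k < n; k++) {
      const ang = (2 * Math.PI * k * t) / n;
      re += spec[k].re * Math.cos(ang) - spec[k].im * Math.sin(ang);
    }
    return re / n;
  });
}

// STFT: split the signal into fixed-size frames and transform each.
function stft(signal: number[], size: number): Complex[][] {
  const frames: Complex[][] = [];
  for (let i = 0; i + size <= signal.length; i += size) {
    frames.push(dft(signal.slice(i, i + size)));
  }
  return frames;
}

// Inverse STFT: transform each frame back and concatenate.
function istft(frames: Complex[][]): number[] {
  return frames.map(idft).reduce((acc, f) => acc.concat(f), [] as number[]);
}

const signal = Array.from({ length: 32 }, (_, i) => Math.sin(i / 3));
const roundTrip = istft(stft(signal, 8));
const maxErr = Math.max(
  ...signal.map((v, i) => Math.abs(v - roundTrip[i]))
);
```

With no edits applied between the forward and inverse passes, the reconstruction error is at floating-point noise level; edits to the spectral frames would then flow back through `istft` into a modified signal.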
Put another way: speed really counts in the product, not in a dev tool.
I read through the document posted by @PirateJC and could not help getting the feeling I was listening to a politician, seeing words like "consumption experiences" and "creation assets (CAD models, etc.) and consumption assets (glTF, etc.)".
Then I read further to "Alice and Bob need mesh decimation" and wondered how that might work. There are a couple of decimators that I use when necessary: Instant Meshes and MeshLab. The latter has all kinds of options that involve two menus and a popup box. I wonder how the proposed decimator will work?
How would the decimator handle that chair with all those tufted buttons (that I would have created as instances)?
And if you are going beyond just displaying models/items and are creating scenes, there is the issue of scale if the original sources are different. And what file formats will be included for conversion to glTF?
Here is an example of models created with 3D scanning technology; the end result is an .stl file.
And here is a simple celebration I made from it for my granddaughter's birth: 250,000+ vertices reduced to 35,000 vertices using Instant Meshes, then creating a "dirty vertex" texture to emphasize the shadows.
Will such models be loadable and decimatable for Alice and Bob?
If you are going to display items on the web, then build for the web rather than slashing down old catalogue models.
I hope that is not too negative. I just don't see "one pipeline fits all", but I may be very wrong.
@gryff I'm right with you on all of your points, so it can't be too negative. These are the issues that need to be addressed.
It's like getting excited and telling everyone you are going to cure cancer: it's all good in conversation, but actually executing the process is another matter.
But it all does start with motivation and conversations.
Part of the process is identifying the problem, and we seem to have that covered. I guess now is the time for conversations about solutions to these problems. I just see more questions than answers, and I kind of feel like others might be in the same boat.