Would be AMAZING to incorporate Alembic support. Some animations I’ve created used path deforms in Max that I converted to Alembic objects with animation. But neither seems to be supported. Alembic support would open a world of animation possibilities.
Aren’t those files potentially extremely large? I wonder if there could be a glTF equivalent in discussion, @bghgary?
I think we’ve had some discussions about this in the forums. I’m not sure it would make a lot of sense to support Alembic directly in a runtime engine like Babylon.js. My understanding is that Alembic is intended to be an interchange format between tools, supports a lot of different kinds of animations, and is quite complex. Supporting them all in Babylon.js would be difficult and probably slow. It would probably make more sense to bake the animations into a glTF, which is intended for runtime consumption, and have Babylon load that.
Yeah, depending on its use, they can be a little large. In my case, I’ve got an animated chain created via path deform binding in Max. Currently I cannot, for the life of me, figure out a way to get the animated chain exported correctly as a gltf or glb file.
As an alternative, can the Babylon exporter support point cache files?
We are unfortunately no longer working on the exporter, but we are still taking community contributions. @PatrickRyan Are you familiar with point cache files? Maybe there is a way to bake the animation so that it will export?
@bghgary and @DookieShoes, I don’t have experience with point cache files, but they are basically per-vertex delta files and can be saved to XML. From there they could be converted to JSON with something like https://www.npmjs.com/package/xml-js so we could read them, but the inherent problem is that these files can get notoriously large depending on the complexity of the mesh and the length of the animation.
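Just to sketch the idea, the XML-to-JSON step with xml-js could look something like this. The point cache layout here (`<frames>`, `<frame t="...">`, `<p>`) is only a guess; the element names would need to match whatever the exporter actually writes:

```typescript
// A minimal sketch, assuming the point cache was saved as XML.
// The element/attribute names used below are hypothetical; adjust them
// to whatever your exporter produces.
import { readFileSync, writeFileSync } from "fs";
import * as convert from "xml-js";

const xml = readFileSync("chain_pointcache.xml", "utf8");
const doc: any = convert.xml2js(xml, { compact: true });

// Normalize to an array whether there is one frame or many.
const rawFrames: any[] = [].concat(doc.frames.frame);

const frames = rawFrames.map((frame: any) => ({
  time: Number(frame._attributes.t),
  // Each <p> is assumed to hold "x y z" for one vertex.
  positions: ([] as any[]).concat(frame.p).map((p: any) =>
    p._text.trim().split(/\s+/).map(Number)
  ),
}));

writeFileSync("chain_pointcache.json", JSON.stringify(frames));
```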
If the mesh is a chain, I am guessing it’s fairly dense in vertex count and could generate quite a large file even with a modest-length animation. I think the equivalent here would be baking the animation to a texture, but that will likely be large as well.
One process that could work would depend on the complexity of the motion and that would be to translate the deformers into a morph target. If you move the deformers into one extreme, make a duplicate of the mesh, then move the deformers to the other extreme and make another duplicate, you can reconstruct the deformers into a morph target. You can add extra morph targets to help smooth out the motion, but morph target data also gets large quickly so be cautious how many targets you include. This would mean a reconstruction of your animation, however, since you won’t be able to directly reuse the deformer curve data on the morph targets.
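For reference, once you have the mesh duplicated at each extreme, the Babylon side could look roughly like this. It assumes both meshes share the same vertex count and order, and the function name and frame timings are just placeholders:

```typescript
import {
  Animation,
  Mesh,
  MorphTarget,
  MorphTargetManager,
  Scene,
} from "@babylonjs/core";

// A minimal sketch: capture one extreme pose of the chain as a morph
// target on the base mesh, then swing the influence back and forth.
// `baseMesh` is the chain at one extreme; `extremeMesh` is a duplicate
// posed at the other extreme with identical vertex count and order.
export function setUpChainMorph(scene: Scene, baseMesh: Mesh, extremeMesh: Mesh): void {
  const manager = new MorphTargetManager(scene);
  baseMesh.morphTargetManager = manager;

  // Build the target from the duplicate, starting at zero influence.
  const target = MorphTarget.FromMesh(extremeMesh, "extreme", 0);
  manager.addTarget(target);
  extremeMesh.setEnabled(false); // the duplicate is only a data source

  // Animate influence 0 -> 1 -> 0 at 30 fps to move between the poses.
  const anim = new Animation("chainSwing", "influence", 30,
    Animation.ANIMATIONTYPE_FLOAT, Animation.ANIMATIONLOOPMODE_CYCLE);
  anim.setKeys([
    { frame: 0, value: 0 },
    { frame: 30, value: 1 },
    { frame: 60, value: 0 },
  ]);
  scene.beginDirectAnimation(target, [anim], 0, 60, true);
}
```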
Obviously, the cheapest approximation would be skeletal animation with some clever skinning of the chain links. Something like putting a joint at the intersection of each pair of links and rigid-binding each link to the joint above it. It would be a complex joint chain due to the number of joints, but the links themselves wouldn’t deform. The animation would likely need to be done with some IK handles or a more complex rig that bends the chain all at once with a slider, to avoid animating individual joints by hand.
Unfortunately, neither of these alternatives is a quick fix for deformer animation, which could be a reason to look at other options like ingesting a point cache file. That said, the download time for those files may be reason enough to prioritize reworking the animation rather than trying to retain the work already done and finding a path to ingest the point data.
Wow, thank you so much for the detailed response. I like where your head is at, and I think saving to XML and converting to JSON just might work. Thanks again!
Hi, just curious about the current state of ABC/OBJ flipbooks. Is there any official way to import animation with a different point count and topology per frame?
@ncr, we don’t have any official way of handling vertex streaming (new pose or triangle list per frame) mostly due to the size of the data that would likely be needed. You could bring in one file with all of your meshes and then animate visibility on each to only show the active frame mesh, but we don’t have a feature that does this for you.
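If you do go that route, the swapping itself is only a few lines. Something like this rough sketch, where the mesh array and frame rate are placeholders for your setup:

```typescript
import { AbstractMesh, Scene } from "@babylonjs/core";

// A minimal sketch of a mesh "flipbook": one mesh per frame, and only
// the active frame's mesh is enabled on each render. `frameMeshes` is
// assumed to be ordered frame 0..N-1; `fps` is the playback rate.
export function playMeshFlipbook(scene: Scene, frameMeshes: AbstractMesh[], fps = 12): void {
  let elapsedSeconds = 0;
  scene.onBeforeRenderObservable.add(() => {
    elapsedSeconds += scene.getEngine().getDeltaTime() / 1000;
    const current = Math.floor(elapsedSeconds * fps) % frameMeshes.length;
    frameMeshes.forEach((mesh, i) => mesh.setEnabled(i === current));
  });
}
```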
Ok, thank you!
Just a quick follow-up if anybody else is in a similar boat. I’m actually not dealing with large amounts of data or big file sizes. I’m trying to animate a quick (five- or six-frame) lightning effect where each frame is only about 100 polygons. So the need is more about different topologies per frame than about a large amount of data.
Anyway, I just saw this in the glTF spec, and I think it might be a nice approach: treat it as a skinned mesh, rigid-bind joint N to frame N, and scale the joint down to zero as a way of animating visibility on and off:
When the scale is zero on all three axes (by node transform or by animated scale), implementations are free to optimize away rendering of the node’s mesh, and all of the node’s children’s meshes. This provides a mechanism to animate visibility. Skinned meshes must not use this optimization unless all of the joints in the skin are scaled to zero simultaneously.
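For what it’s worth, the same trick could also be done on the Babylon side with step-keyed scale tracks rather than being baked into the glTF skin. A rough sketch of what I have in mind (one node per frame; the 30 fps rate is a placeholder):

```typescript
import {
  Animation,
  AnimationKeyInterpolation,
  Scene,
  TransformNode,
  Vector3,
} from "@babylonjs/core";

// Rough sketch: one step-keyed scale track per frame node, so exactly one
// node is at full scale on any given frame and the rest sit at zero.
// `frameNodes` (one node per flipbook frame) is a placeholder for my setup.
function buildFlipbookScaleTracks(scene: Scene, frameNodes: TransformNode[]): void {
  const total = frameNodes.length;
  frameNodes.forEach((node, index) => {
    const anim = new Animation("vis_" + index, "scaling", 30,
      Animation.ANIMATIONTYPE_VECTOR3, Animation.ANIMATIONLOOPMODE_CYCLE);
    const keys = frameNodes.map((_, f) => ({
      frame: f,
      value: f === index ? Vector3.One() : Vector3.Zero(),
      interpolation: AnimationKeyInterpolation.STEP, // hold, no blending between frames
    }));
    anim.setKeys(keys);
    node.animations = [anim];
    scene.beginAnimation(node, 0, total - 1, true);
  });
}
```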
Does anybody know if Babylon performs this optimization of “do not draw” when scales are zero?
Yup, it does.