Maya Babylon Exporter offsets active frame before export, causing bounding box issues

When I try to export animation from Maya using the Babylon exporter to glb, Maya will change its current frame to an arbitrary frame in the timeline instead of the initial frame of the timeline.

This makes the exporter use the bounding box from that frame instead of the first frame, offsetting my animation in an unintended way.

Welcome to the forum !!!

Adding @Drigax to have a look :slight_smile:

In the meantime, do you have a repro you could share?


Hi sebavan!

Thank you for reaching out so quickly!
I uploaded the repro maya file here: Gofile

When you open the scene, select the geometry, and when you export with the Babylon exporter, make sure to enable "export skins" and "export selection only".

If it doesn’t happen too quickly, you’ll notice the frame in the timeline pops to the middle of the timeline, exports, then pops back to the beginning of the timeline.

In the Babylon web view, you'll see the full animation, but offset further back from the origin.

My theory is that the C_ball_JNT joint in Maya has zero scale at the beginning, from frame 0 to roughly frame 195.

Babylon seems to jump to frame 195 before exporting the skin and calculates the bounding box from that geometry’s pose at that frame instead of the first frame in the timeline. This sets the bounding box to be the combination of the large ball at the center and the small ball geo bound to the ball joint. We want our bounding box to be based on where the geo is on frame 0 and not frame 195 in this instance.
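To illustrate why the chosen frame matters so much: an axis-aligned bounding box is just the per-axis min/max of the mesh's vertex positions, so it changes with the pose. This is a minimal sketch with made-up vertex positions (not data from the repro scene), assuming the small ball has moved away from the origin by frame 195:

```python
def aabb(points):
    """Return (min_corner, max_corner) of a list of (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Hypothetical vertex positions sampled at two frames of the animation:
frame_0_verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]    # geo near the origin
frame_195_verts = [(0, 0, 0), (1, 0, 0), (5, 1, 0), (5, 0, 3)]  # small ball moved out

print(aabb(frame_0_verts))    # ((0, 0, 0), (1, 1, 1))
print(aabb(frame_195_verts))  # ((0, 0, 0), (5, 1, 3))
```

Sampling the box at frame 195 instead of frame 0 produces a much larger, offset volume, which matches the offset you see in the web view.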

I also noticed that the whole animation still exports, so it can somehow deal with the ball joint being at zero scale for those first few frames.

We need to fix the zero-scale issue on our end, but it would be great if we could select which frame to use in Maya, so that the Babylon exporter uses the bounding box from that frame instead of picking a frame in the middle of the animation for its bounding box calculation.

Thanks again for your support!

np and thx @Drigax, you are on :wink:

That’s exactly right. For some reason we search the timeline for the first frame where all bones are at non-zero scale. Why? I’m not totally sure, to be honest. I assume it’s a half-baked solution to finding the bind pose frame, or to circumventing calculating the inverse bind matrix for a zero-scale bone, but I’m not totally sure… Our skinning export logic needs some work.
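The heuristic described above can be sketched roughly like this (a hypothetical reconstruction in Python, not the exporter's actual C# code; the function name, data layout, and epsilon are mine):

```python
EPSILON = 1e-6

def first_fully_scaled_frame(bone_scales_per_frame, start_frame=0):
    """Scan the timeline for the first frame where every bone has non-zero scale.

    bone_scales_per_frame: {frame: [(sx, sy, sz), ...]}, one scale triple per bone.
    Falls back to start_frame if no frame qualifies.
    """
    for frame in sorted(bone_scales_per_frame):
        scales = bone_scales_per_frame[frame]
        if all(abs(s) > EPSILON for scale in scales for s in scale):
            return frame
    return start_frame

# The ball joint is zero-scale until ~frame 195, so the search skips ahead:
samples = {
    0: [(1, 1, 1), (0, 0, 0)],
    100: [(1, 1, 1), (0, 0, 0)],
    195: [(1, 1, 1), (1, 1, 1)],
}
print(first_fully_scaled_frame(samples))  # 195
```

Which is exactly why the playhead jumps to the middle of the timeline in this scene: frame 195 is the first frame where every bone passes the non-zero-scale check.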

I’ll change the logic to instead use the first frame, but that also causes issues when the first frame isn’t the bind pose…

However, it makes more sense than skinning off some arbitrary frame chosen by arbitrary criteria.

Pinging @PatrickRyan in case he has any insight into why zero-scale bones are bad.

Can you allow us to specify which frame to export from?

We would appreciate that level of control so that we don’t have to work around the exporter picking a frame for us.

You should always export from whatever frame you are using as your bind pose frame. I’m working on a solution to make that frame-independent, but in the meantime, I’ll just remove the logic that chooses that frame.


@Drigax and @Barak_Moshe, I’m pretty sure the reason zero scale on a mesh is a problem is the bounding box issues that come with it. Here is a glb that has a simple sphere mesh and a single joint that starts at scale 0 and scales up after 100 frames. If you drop this into the sandbox, the bounding box is derived from the zero-scale mesh, and the camera in the sandbox is positioned based on filling the viewport with the bounding box.

If you use the attached mesh below in the sandbox, you will see that the camera is too close to the model, and when it animates, the camera ends up inside the mesh. I don’t know the entire history here, but my guess is that when remix3D was a focus here, we needed to make sure that their viewer would never have the camera end up inside the mesh. The viewer for remix3D was built on Babylon and was very similar to the sandbox.

For the issue at hand, if we are only worried about exporting meshes used in an engine where the camera is set by code and not automated like in the sandbox, then we should be able to export at any frame. If we worry about breaking the sandbox, we should potentially look at how we can still support automatic placement of the camera based on the unscaled mesh while still exporting at the desired frame.

For some context on exporting from the bind pose, I will say that exporting from a frame different from the bind pose is not uncommon. I used to place my bind pose at a frame like -10, start my animation clips at frame 1, and export from frame 1. Most of the time, we don’t care about having the bind pose on a rig going into a game engine, as it's likely a T-pose on a model like a character, which has little use in engine. Keeping it in the negative frame range meant your DCC file always had access to it, but you could export clean files for each animation clip.

Plus, if your pipeline created one animation file for each clip, including the bind pose in each one was a waste of file size. We are always concerned with load time, so extra bind-pose data repeated in every file adds up. And many studios would use one file per animation clip so that multiple animators could work on the same character at once without stomping each other’s changes. So I don’t think we can guarantee that every model will be exported from its bind pose, particularly if you want multiple files for animations.


It sounds like our true solution is to export based on the unposed skeleton. Currently, we seem to be exporting our rest pose matrices based on their current value at whatever frame we are starting on, and also exporting the mesh geometry as deformed by that pose. I’ll have to double check, but I believe that we’re also exporting the node transforms based on their transform at the current frame. not bueno.
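To make the distinction concrete: in glTF skinning, each joint's inverse bind matrix is the inverse of its bind-pose world transform, which should not depend on the playhead. This is a minimal sketch using pure-translation 4x4 matrices to keep the inversion trivial; the variable names and transforms are illustrative, not from the exporter:

```python
def translation_matrix(tx, ty, tz):
    """Build a 4x4 row-major matrix that translates by (tx, ty, tz)."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def invert_translation(m):
    """Inverse of a pure-translation 4x4 matrix: negate the translation column."""
    return translation_matrix(-m[0][3], -m[1][3], -m[2][3])

bind_pose_world = translation_matrix(0, 2, 0)      # joint's bind-pose world transform
current_frame_world = translation_matrix(5, 2, 0)  # posed transform at the playhead

# Correct: derive the inverse bind matrix from the bind pose...
ibm = invert_translation(bind_pose_world)
print(ibm[1][3])  # -2

# ...not from whatever frame the playhead happens to be on, which bakes
# the current pose into the skinning data:
wrong_ibm = invert_translation(current_frame_world)
print(wrong_ibm[0][3])  # -5
```

Deriving the matrices from the bind pose is what makes the export playhead-independent.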

If we do that, then we should still be able to handle exporting with the playhead at an arbitrary frame, and our mesh bounding box would be based on the original mesh geometry. We may still run into edge cases where extreme scaling of the rest pose causes us to clip through the sandbox camera, but even our current solution would cause that, right?


Bind pose makes the most sense to me. We were originally going to reset our bounding box to be based on the bind pose in our renderer, but having it stored with the glTF feels more reliable in our view.

From an artist view, control is the way to go.

It would be really great for us to configure whether to use the bind pose or a specific frame to determine our bounding box. There is value on our end in picking between the two, since we have animations that move around quite a bit and might want either a bounding box contained to the bind pose, or a larger bounding box representative of the most expanded pose in the animation.

I don’t know if that’s completely valid. glTF doesn’t explicitly allow for a bounding box; what we set are the min/max extents of the vertex position data, and whatever engine you import this data into usually constructs the bounding box from that information. If you want a custom bounding box, it sounds like you may want to export a second mesh representing your desired box and set it manually in your engine.
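For reference, the extents in question are the `min` and `max` arrays the glTF spec requires on a `POSITION` accessor. Here's a small sketch of how they're computed from the vertex data (the function name and sample vertices are mine):

```python
def position_accessor_extents(positions):
    """Compute the per-component min/max arrays a glTF POSITION accessor declares."""
    mins = [min(p[i] for p in positions) for i in range(3)]
    maxs = [max(p[i] for p in positions) for i in range(3)]
    return {"min": mins, "max": maxs}

verts = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (-1.0, 0.5, 3.0)]
print(position_accessor_extents(verts))
# {'min': [-1.0, 0.0, 0.0], 'max': [1.0, 2.0, 3.0]}
```

Since these extents describe the actual vertex data in the file, there's no separate place to store a "preferred" bounding box: an engine reading the file will rebuild its bounds from the positions either way.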

If we implemented this on the exporter side, we would have to either export the mesh as deformed and use that as the new rest pose, which would cause unwanted distortion when deformed, or fudge the accessor min and max values, which could have all manner of unintended side effects.