@Lucio_Freitas, the issue you are seeing here is that Marvelous Designer runs its simulation on every vertex, so what you are seeing in the Max viewport is a new vector3 position stored for each vertex on every frame. As you can imagine, that is a TON of data just to play back the simulation.
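To put rough numbers on it (the vertex and frame counts below are hypothetical examples, just for scale):

```javascript
// Back-of-the-envelope estimate of cached per-vertex simulation data.
// The mesh and frame counts are made up for illustration.
const vertices = 20000;        // a plausible garment mesh
const frames = 30 * 10;        // 10 seconds at 30 fps
const bytesPerVertex = 3 * 4;  // one float32 vector3 per vertex per frame

const totalBytes = vertices * frames * bytesPerVertex;
console.log((totalBytes / (1024 * 1024)).toFixed(1) + " MB"); // ~68.7 MB
```

That is tens of megabytes for ten seconds of one garment, before you add textures or the rest of the scene.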
glTF/glb supports three types of animation:

- **Skinned animation** — a mesh skinned to a skeleton.
- **Node animation** — an entire mesh object animated through translation, rotation, or scale on its node.
- **Morph target animation** — a mesh with one or more targets deformed from the base mesh; the file only stores the per-vertex offsets for each target, and the animation interpolates each target's influence over time.

No other animation type is supported by glTF at the moment.
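To see why morph targets stay small, here is a sketch of the blending a renderer does. The file stores the base positions plus one per-vertex offset array per target; each animated frame then only needs a weight per target, not new vertex data. The names here are illustrative, not the actual glTF accessor layout:

```javascript
// Blend a base mesh with weighted per-vertex morph target deltas:
// final vertex = base + sum(weight_i * delta_i).
function blendMorphTargets(basePositions, targetDeltas, weights) {
  const out = basePositions.slice();
  targetDeltas.forEach((deltas, t) => {
    const w = weights[t];
    for (let i = 0; i < out.length; i++) {
      out[i] += w * deltas[i];
    }
  });
  return out;
}

// One vertex (x, y, z) and one target that raises it by 1 on y,
// sampled halfway through the interpolation:
const blended = blendMorphTargets([0, 0, 0], [[0, 1, 0]], [0.5]);
// blended is [0, 0.5, 0]
```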
The reason for limiting the types of animation in glTF is that the goal was a runtime file that is as small as possible. Node, skin, and morph animations are fairly small because they limit the amount of data needed to displace the vertices in your mesh. This is why the mixamo model works and your simulation does not. The mixamo model has a mesh skinned to a skeleton, which holds rotation and some translation data. We only hold that data per bone, and the skin is just a per-vertex set of weights describing the influence of the bones that affect its position. It works much like a lookup table for the mesh, so none of that data changes per vertex, per frame.
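A minimal sketch of that idea (linear blend skinning): the animation only updates one transform per bone per frame, while each vertex carries fixed weights, and its final position is the weighted sum of each influencing bone's transform applied to it. Bone transforms are plain functions here for brevity; a real engine uses 4x4 matrices in a shader.

```javascript
// influences: [{ transform, weight }] for the bones affecting this vertex.
function skinVertex(v, influences) {
  const out = [0, 0, 0];
  for (const { transform, weight } of influences) {
    const p = transform(v);
    out[0] += weight * p[0];
    out[1] += weight * p[1];
    out[2] += weight * p[2];
  }
  return out;
}

// A vertex influenced half by a static root bone and half by a bone
// that has moved +2 on x this frame:
const root = (v) => v;
const arm = (v) => [v[0] + 2, v[1], v[2]];
const skinned = skinVertex([1, 0, 0], [
  { transform: root, weight: 0.5 },
  { transform: arm, weight: 0.5 },
]);
// skinned is [2, 0, 0]
```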
Right now, we don’t support any type of mesh streaming, which is what would have to happen for a per-vertex simulation like that. The best approach with Marvelous Designer would be to simulate your cloth over your model, then bake the result down as a static mesh that captures the folds and volume of the simulation on your base model. Then skin that static mesh to a skeleton so it deforms with your character. You would lose the flow of the cloth over your base mesh that comes from the simulation, though. You could also use morph targets for a little motion in your cloth and blend skinned and morph target animations, but it won’t get near the quality of a full simulation.
As with all real-time engines, you can do some simple physics-based cloth simulation for something like a cape or scarf that can be represented with a small number of vertices, but even that can get taxing on low-end devices.
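For a sense of what that kind of lightweight runtime cloth looks like, here is a toy Verlet-style strip (think one row of a scarf): each particle stores its current and previous position, and a constraint pass keeps neighbors at rest length. All of the numbers and names are illustrative, not any engine's actual API:

```javascript
const GRAVITY = -0.01; // per-step acceleration
const DAMPING = 0.99;  // crude velocity damping so the strip settles
const REST = 1.0;      // rest length between neighboring particles

// points: array of {x, y, px, py}; index 0 is pinned in place.
function step(points) {
  // Verlet integration: velocity is inferred from the previous position.
  for (let i = 1; i < points.length; i++) {
    const p = points[i];
    const vx = (p.x - p.px) * DAMPING;
    const vy = (p.y - p.py) * DAMPING;
    p.px = p.x;
    p.py = p.y;
    p.x += vx;
    p.y += vy + GRAVITY;
  }
  // Constraint relaxation: nudge neighbors back toward the rest length.
  for (let iter = 0; iter < 5; iter++) {
    for (let i = 1; i < points.length; i++) {
      const a = points[i - 1];
      const b = points[i];
      const dx = b.x - a.x;
      const dy = b.y - a.y;
      const d = Math.hypot(dx, dy) || 1e-9;
      const diff = (d - REST) / d;
      if (i === 1) {
        // a is the pinned particle; move only b.
        b.x -= dx * diff;
        b.y -= dy * diff;
      } else {
        a.x += dx * diff * 0.5;
        a.y += dy * diff * 0.5;
        b.x -= dx * diff * 0.5;
        b.y -= dy * diff * 0.5;
      }
    }
  }
}

// A three-particle strip starting horizontal, hanging from the origin:
const rope = [
  { x: 0, y: 0, px: 0, py: 0 },
  { x: 1, y: 0, px: 1, py: 0 },
  { x: 2, y: 0, px: 2, py: 0 },
];
for (let i = 0; i < 2000; i++) step(rope);
// After settling, the strip hangs roughly straight down from the pin.
```

Even this trivial version does a full integration and several constraint iterations per particle every frame, which is why per-frame cost grows quickly with vertex count on weak hardware.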
Sorry there isn’t a better answer here, but glTF doesn’t support mesh streaming, and while we are interested in it for Babylon, it is still in the idea phase on our team as there are several pipeline concerns to work through first.