Correct flow of creating complex mesh for babylon engine

Hey Guys,

I know that for Unreal Engine I should create the base mesh in Blender, import it into Unreal, and do everything else inside the engine (particles, etc.).

I’m trying to figure out the correct flow for developing a complex mesh for the Babylon engine.
For example, I’ve created a model in Blender and exported it to glb/glTF.

From what I know, there is no way to export particle systems/glows from Blender to Babylon. Am I wrong?
So I need to create them in code manually? Can I use the Babylon Editor for that? Should I export to the .babylon format after that? (From what I see, it doesn’t support export to glb/glTF.) Or maybe I need to export to the .babylon format from the start (from the Blender export)?

Also, I’m seeing that my glTF model doesn’t have animationGroups in the Babylon Editor but has them in the Sandbox (and they are playable there). Is this a bug? Does the Editor support animationGroups? It shows "No animation group linked to the object." in the animation groups block.

Could you please describe the correct flow to obtain a complex mesh with animations/particle systems/glow/complex materials, etc.? What format should I use, and what should be done where, if I want to avoid writing code for each model manually? The best outcome would be if I import a model from a file and it already has everything it should have.

Thank you

ccing @JCPalmer for exporter stuff

The JSON / .babylon exporter does not deal with particle systems in Blender. It does build animation groups for shape-key animation, but it maps Blender actions to named animation ranges.

I have no clue what Unreal does in these cases. Also, I do not think there is such a thing as a complex material, per se. Using a Principled node is your best bet for either exporter.


Ok then, at least I need info about why the Babylon Editor doesn’t support animation groups. Any plans to change that?

For example, I need to create creature meshes for my game.
Their complexity is similar to this playground:

It has morphs, glow, and particles.
Is there any way to do that without writing code, or at least not so much of it, via Babylon tools?

The Babylon Editor? Maybe there are other tools I don’t know about?

I think you’re confusing the Babylon Editor (Babylon.JS Editor), which is a community project maintained by @julien-moreau, and the Babylon Blender Exporter (Blender to Babylon.js exporter | Babylon.js Documentation), which is maintained by Palmer.


No, I’m not confusing those tools. I just want to understand the paradigm of the Babylon engine: what part of the work should I do in Blender, in the Editor, in the node editor, and in code to achieve a result like the one above?

For now, the only way to achieve this is:

  1. create the 3D model of the weapon in Blender, without any effects
  2. manually create a shader material in the Babylon Node Material Editor (too much time)
  3. write code to combine everything and add particles, glow, and morphs (too much time)

Am I missing something? Maybe I should do this another way?


Hmmm, now I understand your question better. I think @PatrickRyan will be the best person to talk to about the asset workflow.

@TooCalm, I can explain a general art pipeline for Babylon, but everyone’s particular pipeline will vary based on their preferred tools. The general pipeline for an experience like you are talking about… a weapon with a custom shader, glow, particles, morphs, etc. similar to the PG you linked above… looks like this:

  • Model your base asset in a DCC package like Blender, Maya, Modo, Max, Houdini, etc.
  • If you need morph targets, you need to create those in your DCC package as this is the easiest place to create them. And if you want animations for your morph targets, you should also author those here if you don’t want to write code to create the animations. Though you can certainly just create the animations for your morphs in code if you want.
  • Create your texture set in your preferred package like Substance, Mari, Quixel, Armor Coat, GIMP, Photoshop, etc.
  • Export your asset to glTF for the most flexibility, since many rendering engines can use glTF and many DCC packages can open it. However, I would caution against using glTF as a transfer format (to share between artists, for example): it is a transmission format, and some of the optimizations it performs are things you would not want in a transfer format.
  • If you only need PBR materials, you can load your asset and are basically done since the glTF format uses PBR-MR for its lighting model and we load glTF models applying the PBRMaterial as standard.
  • If you need a custom shader like the demo above, you will need to create it but you can use Node Materials and the Node Material Editor or you can write your own glsl shader.
  • If you need particles, you can use the Particle Editor in the Inspector to create your particles through an interface like you would in Unity or Unreal. You can manually write the code as well, but you don’t need to anymore. The Particle Editor will create a json file for you for your particle system and then you only need to load the particle system and run it.
  • If you need post process for things like glow, you do need to write some code to enable it, but the amount of code to do this is negligible.

You can use @julien-moreau’s editor to give you an editing environment if that is what you prefer but since that is a community project, he would be the best person to answer questions about the editor and its features.

I don’t quite understand why you are saying that creating a node material for your asset takes too much time since you have to do the same thing in Unity or Unreal. Creating a node material should not take more time than creating a custom material in either Unreal or Unity. The only difference is that you need to load the node material json or snippet and apply it to the mesh. The same with particles, post process, animations, etc. They all need to be loaded in the scene and applied to the mesh/scene to create your experience. This is no different than in Unity and Unreal except for the editor. We don’t have an official editor but Julien’s editor can be used in the same way.

The main difference in a web project from Unreal/Unity and Babylon is that all of those connections that you get in the Unity/Unreal environment do come with code under the hood. You just don’t see it. This is why deploying to a website comes with a player from those engines, which is not free in terms of performance. Both engines need to create a web-ready player which can come in at several hundred MB of download to use the experience. The demo above requires 51k of js in addition to maybe 10MB of assets, shaders, and textures, and 892k for the Babylon engine in its entirety, but you can tree shake out the parts you don’t need to make it smaller.

So while you are writing more of the code to create your scene with Babylon, it is still a much smaller amount than if you were writing WebGL in general, since we take care of a lot of the low-level code and abstract it for you. But going with an IDE to handle some of the code to assemble your scene will require a web player, which comes with added weight in download. If you are targeting mobile devices, or devices that do not have broadband access, you need to trade off the size of the player with the size of your assets. Note that the largest part of the download I described was the 10MB in mesh/texture/shaders. Almost 8MB of that is textures, so I could optimize the textures further to get the size down, but since the engine and scene are under 1MB, I can choose to spend more on my texture resolution, as I don’t have to worry about hundreds of MB of player that needs to be downloaded.

I guess the ultimate take away is that Unity and Unreal are both great engines that prioritize native experiences over web. Babylon is a web-first engine and we are building Babylon Native to take that same web code and create native experiences for cross platform development. In that case, it’s not a fair comparison as neither Unity nor Unreal were created with the highest priority being web development.

As an artist, I can understand the frustration of needing to write code to assemble your scene when you are used to DCC tools being a point and click experience, but I will say that where the web is headed in terms of 3D and the metaverse I think every artist should learn some of the basics around 3D web development. It can make you more employable as more and more companies are chasing after metaverse experiences and ultimately help your team make a better product as you can speak to the best ways to create assets knowing how the engineering team will create the scene.

I hope this helps, but feel free to ping me with more questions.


Hey @PatrickRyan, thank you very much for your answer. I think it should be added to the Babylon documentation.

I have an additional question about the .babylon format. If I just want to export a mesh from Blender and use it, which format is better: glTF or .babylon? We chose glTF because its files are much smaller than .babylon files. Could you please add some info about the pros and cons of the .babylon format?

Again, thank you very much for your answer.


@TooCalm I, too, almost always default to the glTF format for two reasons. The first is that the file format works in many different engines and has the support of many companies like Google, Microsoft, IKEA, Target, Amazon, and many more. A format that all of those companies agree upon and support means that you won’t find many places where the format can’t be used.

The other is the work the glTF working groups are putting into rendering consistency. It’s one thing to say that a file can be used in many different places, but if they all render differently, which one is right? Rendering consistency is a large topic that will continue to evolve, but even the initial efforts here will really help, especially in 3D commerce where you may see a product on several different online stores. Any inconsistency in representing the material properties of the asset can affect the buying decision and/or result in returns, which lessens the customer experience.

That said, there are several benefits to the .babylon file format. The biggest one is that it supports most of the features of the Babylon engine, so you can save a scene and keep data for cameras, lights, shadows, meshes, animations, particle systems, and more. You can save a scene in the .babylon format right from the Inspector.

This means you could set up a scene using the inspector and then save out to the .babylon format and just load that back in. However, that pipeline does come with some challenges in terms of iterating on a scene. Since most of your scene elements would be stored in the .babylon file, making changes would mean loading the file, making changes in the inspector, and then saving the file out again. If the code is exposed, iterating and creating more complex state management becomes much easier.

I think one of the best uses of the .babylon format would be if you had a template that several experiences all drew from. Say you have a set of node materials that all of your scenes use or a specific lighting or camera setup. You could create a scene with all of these parameters set and then save it out as a .babylon file. You could then load this file into any new scene and you would have all of your common parameters/assets already set up for your new scene.

Another reason to choose the .babylon format would be to make it harder for people to lift your assets. It’s not a silver bullet for protecting your assets on the web, but we all know how easy it is to grab any image from a website to reuse it. If your assets are saved with the .babylon format, there are no DCC tools that will be able to open up the format because it’s a scene format. You can open the .babylon file into a sandbox and then export to glTF to then be opened in Blender, but that process will be lossy. The glTF format does not support all of the features available in .babylon so you would be losing some data around the scene on that export.

And then opening it in another DCC tool comes with the problems that any glTF file opened in a DCC tool has: things like vertices on UV seams being unwelded (this is an optimization for glTF) and no custom shaders being present. So it will take some specialized knowledge to take a .babylon file and use it to recreate your asset in another experience. Like I say, it’s not impossible, but it may be enough of a deterrent to make someone looking to swipe assets look somewhere else.

But it really depends on your use case. We built our release demo Space Pirates using glTF entirely, even though we added custom node materials to parts of the assets. A lot of the assets could be used with the standard PBRMaterial so using the glTF format meant we could just export the asset to glTF and only need to touch parts of the asset to assign custom shaders. It really does depend on your pipeline and what you see as the important parts of that pipeline.


Thank you very much for your answers. Very helpful


@PirateJC I would like to second the idea that both of @PatrickRyan’s answers would, with a little editing so that they are not answer-specific, make a good addition as a general workflow description.


Agreed with this! I’m adding it to the growing list of items to tackle with our next doc rev. :slight_smile:
