Babylon project help

I think it’s by using the decal map. But I asked @PatrickRyan to provide his insight on this because I’m also not sure exactly how (or even whether) it can be used in this scenario.

@babyloner and @mawa, apologies for the delay in my response. I just got back from holiday and am catching up with the past two weeks’ activity here. If I understand the thread correctly, this is a project for an experience where a user can place an image within a “printable area” of a 3D representation of a product, to be produced through a traditional sublimation process on a physical product.

If that is truly the case, I would avoid decal maps. This is because decals are used to apply a texture to a mesh regardless of the UV map of the mesh. This is accomplished by projecting the image within a cube intersecting with the mesh. This means that if the mesh surface is curved, the projection through a cube will result in some distortion as the surface curves away from the projection. You can see what I mean if you play with this example using decal maps. If you click toward the edge of the sphere, you will see how the projection stretches around the sphere. Due to this, you would not want to use decal maps for this purpose.

Indeed, the dynamic texture option that I have seen in several examples, including from @carolhmj, would be the way to go. This is because the process of inserting the image in UV space most closely matches the real-world production process of applying print to an object. The texture in UV space will not have distortion, and preparing your digital product to eliminate distortion when rendered on the mesh would be necessary to give your users confidence in the product they are ordering.

The best approach here would be to create multiple UV sets for your product meshes. One UV set would contain the UVs for the product materials. For example, if you have an aluminum water bottle, the base UV set would contain your base color, normal, AO, and Metallic-Roughness textures to make the bottle render correctly. Beyond that, you would want additional UV sets for each printable area of your product. Since there are real world limitations for the printable area, you will need to transfer those to the mesh UVs. For example, if you have a printable area of 3cm x 2cm on a water bottle, you would add resolution at the size of the printable area and add only the printable faces to the UV set normalized from 0-1 in U and V. This means you will only map to a printable area and anything set in the dynamic texture will fall only within the printable area. You would do this for every printable area on your product.
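Since the printable-area UVs are normalized 0-1 while the physical area has a fixed size, the dynamic texture’s resolution should be derived from that physical size so the ratio survives into production. A rough sketch in plain JavaScript (the DPI and cm values are illustrative assumptions, not from this thread):

```javascript
// Sketch: derive the dynamic-texture resolution for a printable area
// from its physical size, so the texture ratio matches production.
const CM_PER_INCH = 2.54;

function printableAreaResolution(widthCm, heightCm, dpi) {
  return {
    width: Math.round((widthCm / CM_PER_INCH) * dpi),
    height: Math.round((heightCm / CM_PER_INCH) * dpi),
  };
}

// A 3cm x 2cm printable area at an assumed 300 DPI:
const res = printableAreaResolution(3, 2, 300);
```

Anything the user draws into a texture of this resolution then lands, via the normalized UV set, exactly inside the printable faces.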

You will also need to add some information about the size of the printable area. This could be handled in the metadata of the glTF, in the extras on the mesh, or in an external file with the printable areas for all of your meshes. This is so you can make sure you apply the correct ratio to the image when it is applied to the mesh. You could also scale your plane to match the ratio of the printable area to help with the UX, so the user can identify what area they are working with. But then you would also need to apply that ratio to the image displayed on the plane as well.
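As a sketch of what that metadata might look like and how it would be used, here is a hypothetical `extras`-style object plus a helper that fits an uploaded image into the printable area without distorting its ratio (all field names and sizes here are illustrative, not a standard):

```javascript
// Hypothetical printable-area metadata, as it might appear in the
// glTF "extras" of a mesh (field names are made up for illustration).
const meshExtras = {
  printableArea: { name: "front", widthCm: 3, heightCm: 2 },
};

// Fit an uploaded image into the printable area while preserving its
// aspect ratio, so the preview matches what production will print.
function fitImageToArea(imgW, imgH, areaW, areaH) {
  const scale = Math.min(areaW / imgW, areaH / imgH);
  return { width: imgW * scale, height: imgH * scale };
}
```

The same ratio would also drive the editing plane mentioned above, so the user sees the area in its true proportions.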

The last part would likely be a node material to blend the added texture with the base material. You will need to ensure that there is an alpha value on the image added to the base material to correctly blend them, but if you allow users to upload images, you would want to walk them through the right kind of file to use (like a transparent PNG) as well as suggestions for image size, etc.
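The per-pixel blend such a node material would perform is the classic “over” operation, using the graphic’s alpha to decide how much of the substrate shows through. A minimal sketch in plain JavaScript (colors as 0..1 arrays):

```javascript
// "Over" compositing: where alpha is 0 the base material shows,
// where alpha is 1 the printed graphic covers it completely.
function blendOver(baseRgb, graphicRgb, graphicAlpha) {
  return baseRgb.map((b, i) => graphicRgb[i] * graphicAlpha + b * (1 - graphicAlpha));
}
```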

It seems to me like you are most of the way there and just need to build out some more capabilities into your product meshes and the UX, but you are on the right path. I hope this unblocks you but please don’t hesitate to ping back if you have more questions.


Thanks a lot @PatrickRyan for your insight, which confirms my initial approach, and sorry for disturbing your holiday (hope you had a good time :beach_umbrella:)

So, to summarize the above ‘design-thinking’ phase, if I understand correctly, it would sound something like this:

  1. dynamicTexture would be used for the edit/printable area. The user would interact with (a buffer of) the dynamicTexture. All imported images and text would be drawn to the dynamicTexture, which would have a preset resolution and a ratio matching the production file.

  2. Either UV sets or separate mesh/submesh would be used for applying the textures on the 3D model.

  3. Models would include either metadata or an external parameters file to handle UV projection and limits.

  4. The ‘drawing’ 2D canvas/plane would display a mask/overlay of the printable area or would simply exclude anything that is not in the printable area/UV set for printing.

  5. Depending on the choice between using a mesh/submesh for the printable area or working with UV sets, a node material would do the final blending for the 3D representation.

  6. In terms of UX, CX and requirements for production, a choice will need to be made between two options:
    6a) In a production environment, where the user/client is asked to deliver the final, production-ready version of the design (without intervention), constraints on image size, resolution and format would apply and should trigger conditions for accepting or rejecting user inputs.
    6b) In a ‘preview’ environment, where the user/client submits ‘a project’ and ‘a draft’, further pre-production handling of the order needs to be implemented, requiring human intervention to check the order and the assets delivered by the client, normalize them for production and, if necessary, contact the client to receive a suitable asset for production or to get approval to normalize the delivered asset. From my experience, this is something that can be done when working with B2B, corporate clients. It does not seem suitable for private clients (my opinion only).
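The accept/reject conditions from option 6a can be sketched as a simple validation step. The concrete limits below (allowed formats, minimum resolution, maximum file size) are placeholder assumptions that would come from the actual production requirements:

```javascript
// Sketch: validate a user upload against assumed production
// constraints before accepting it into the editor.
function validateUpload(file) {
  const errors = [];
  if (!["image/png", "image/svg+xml"].includes(file.mimeType)) {
    errors.push("unsupported format; please upload a transparent PNG or SVG");
  }
  if (file.width < 600 || file.height < 600) {
    errors.push("resolution too low for print");
  }
  if (file.bytes > 10 * 1024 * 1024) {
    errors.push("file larger than 10 MB");
  }
  return { accepted: errors.length === 0, errors };
}
```

Rejections would surface the error messages to the user, matching the "trigger conditions for accepting or rejecting user inputs" idea above.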

Yes, either by having multiple meshes for your inserts or multiple UV sets to apply to your submeshes.
Or by using a node material (together with the metadata or an external file with coordinates) to create the final blend and positioning of your UV sets/textures.

Thank you @PatrickRyan and @mawa for your help.
Since I now have an understanding of how the project should be done, I can start working on it more seriously.
But I have some questions that seem important for getting this done.

  1. How do I set UVs for the printable area on a real model (I have seen it done on custom meshes but not on a real one)?

  2. How do I set multiple dynamicTextures on a model (I tried to use subMeshes, but this is what I got; I don’t understand how to position a dynamicTexture on a material where I need to provide a position for each subMesh, or I am missing something)?

Understanding this would make the job much easier for me.
Btw, I am not very experienced with Babylon, so if some questions seem easy, it’s just a misunderstanding on my side.

I am not sure I understand your question exactly. But if you are asking how you should deliver the file used for production/printing, it is always a simple 2D layout/texture. It has a size/ratio and a minimal resolution (plus possibly other restrictions, such as a minimal size for type and, of course, color management). Sadly, all of my templates are in archives disconnected from the network, since I haven’t done anything similar in the past 5 years. But basically, your client - the printer of these merch items - should have templates for each product similar to these (quickly gathered from the web):


So, in the case of this mug customization, you are supposed to deliver a file/image from the dynamicTexture with a size of 19x8cm at a resolution that is ‘undefined’ here (say 200 lpi). You will have some markers (crop marks) to help align it on the object. You do not care about UVs for your production file. No matter the shape of the object, the production file is always 2D. Eventually the printing app will translate it to 3D, but that’s rather fancy (and not your problem).
Height, width and resolution are all you need to know (plus where to place your markers for production, just outside the printing area).
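Putting those numbers together: the production file is just height, width and resolution, plus a margin outside the printing area where the crop marks sit. A sketch in plain JavaScript (treating the resolution as dots per inch for simplicity; the 0.5cm margin is an assumption):

```javascript
// Pixel dimensions of a production file: the printing area itself,
// plus a margin around it for the crop marks.
function productionFileSize(widthCm, heightCm, dpi, markMarginCm) {
  const px = (cm) => Math.round((cm / 2.54) * dpi);
  return {
    printWidthPx: px(widthCm),
    printHeightPx: px(heightCm),
    fileWidthPx: px(widthCm + 2 * markMarginCm),
    fileHeightPx: px(heightCm + 2 * markMarginCm),
  };
}

// The 19cm x 8cm mug wrap at 200 dpi with a 0.5cm crop-mark margin:
const mug = productionFileSize(19, 8, 200, 0.5);
```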


I understand that now.
The next thing that I am not sure about is how I use a dynamicTexture here.
Do I have to use a multi-material with subMeshes?
If that’s the case, I am confused about how to position these subMeshes along the plane dynamically.

I don’t think you should use multi-materials. This is possibly where the node material would kick in (if you want to work with submeshes and UV sets).
But first, let’s look at another case from the ‘reverse-design’ thinking, for an object that has multiple printing areas. I’m gonna avoid clothing for now, although it basically works the same on the production side. Fact is, you do not necessarily print everything in a single pass. Quite often, depending on your printing machine, you can’t. And, on the user side, if you presented the entire unwrap, the user would have to rotate things around to match the direction of each face.

Say you have a ‘premium’ pen customization, where you can customize the front and back. If you were to make this a single dynamicTexture, the user would need to align their logo or text facing up on the front side and, on the back side, rotate everything 180°. That’s OK for people working in production but not for the end user. So, in this case, I would make use of two dynamic textures/inserts. The user customizes the front and the back separately. Then the node material (or separate meshes for both inserts) would make for the final 3D representation. On the production side, two files would be delivered from the two dynamicTextures (one for the front and one for the back), which I believe is the common way of delivering this (though you would need to check this with your client first).
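The ‘rotate the back side 180°’ step can be hidden from the user by doing it when exporting the production file. As a toy illustration in plain JavaScript (modeling the texture as a row-major 2D pixel array; a real implementation would rotate the canvas or image data instead):

```javascript
// Rotate a 2D pixel grid 180 degrees: reverse the row order, then
// reverse each row. The user edits "facing up"; this runs on export.
function rotate180(pixels) {
  return pixels.slice().reverse().map((row) => row.slice().reverse());
}
```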


I would also separate each printable area and make only one of them active at a time.
So I will need a material for each face, but I can provide only one dynamicTexture per material.
What if we have a larger product and the customer would like to insert more images in a particular area? What should I use to let the user provide another dynamicTexture to the same material of that area?

An example being worth a thousand words, I will try to find some time tomorrow morning to start a rough example/PG. I cannot commit, but I will try…

I already did something similar in PG if you need it.


Oh, that’s great. Sure helps to already have this base :smiley:

I’ve done this before and have seen other apps like it. Normally, you handle the manipulations on a canvas element. You can use 3rd-party engines for this that make working with the 2D canvas much simpler.

You can add as many layers of images or vector objects or text etc into such an engine for the target canvas.

Engines like:

pixijs
easeljs

That canvas is then always just pushed/updated into the 3D engine as a single texture for a single channel of a material.

I’ve used both of these engines before for this exact purpose :wink:


Did I understand you correctly?
I can use one of these engines for the canvas, and just provide that canvas as a texture to the mesh?

yup :wink:

I never knew about anything like that.
Thank you, I will try it.

@babyloner, in reading the latest replies, I was reminded of another thread that was asking similar questions. In this case the problem was adding a custom decal on a skateboard deck, but the problem is largely the same here in terms of blending the multiple textures together. I went through some of the UVing considerations and worked up a simple node material to show how to blend multiple textures together. You would still use the dynamic texture approach from before and assign those textures to the texture block in your node material.

The reason to blend these all in a shader is that you still want to show the material of the object you are putting your image on. In @mawa’s example of the mug or pen, you can still see the substrate material in the negative areas of the image like the counters in the type or the negative space in the logos. If you were to use a multi-material, you would still need to have your substrate textures blended, so it literally does no good to use it as it can’t blend the substrate material and your graphic together on its own.

You could float geometry slightly off the base model and just apply your dynamic texture to that, but you would be met with a lot of overdraw costs and depending on how close your printable areas are to one another, potentially some sorting issues.

Blending your base material and your printed graphic in the shader will also allow you to do things like change the roughness of your base material in the silhouette of your graphic so that you can simulate ink on a glossy surface for example. This will benefit the specular highlights of your render and make the graphic feel like it’s applied to your product as it would in the real world. Since you have control over each channel into the lighting calculations, you could simulate a foil stamp by controlling metallic values, you can control specularity by controlling roughness, and you obviously control the base color by blending your graphic on top of the base color. All of these parameters will make your product feel more real to the user. I hope this helps clear up some of your questions.
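Those per-channel ideas can be sketched as plain functions (the ink and foil constants below are illustrative assumptions, not values from the thread; in practice this logic would live in the node material itself):

```javascript
// Linear interpolation, the workhorse of per-channel blending.
const lerp = (a, b, t) => a + (b - a) * t;

// In the silhouette of the graphic (alpha > 0), pull roughness toward
// a matte "ink" value and metallic toward a "foil stamp" value.
function blendChannels(base, graphicAlpha) {
  const INK_ROUGHNESS = 0.6; // assumed matte ink on a glossy substrate
  const FOIL_METALLIC = 1.0; // assumed fully metallic foil
  return {
    roughness: lerp(base.roughness, INK_ROUGHNESS, graphicAlpha),
    metallic: lerp(base.metallic, FOIL_METALLIC, graphicAlpha),
  };
}
```

Outside the graphic (alpha 0) the substrate’s values pass through untouched, which is what keeps the base material visible in the negative space.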


You are right, I need my 3D products to be as realistic as the real products.
Since I haven’t used node materials, I will have to take a look, but I have already seen some threads about multiple textures in node materials, so it will be worth trying.
Is the texture block what you are talking about?
Also, I saw some examples and I am wondering how you position the texture on the mesh (I was looking at this example)?

Yes, it is.
Since you now have two of the best specialists following your case, I won’t add to it at this moment. I shall keep an eye on it anyways and you can cc me if you’d like to have my input.
Meanwhile, have a great day :sunglasses:


Is there any example of how you provide the pixi container as the texture of a mesh?