Babylon project help

@babyloner, in the example you show, the image is placed through a combination of UV coordinates and how the texture is authored. In the case of the image on the skateboard, the texture is the same height as the rest of the texture set. This means that the graphic, which is smaller in height than the full image, can be placed anywhere within the texture and it will take that position on the deck.

This is the same concept as what you are doing with the dynamic texture, where you are adding a smaller image inside a dynamic texture. This approach is sub-optimal from a memory standpoint, as the image on the deck has a lot of unused pixels that still need to be downloaded and stored in memory. But in the case of the deck, it may be a necessary tradeoff if some versions of the applied image use more of the deck surface. In this case, I believe the end product was not customizable by the user beyond choosing the artwork on the deck, so custom placement was not needed. Additionally, having a template for the entire “artboard” of the deck does streamline the art pipeline, as the art team can use a single template for that deck to create whatever art they need.

In your case, since you have a specific area you are working with, you will need to use your UVs to define the area for the graphics you want to apply. And because images are far better than words, I quickly threw together an example. Let’s say you have a metal tumbler that you want to use as a product and there is a specific imprint area you have to work with.

You will need a metal material to show what the tumbler looks like, so you need to UV the sides with a tiling texture so there is no seam. One approach is to use multiple UV sets to maximize texel density, handling the tiling side walls and the bottom/inside walls of the tumbler separately. Maybe you need an anisotropic pattern on the bottom, for example. You start by UVing your side walls in one UV set and end up with a layout like this, which will allow tiling.

Then create a second UV set for the bottom of the inside and outside with UVs like this:

Then you will create a third UV set just for the imprint area. This will have to be translated from the specifications of the imprint area and you will need to place edges along the perimeter of the imprint area so you can UV the exact area.

Once you have all of the UV sets ready, you can assemble your texture by assigning the correct UV set in the UV block of your node material. There is a drop down in the parameters of the block and it can support up to six UV sets.

This approach will allow you to use the dynamic texture and assign it to the texture block that is being fed the UV set containing your imprint area. The proportions of your imprint area also need to be taken into account, as the dynamic texture will simply fill the 0-1 UV space, so you will want to multiply one of your UV components by the aspect ratio of the imprint area to make sure there is no distortion.
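As a rough sketch of that correction, assuming the node material multiplies the imprint UV set by a Vector2 InputBlock (here named "imprintScale", a placeholder) before it reaches the texture block, you could push the ratio from code like this:

    // Hypothetical imprint dimensions in mm; the block name "imprintScale" is also an assumption.
    const imprintWidth = 90;
    const imprintHeight = 60;

    // Scale one UV component by the width-to-height ratio so the square 0-1 dynamic
    // texture is not stretched. Which component to scale depends on how the imprint
    // UVs were laid out.
    const imprintScale = nodeMaterial.getBlockByName("imprintScale");
    imprintScale.value = new BABYLON.Vector2(1, imprintWidth / imprintHeight);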

I have been working on documentation for UVing for Babylon.js this week and I should have something up in the next few days which should help you here. But let me know if you have any specific questions about the process.


Since I now know how to make products for the software, I have some questions about executing it in code.

  1. If I have multiple printable UVs, I would have to connect each of them to new textures (images). Is there an ID for each UV set that I can get in code so I can specify where I want the image to be?
  2. I saw that you can put an ID on an added texture. How do I reference that texture later in the scene?
  3. If I copy a texture block and paste it with different values, it will not create another image. Am I able to fully manipulate the node material from code (add as many textures as the user wants, change their properties, …)?
  4. Would React.js be the right choice to work with?

I have this basic playground, and I am wondering how I connect the texture to the node material.

I haven’t worked with node materials, so if something seems easy to you, it’s probably just a misunderstanding on my side.

Also, thank you for taking the time to help me.

Edit: By experimenting I found out how to connect the texture. The question I still have about the textures is:
What should I use to connect more images (with Lerp I see left and right; in this NME there are two Lerps)?

@babyloner, here are some tips, but please let me know if anything is unclear.

  1. Your product should have a specific node material made for it. You can generate node materials on the fly in code based on the needs of the product if you want, but it seems cleaner to me to create one node material that can handle a specific range of products. Say, for example, you have a variety of drinkware (mugs, tumblers, water bottles, etc.) that all have two imprint areas based on the dimensions of the product. You can create one node material to handle all of them and pass any ratio needed for the specific product’s imprint areas to the node material when you load the mesh. When you create the node material, you will set the UV sets for each image in the mesh.uv block; they are in the same order you authored them, and when setting up the node material it is easy to see the results as you change UV sets. Then you just need to give your texture blocks the correct names, like “frontImprint” and “rearImprint”, since you will simply assign the correct texture to each texture block in code. You shouldn’t need to reassign any UVs at this point. Just call
    nodeMaterial.getBlockByName("textureBlockName").texture = newTexture

  2. I’m not sure what you are referring to when you say “putting an ID on the added texture.” If you are creating your dynamic texture in code and then assigning it to the node material, you should already have a reference to the dynamic texture. However, if you need to get one of the textures from the node material again in the future for any reason, it’s the same as above:
    const existingTexture = nodeMaterial.getBlockByName("textureBlockName").texture;

  3. I assumed if you were adding multiple graphics to each imprintable area, you would be adding them to the dynamic texture rather than adding multiple dynamic textures. However, if you want to add blocks by code, you certainly can. There are detailed instructions on how to do that at creating a node material using code.

  4. You can certainly use React.js and Babylon.js together. Our GUI Editor uses React.js for the UI layer and contains a Babylon.js canvas within.

I updated your playground with the connections you need. You need to make sure you are specific about the inputs and outputs you are connecting as well as ensuring you have all required blocks for the shader to compile. If you don’t have a mesh.uv block connected to the texture block in your node material, you won’t be able to compile the shader.

In terms of adding textures to the node material, you will need some way to mask out each new image. Lerp is one way. Another is using Logical Operators, which can be used in conjunction with mesh parameters, like in this example, which uses the values of the mesh UVs to separate the faces into colors. There really is no single right way to do this, since you have a lot of variables to work with depending on your pipeline. These are just some options.
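As a rough sketch of the Lerp approach, here is how the fragment side could be wired in code, using the imprint texture’s alpha as the gradient so the graphic only shows where it has coverage. The block names are placeholders, and the vertex side of the material (position, world, viewProjection, vertexOutput) plus nodeMaterial.build() are omitted for brevity:

    // Base texture sampled with the first UV set, imprint sampled with the imprint UV set.
    const uv = new BABYLON.InputBlock("uv");
    uv.setAsAttribute("uv");
    const uv2 = new BABYLON.InputBlock("uv2");
    uv2.setAsAttribute("uv2");

    const baseTexture = new BABYLON.TextureBlock("baseTexture");
    const imprintTexture = new BABYLON.TextureBlock("imprintTexture");
    uv.output.connectTo(baseTexture.uv);
    uv2.output.connectTo(imprintTexture.uv);

    // Lerp between the base color and the imprint color, using the imprint alpha as the mask.
    const imprintLerp = new BABYLON.LerpBlock("imprintLerp");
    baseTexture.rgb.connectTo(imprintLerp.left);
    imprintTexture.rgb.connectTo(imprintLerp.right);
    imprintTexture.a.connectTo(imprintLerp.gradient);

    const fragmentOutput = new BABYLON.FragmentOutputBlock("fragmentOutput");
    imprintLerp.output.connectTo(fragmentOutput.rgb);
    nodeMaterial.addOutputNode(fragmentOutput);

    // Assign the actual textures (the dynamic texture goes on the imprint block).
    baseTexture.texture = new BABYLON.Texture("textures/metal.jpg", scene); // hypothetical path
    imprintTexture.texture = dynamicTexture;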

I hope this helps, but I would also refer you to the Node Material doc page, as there are a ton of video tutorials on many different topics there. Node Material is one of our most covered topics because the tool can do so much. Let me know if you have more questions.


I understand it better now, thank you.
I still have some questions, but as you said, there are a lot of videos to understand the material better.
But since we are in different timezones, I’ll just ask you a quick one:

How would I be able to add multiple images into one texture?

@babyloner, for dynamic textures, you have complete control over how many elements are written. Just as a quick example, I expanded @carolhmj’s example to place two copies of the image side by side on click by writing the image to the dynamic texture again with an offset. This is just a simple example, but you could capture the position of the click for each graphic added to the dynamic texture, and each time you refresh, draw each graphic into the texture. This way you can add as many graphics as you want, but you still only need one texture applied to your node material.
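For reference, the core of that idea looks something like this (a rough sketch; the texture size, image path, and offsets are placeholder values):

    // Create a square dynamic texture and grab its 2D canvas context.
    const dt = new BABYLON.DynamicTexture("imprintDT", 1024, scene, false);
    const ctx = dt.getContext();

    const img = new Image();
    img.src = "textures/logo.png"; // hypothetical graphic
    img.onload = () => {
        ctx.clearRect(0, 0, 1024, 1024);
        ctx.drawImage(img, 100, 400); // first copy
        ctx.drawImage(img, 550, 400); // second copy, offset to the right
        dt.update();                  // push the canvas changes to the GPU texture
    };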

There is some complexity you will need to work around for a good UX, as the user may want to move each graphic around independently, irrespective of the order the graphics were added. You may want some UI around layers so that you can force the user to click on a graphic to make it active for manipulation, which helps you know which parameters need to change. This will also allow you to write the graphics into the correct position and order if you want your users to be able to change the layer stack order of the graphics.
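One way to keep that manageable is to store each graphic as an entry in a layer array and redraw the whole stack every time something changes; drawing in array order means reordering the array reorders the layers. A rough sketch (the entry shape is just an assumption):

    const layers = []; // entries like { image, x, y }

    function redraw(dt) {
        const ctx = dt.getContext();
        const size = dt.getSize();
        ctx.clearRect(0, 0, size.width, size.height);
        for (const layer of layers) {
            ctx.drawImage(layer.image, layer.x, layer.y); // later entries draw on top
        }
        dt.update();
    }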

I hope this helps with some ideas.

You really helped me understand this a lot. You also solved one of my earlier problems with this.

Related to that, here is my next question:
I am trying to find the added images from the different objects in the playground you provided, but I don’t see any array holding the images.
If I had a store for the images, I could give each one an ID and just go through that array to find the chosen image.
What would be the way to find the graphics in the dynamicTexture?

Also, when adding multiple images I should not have ctx.fillStyle = "red"; and ctx.fillRect(0, 0, 1024, 1024); because they don’t let me add more than one image, but if I leave them out, the color of the material is black.
I updated the playground to show you that (the sphere and plane will be invisible unless you click on the plane).

Hi,
I can see you are making great progress :clap: with the awesome, yet sometimes complex and complete :wink:, input and support from Patrick :smiley:.

For the above, without disrupting the process, I think I can safely say that you should have a default material on your plane and object before customization. I believe on the object it would just be the plain material of the uncustomized object (the node material without the added texture blocks). As for the plane/2D drawing canvas, maybe a grid or some markers to show the parts to customize. In any case, I believe a default texture switched to the dynamicTexture upon interaction would be best. I actually think the plane could have an overlay (or better said, underlay) texture still showing the markings/grid. It could be activated or deactivated through a button.
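Something along these lines could do the swap when the user first clicks the plane (just a rough sketch; the block name, texture path, and mesh variable are placeholders):

    // Start with a placeholder grid texture on the imprint texture block.
    const imprintBlock = nodeMaterial.getBlockByName("imprintTexture");
    imprintBlock.texture = new BABYLON.Texture("textures/gridPlaceholder.png", scene);

    // On pick, switch to the dynamic texture used for customization.
    plane.actionManager = new BABYLON.ActionManager(scene);
    plane.actionManager.registerAction(
        new BABYLON.ExecuteCodeAction(BABYLON.ActionManager.OnPickTrigger, () => {
            imprintBlock.texture = dynamicTexture;
        })
    );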
My opinion only. Meanwhile, have a great weekend :sunglasses:

You are right, that was my bad.

I was thinking about doing that. Step by step I will get to it as well.
Thank you for the advice.

@mawa @PatrickRyan
I put together this very simple simulation in the playground; it’s not perfect, but it does the job for a start.

It seems to be working as expected. I can add an image, manipulate the chosen one, and remove an image.
But since I need to do this in React, I started a quick CodeSandbox project:
React codesandbox.
I want to do the same thing I did in the playground: save all new images; on change, clear the drawn images, change a property of the chosen image, and draw each element again.
It’s the same for removing an image: clear the drawn images, remove the image from the array, and draw each element again.
But as you can see, in React it is not working as expected. It seems that I do not get the right array before drawing the images.
What do you think about this approach?
With an answer to this problem I could easily finish the rest of the software.

@babyloner, it’s great to see your progress. Unfortunately, I don’t have experience with React so I won’t be able to debug your code. Maybe @carolhmj might have some ideas.

I’m afraid it’s the same for me. I don’t do React. Hopefully someone will jump in who knows better.

That seems like an issue with your React state. State can be tricky to get right, so you should try to get each step of the state manipulation working completely before moving on to the next one.
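As a rough sketch of what that could look like (assuming the images live in React state and the dynamic texture is redrawn whenever that state changes; drawAllImages and dynamicTexture are placeholders):

    const [images, setImages] = React.useState([]);

    // Functional updates avoid reading a stale copy of the array.
    const addImage = (img) => setImages((prev) => [...prev, img]);
    const removeImage = (id) => setImages((prev) => prev.filter((i) => i.id !== id));

    // Redraw only after the state change has actually been committed.
    React.useEffect(() => {
        drawAllImages(dynamicTexture, images); // clears the texture, then draws each entry
    }, [images]);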


@babyloner and @mawa, I just published a blog post about creating a customizer using dynamic textures including some information about setting up the material and the shader.

For the most part this is an expansion of what I spoke about above, but the playground goes into much more detail about the actual creation and updating of the dynamic texture. I hope this helps you understand the process better and that you are able to take away some tips for your project.


:slight_smile:


WoW! What can I say. This is simply amazing. One could nearly implement it just like that and sell it to a client (how much is your take? :money_mouth_face: :stuck_out_tongue_winking_eye:). Awesome work. Bookmarked :smiley:

Edit: Forgot to mention that I absolutely love your idea of working with layers. Not only does it offer a huge array of possibilities for customization/design, but it also helps anticipate the (inevitable) rules that come as a requirement from the client (and then, in many cases, as a late requirement :sweat_smile:). This is good thinking. Having this already in the base makes for building on solid ground. Of course, as always, my opinion only :smiley:


@mawa, no take for me. The benefit of doing it was for the docs, that and testing some of the features used in the PG, through which we were able to identify and fix a few issues. And as always, please feel free to take this example and repurpose it for whatever you may be working on.
