Automatically generate texture maps from a single image

Hello guys,

Since this is not directly related to Babylon, I'm posting it here as off-topic.

Usually I use Substance Alchemist to create texture maps because it offers an equalizer that removes lighting from the image, it handles the tiling to create seamless textures, and there is an AI feature to convert the image to a material.

I am happy with this process for most situations, but I have to create a huge amount of materials for a project. Therefore I would love to automate these steps to convert a single image into several texture maps (albedo, normal, roughness, …). I know that there is the Substance Automation Toolkit, but I wasn't able to figure out if it fits my needs.

Do you have any experience with generating texture maps, including adjusting the lighting? I am also open to different software.

Best

This certainly falls outside of my expertise, but @PatrickRyan might have a thought or two.


@samevision, I haven't used the Substance Automation Toolkit, but it appears that it is only compatible with Designer. I couldn't see anything in the docs that implied that Alchemist is scriptable either. There is definitely no easy solution for this, and I am not sure an automated solution is possible at this point either. This is definitely a question for the Substance Forum. Wish I could have been more help.

Maybe you could try a little free piece of software called "Materialize". Available here.

Tutorials:

1
2

Only available for Windows 10, I believe.

Stay Safe All, gryff :slight_smile:

@PatrickRyan

Too bad that the Automation Toolkit doesn't work for Alchemist. Am I the only one who has to create PBR textures for a huge number of materials? :thinking:

@gryff

Thanks for the link. Do you know if it's possible to use it via the command line, Python, or something similar?

---

Since the materials are just marbles, I should be fine with normal and roughness maps. I don't know if that makes it easier.

Sorry @samevision, I'm not a coder. I just find tools I think might be useful and use them as packaged. Maybe contact the developer at GitHub - BoundingBoxSoftware/Materialize: Materialize is a program for converting images to materials for use in video games and whatnot.

If you have questions, maybe email support@boundingboxsoftware.com.

And there are methods of automating some processes, I believe. It does create normal and smoothness images.

Stay Safe, gryff :slight_smile:


@gryff

Good idea, I will contact him. Maybe it's possible with Materialize.

@PatrickRyan

Do you have an idea why the roughness maps I get from the Blender glTF export are orange/red instead of black/white? Does this make a difference in any way?

@samevision, I'm happy to look into it, but I will need a repro; otherwise I would just be guessing at what you are seeing.

Substance, and any other software, generates a roughness map like this:

If I import and use it in Blender and export it to glTF with separated textures, it looks like this:

@samevision, what you are looking at is a channel-packed texture. One of the acceptable formats is metal-roughness packed in the R and G channels respectively. The fully white red channel means a metallic material, which is why you are seeing a primarily red image. The green channel, while showing a lot of detail, is mostly smooth, since the values are close to black like in your original texture. Had the material been very rough, your texture would look yellow-orange, with light values packed into both red and green.
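
To make the "channel packed" idea concrete, here is a minimal sketch (not from the thread; it assumes Pillow is installed and uses a placeholder filename) that splits such a texture back into individual grayscale maps so you can inspect each one:

```python
# Sketch: split a channel-packed texture into per-channel grayscale maps.
# Assumes Pillow; the filename is a placeholder, not from the thread.
from PIL import Image

packed = Image.open("marble_metalRoughness.png").convert("RGB")
r, g, b = packed.split()          # each band becomes a single-channel image

r.save("channel_R.png")           # the channel described above as metallic
g.save("channel_G.png")           # the channel described above as roughness
b.save("channel_B.png")
```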

There is one problem that I see with the uploaded image, however, though it may have been introduced by the upload: the image you shared is a JPG file. The reason this is problematic for non-color data, and especially for channel-packed textures, is that JPG block compression is subject to cross-talk between the channels. In essence, the compression looks at the composite of the R, G, and B channels to determine what values to write into the file, which means each map is altered based on data in the other channels. This leads to artifacts in the individual maps that are hard to explain, since the file format assumes all three channels will be displayed together rather than separated the way channel packing separates them.
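
A rough way to check that claim on your own texture is to round-trip the packed map through JPG and PNG and compare one extracted channel. This is a sketch, again assuming Pillow and a placeholder input file:

```python
# Sketch: measure how much a JPG round trip disturbs one channel of a packed
# texture compared to PNG. Pillow assumed; the input filename is a placeholder.
from PIL import Image, ImageChops

packed = Image.open("marble_metalRoughness.png").convert("RGB")
packed.save("packed_test.jpg", quality=85)
packed.save("packed_test.png")

g_from_jpg = Image.open("packed_test.jpg").convert("RGB").split()[1]
g_from_png = Image.open("packed_test.png").convert("RGB").split()[1]

diff = ImageChops.difference(g_from_jpg, g_from_png)
print("min/max error introduced in the G channel:", diff.getextrema())
```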

To prevent this, you will want to use PNG for any non-color data files (roughness, metallic, normal, etc.). If you are running into size problems with your files, you can use KTX compression in UASTC mode, which compresses without cross-talk. The ETC1s mode will create a smaller KTX file, but it is again subject to cross-talk in its compression, so it is not a good candidate for non-color data.


Hello @PatrickRyan

Thank you for this detailed explanation. This is very helpful.

  1. It makes total sense to avoid JPG if the format messes up the channel values. But this is just a problem for multi-channel textures, am I right? A normal roughness map like the black/white one should be fine as a JPG?

  2. You said that since the R channel is completely white, it's a metallic material. But this is a marble material, so it's not metallic at all. I will check if I did something wrong in Blender.

  3. So you could say it's a metallic/roughness map in one file. Is it fine to place it in pbr.metallicTexture, or do I have to do more?

  4. I wanted to take some time this weekend to learn more about GPU texture compression. Of course I want to make the file size as small as possible. What about .basis? Does it compress without cross-talk?

Thank you!

Best

I was able to find a solution. As @PatrickRyan mentioned, it is possible to automate Substance Designer with the Automation Toolkit. So I tried to recreate in Designer the same effect I was using in Alchemist. That was way easier than expected because there are nodes to equalize lighting, remove seams, and so on.

So you have to create a graph, export it as an .sbsar, and generate all texture maps with sbsrender.
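
For anyone looking for the scripting side of that, here is a rough sketch of driving sbsrender from Python in a batch. The .sbsar path, source folder, and the input identifier ("input_image") are assumptions for illustration; verify the flag names against `sbsrender --help` for your Automation Toolkit version:

```python
# Sketch: batch-render texture maps from an .sbsar using the Substance
# Automation Toolkit's sbsrender CLI. Paths and the input identifier
# ("input_image") are assumptions; check `sbsrender --help` for exact flags.
import subprocess
from pathlib import Path

SBSAR = "marble_material.sbsar"          # graph exported from Designer
SOURCE_DIR = Path("source_photos")       # one photo per material
OUT_DIR = Path("baked_textures")
OUT_DIR.mkdir(exist_ok=True)

for photo in sorted(SOURCE_DIR.glob("*.png")):
    subprocess.run(
        [
            "sbsrender", "render",
            "--inputs", SBSAR,
            "--set-entry", f"input_image@{photo}",
            "--output-path", str(OUT_DIR),
            "--output-name", f"{photo.stem}_{{outputNodeName}}",
        ],
        check=True,
    )
```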


@samevision, the question about texture compression isn't straightforward, as there are many considerations you need to weigh to choose the correct format. Through our internal testing of KTX, we have determined that for color data ETC1s gives the best combination of file size and an acceptable level of artifacts, mostly in areas of hard color transition. For non-color data, UASTC does a good job of eliminating the artifacts you will see with ETC1s, at the cost of a larger size in memory. But from our tests, using ETC1s for non-color data like a normal map produces very poor results, so the smaller footprint in memory isn't actually worth it.

Of the Basis formats, PVRTC1 and ETC1/ETC2 will perform similarly to ETC1s in that you will have a lot of artifacts to deal with in non-color data. The remaining option is BCn, which has a couple of variants. BC5 is designed for normal maps, but only takes two channels as greyscale. They are compressed separately from one another, so you can use RG normal maps and calculate the B channel in your shader. The drawback here is that we have not implemented support for 2-channel normal textures in our standard materials yet, as there has not been a lot of call for it. You can, however, create your own shader with node material to support this kind of normal texture format.
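
As an aside, the B-channel reconstruction mentioned above is just math. A sketch of that step in NumPy (the shape and the [0, 1] encoding are assumptions; in practice this calculation would live in the shader):

```python
# Sketch (NumPy assumed): rebuild the missing B channel of a 2-channel
# tangent-space normal map, i.e. the calculation that would otherwise
# happen in a custom shader.
import numpy as np

def reconstruct_normal_b(rg):
    """rg: float array in [0, 1] with shape (H, W, 2) holding R and G."""
    xy = rg * 2.0 - 1.0                                        # unpack to [-1, 1]
    z = np.sqrt(np.clip(1.0 - np.sum(xy * xy, axis=-1), 0.0, 1.0))
    return (z + 1.0) * 0.5                                     # repack to [0, 1]
```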

That format wouldn't work for a three- or four-channel data pack, however, so there is only one other BCn format that may work. The BC7 (mode 6) option, while meant for HDRI compression, could be used for non-color data that needs more than two channels. The reason is that most of the BCn methods use a single color-space line per block to determine what colors are represented: each pixel becomes a position along the line defined by two endpoints in the block. This is problematic when there are hard shifts in color within the block, i.e. if you have more than two hues within the block that don't fit on the color-space line, they will be changed to colors that do lie on the line. This happens commonly in normal textures, so any format that only supports one color-space line per block is a non-starter for normal maps.

BC7, however, supports multiple color-space lines per block, so it can handle widely changing color pixels within the same block. The drawback is that mode 6 of BC7 links the color channels and the alpha channel, so there will be some cross-talk, but the support for multiple color-space lines per block reduces some of the artifacts that cross-talk would otherwise cause. Specifically for normal maps, though, BC5 compression of a 2-channel normal texture (or another 2-channel data-packed texture) will be best, as both channels are compressed separately. But you will need to create custom shaders to handle them.

There is one last consideration, however, and that is compatibility. BCn requires Direct3D 11 support and, on the Mac, is only available on devices that support Metal. The other Basis formats are native to most Apple products, but aren't great for non-color data. So you really do need to look at several levers when choosing which compression scheme to implement, and do a lot of testing on target devices to know what is best for your application. Hope this helps.


Well, that explains a lot for me, and perhaps this needs to be escalated to the Exporter development team. The Babylon Exporter for 3ds Max has always exported our ORM textures as JPG. It may be the same for the Blender (and Maya) exporters with the JPG in question.

There is a distinct issue with the results that sounds aligned with your commentary on "cross-talk": the channels get muddier and lose edge detail, which affects the quality of the visuals when applied.