Parallax / heightmap encoding

Does anyone know how to encode the height from a heightmap into the alpha channel of the normal map for displacement?

Yo @Deltakosh or @sebavan

Do we just average the RGB from the original heightmap image to the single alpha channel of the normal map ???

Example:

average grayscale = (image.r + image.g + image.b) / 3;

OR

weighted luminosity grayscale = (image.r * 0.3F) + (image.g * 0.59F) + (image.b * 0.11F)

Then encode the grayscale into the normal map alpha channel and set material.useParallax = true and material.parallaxScaleBias = 0.02 (min 0.005, max 0.08)
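As a rough sketch of that packing step on the toolkit/CPU side (a hypothetical helper, assuming 8-bit pixels as plain (r, g, b) tuples rather than any particular image library):

```python
def pack_height_into_alpha(normal_px, height_px):
    """Combine a normal-map pixel with a heightmap pixel.

    normal_px / height_px are (r, g, b) tuples of 0-255 ints.
    Returns an RGBA tuple where alpha carries the grayscale height.
    """
    r, g, b = height_px
    # Weighted luminosity grayscale (Rec. 601 weights, as above)
    height = round(0.3 * r + 0.59 * g + 0.11 * b)
    return (*normal_px, height)

# Usage: mid-gray heightmap pixel packed behind a flat normal
rgba = pack_height_into_alpha((128, 128, 255), (128, 128, 128))
```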

Am i on the right track ???


Sounds about right to me !!!

I’m trying to add support for material heightmaps to the toolkit, so I need to be sure how to encode a heightmap image into the alpha channel of the normal map … just like the normal maps used in the following sample: BabylonJS - Parallax Mapping - Tutorialspoint

Does anyone know how the a2.png normal maps were generated ???

I think a lot of people are using Substance for that, so maybe you can have a look in this direction?

It can help too: Height map creation (see replies in this thread)

Also, to be sure, what you could try is to use Spector to analyze a frame rendered by Unity that uses parallax mapping, and see what height map texture is sent to the shader, as well as the shader code itself.

I looked at Substance too… but I can’t tell what is being done when they say “use height from alpha channel”. I was hoping to get a definitive answer as to how to actually pack the height into the normal map alpha channel using some kind of graphics API, so I can add support to the material export from Unity when a height map is specified on the material. I already use a shader in Unity for exporting all texture images from Unity image file formats to either PNG/KTX2.

I added the _HeightMap tex2D field that holds the separate black and white image. Since I am just guessing at what it means to pack the height map color into the alpha channel, I am taking the average of all the height map channels and using that value as the alpha for the normal map to use for parallax. Here is the Unity HLSL shader code I use to generate normal maps for exported content.

sampler2D _BumpMap;
sampler2D _HeightMap;
float _GammaOut;

fixed4 frag (v2f i) : SV_Target
{
	float4 col = tex2D(_BumpMap, i.uv);
	float4 map = tex2D(_HeightMap, i.uv);
	// Encode the average of the height map channels
	float height = (map.r + map.g + map.b) / 3.0;
	// If a texture is marked as a normal map
	// the values are stored in the A and G channel.
	float4 result = float4(col.a, col.g, 1.0, height);
	return result;
}

As you can see, I build the final normal map color from the Unity A and G normal channels plus the averaged height map value… but again, I’m just guessing. I need to know if that’s right or not, and if not, what the calculation is to pack the height map color into the normal map alpha channel ???

In the Unity version I have on my computer (2017), the shader source code contains:

float4 Parallax (float4 texcoords, half3 viewDir)
{
    half h = tex2D (_ParallaxMap, texcoords.xy).g;
    float2 offset = ParallaxOffset1Step (h, _Parallax, viewDir);
    return float4(texcoords.xy + offset, texcoords.zw + offset);
}

So I would say the height is only in the g component.

Regarding the normal, it seems Unity is using this function to unpack the data read from the texture:

half3 UnpackScaleNormalRGorAG(half4 packednormal, half bumpScale)
{
    #if defined(UNITY_NO_DXT5nm)
        half3 normal = packednormal.xyz * 2 - 1;
        #if (SHADER_TARGET >= 30)
            // SM2.0: instruction count limitation
            // SM2.0: normal scaler is not supported
            normal.xy *= bumpScale;
        #endif
        return normal;
    #else
        // This do the trick
        packednormal.x *= packednormal.w;

        half3 normal;
        normal.xy = (packednormal.xy * 2 - 1);
        #if (SHADER_TARGET >= 30)
            // SM2.0: instruction count limitation
            // SM2.0: normal scaler is not supported
            normal.xy *= bumpScale;
        #endif
        normal.z = sqrt(1.0 - saturate(dot(normal.xy, normal.xy)));
        return normal;
    #endif
}

Two cases:

  • UNITY_NO_DXT5nm is defined. Then you should simply do float4 result = float4(col.x, col.y, col.z, map.g); assuming Babylon.js and Unity share the same coordinate system
  • UNITY_NO_DXT5nm is not defined. You could do:
col.x *= col.w;

float3 normal;
normal.xy = (col.xy * 2.0 - 1.0);
normal.z = sqrt(1.0 - saturate(dot(normal.xy, normal.xy)));

// re-pack into the [0, 1] range for storage in the texture
normal = normal * 0.5 + 0.5;

float4 result = float4(normal.x, normal.y, normal.z, map.g);
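Put together, the toolkit-side conversion could look like this (a Python sketch of the math above, assuming normalized 0..1 channel values; pack_dxt5nm_for_babylon is a hypothetical name, not an existing API):

```python
import math

def pack_dxt5nm_for_babylon(col, height_g):
    """col: (x, y, z, w) texel from a Unity DXT5nm-style normal map,
    all channels in [0, 1]. height_g: green channel of the height map.
    Returns an RGBA texel Babylon.js can unpack with t * 2 - 1."""
    x, y, z, w = col
    x *= w                      # packednormal.x *= packednormal.w
    nx = x * 2.0 - 1.0
    ny = y * 2.0 - 1.0
    # Reconstruct z from the unit-length constraint (saturate == clamp to [0, 1])
    nz = math.sqrt(1.0 - min(max(nx * nx + ny * ny, 0.0), 1.0))
    # Re-pack into [0, 1] and carry the height in alpha
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5, nz * 0.5 + 0.5, height_g)
```

For a flat normal (nx = ny = 0) the reconstructed z is 1.0, so the output texel is (0.5, 0.5, 1.0, height), which matches the typical “flat blue” normal map color.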

Babylon.js on its side simply does this to unpack the normal:

textureSample = textureSample * 2.0 - 1.0;

textureSample being the vec3 read from the bump texture.
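The pack step (* 0.5 + 0.5) and Babylon’s unpack (* 2 - 1) are exact inverses, which a tiny sketch can confirm:

```python
def pack(n):
    # [-1, 1] -> [0, 1] for storage in the texture
    return n * 0.5 + 0.5

def unpack(t):
    # what Babylon.js does: textureSample * 2.0 - 1.0
    return t * 2.0 - 1.0

for n in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert unpack(pack(n)) == n
```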

I can see you are using the a and g components of the bump texture for the x and y component. I found a normal unpacking function in the Unity source code that matches this:

half3 UnpackScaleNormalDXT5nm(half4 packednormal, half bumpScale)
{
    half3 normal;
    normal.xy = (packednormal.wy * 2 - 1);
    #if (SHADER_TARGET >= 30)
        // SM2.0: instruction count limitation
        // SM2.0: normal scaler is not supported
        normal.xy *= bumpScale;
    #endif
    normal.z = sqrt(1.0 - saturate(dot(normal.xy, normal.xy)));
    return normal;
}

However I couldn’t see this function called by any code, whereas UnpackScaleNormalRGorAG was…

In any case, if it’s the right function for your case, you should still unpack the z component as it is done in the function and not fill it with 1.0.

Warning: the Unity shader code is probably different in the latest versions, you should look for parallax / UnpackScaleNormal in the shader source code and see what they do (I found the source code in Editor/Data/CGIncludes). Or you could use RenderDoc (I said Spector above but I don’t think Unity does rendering in WebGL?) to look at the shader code used when rendering a scene with parallax bump mapping.

Wow… now that’s what I’m talkin bout. Thanks bro.

So basically the green channel is the only thing used for height from the entire black and white heightmap image… that is good to know. Thanks again for the clarification.

Regarding the normal, from all my exporting experiments, the resulting normal image always looked and worked better in Babylon if I only take the A and G channels from the normal texture… assuming the texture was marked as a normal map.

I will play around with the defines you mentioned and try those calculations… but my normals looked ok before, using only the A and G channels. I just wanted to know how to pack the height that Babylon is ultimately going to use for parallax… you’re saying it’s just the green value from the height map, without combining the channels in any way, and the other channels are simply ignored… right?

Again, thanks for helping and re-explaining. I just want to make sure it’s working right before I release the toolkit for all others to use as gospel.