I feel like I’m still not understanding the issue preventing this. What I know is
a) required functionality is in the standard shader and works with StandardMaterial and PBRMaterial
b) NodeMaterial builds upon existing shaders, e.g. PerturbNormal block activates BUMP define to do its work
I hope this is a correct assumption. In StandardMaterial and PBRMaterial parallax occlusion is activated by two defines: PARALLAX and PARALLAXOCCLUSION. There is a lot of uniform preparation and other stuff, but at its core, the functionality is in the shader, right? Why can’t it be re-used in NodeMaterial? You mention reading multiple times from the normal map, isn’t this part implemented in the shader? If not, why can’t it be replicated in NodeMaterial? And last but not least, why would implementing a PerturbNormalWithParallax node as a stop-gap solution be overkill if it would provide a hacky but working solution to an otherwise unsolvable problem?
I’m sorry for all the questions, but I’m trying to understand the problem while lacking much of the ground-work, which I’m picking up not nearly fast enough.
PS. We need this functionality and we intend to provide a PR, but we need to fully understand both the problem and the reservations about possible implementations to make the work easier. Not looking for free lunches; Babylon.js is a huge free lunch already
Really cool to see this implemented! Is there an example on how to use the parallax offset?
I tried to add it to the UV before plugging it into the albedo texture. But the shader doesn’t compile anymore.
Here is a playground: https://playground.babylonjs.com/#NSG82E#3
There should be another cube that looks like the first. But it isn’t rendered because the shader doesn’t compile. If I connect the UV directly to the albedo texture it works, but of course with no offset.
This is the error:
10:52:28: Shader compilation error:
VERTEX SHADER ERROR: 0:195: 'dFdx' : no matching overloaded function found
ERROR: 0:195: '=' : dimension mismatch
ERROR: 0:195: '=' : cannot convert from 'const mediump float' to 'highp 3-component vector of float'
ERROR: 0:196: 'dFdy' : no matching overloaded function found
ERROR: 0:196: '=' : dimension mismatch
ERROR: 0:196: '=' : cannot convert from 'const mediump float' to 'highp 3-component vector of float'
ERROR: 0:197: 'dFdx' : no matching overloaded function found
ERROR: 0:197: '=' : dimension mismatch
ERROR: 0:197: '=' : cannot convert from 'const mediump float' to 'highp 2-component vector of float'
ERROR: 0:198: 'dFdy' : no matching overloaded function found
ERROR: 0:198: '=' : dimension mismatch
ERROR: 0:198: '=' : cannot convert from 'const mediump float' to 'highp 2-component vector of float'
ERROR: 0:269: 'gl_FrontFacing' : undeclared identifier
ERROR: 0:269: '' : boolean expression expected
You need to help the system a bit so that the parallax code is generated in the fragment shader rather than in the vertex shader (dFdx/dFdy and gl_FrontFacing only exist in fragment shaders), by setting the target of the Add block to Fragment:
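In code, the fix is a one-line property assignment on the block. The sketch below uses a minimal stand-in object and enum so it runs standalone; in a real Playground you would set the same `target` property on the actual Add block of your NodeMaterial, using `BABYLON.NodeMaterialBlockTargets.Fragment`:

```javascript
// Minimal stand-in for BABYLON.NodeMaterialBlockTargets (illustrative only).
const NodeMaterialBlockTargets = { Vertex: 1, Fragment: 2 };

// Stand-in for the Add block of the node material. Every block exposes a
// `target` property that controls which shader stage its code is emitted in.
const addBlock = { name: "Add", target: NodeMaterialBlockTargets.Vertex };

// Force the block (and everything downstream of it, like the parallax code)
// into the fragment shader, where dFdx/dFdy are available.
addBlock.target = NodeMaterialBlockTargets.Fragment;
```

After changing the target, rebuild the material so the shaders are regenerated.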
I think I need some more help here. I find myself at the point again where I need to blend more than one normal. But as soon as I do, the parallax effect is gone. Did I miss something, or do the necessary texture lookups not work with blended normals?
The same happens when I try to compute the parallax height input with an Add node or something similar.
Here is an example where I tried to blend the normal with the same one scaled double. The normal itself blends but the parallax is gone. https://playground.babylonjs.com/#NSG82E#5
It can’t work because the PerturbNormal block expects the normalMapColor input to come directly from a Texture block. That’s because it needs to sample that texture multiple times to achieve the parallax effect.
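To see why multiple samples are needed, here is an illustrative CPU-side sketch of a typical parallax occlusion loop (not Babylon.js’s actual shader code): the view ray is stepped through the height field, and every step is another texture lookup. `sampleDepth(u, v)` stands in for the `texture()` call into the height channel:

```javascript
// viewDirTS: tangent-space view direction {x, y, z}, z > 0.
// sampleDepth(u, v): depth read from the texture (0 = surface, 1 = deepest).
function parallaxOcclusionUV(u, v, viewDirTS, scale, numSteps, sampleDepth) {
  const layerStep = 1.0 / numSteps;
  // UV shift per depth layer, derived from the view direction.
  const du = viewDirTS.x / viewDirTS.z * scale * layerStep;
  const dv = viewDirTS.y / viewDirTS.z * scale * layerStep;

  let currentLayerDepth = 0.0;
  let cu = u, cv = v;
  let currentDepth = sampleDepth(cu, cv);

  // March the ray until it dips below the sampled height field.
  while (currentLayerDepth < currentDepth) {
    cu -= du;
    cv -= dv;
    currentDepth = sampleDepth(cu, cv); // another texture lookup per step
    currentLayerDepth += layerStep;
  }
  return { u: cu, v: cv };
}
```

Since these lookups are generated inside the block’s own code, the block must own the texture; an arbitrary upstream expression (like a blend of two normals) can’t be re-evaluated at shifted UVs.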
@Evgeni_Popov So what could be a possible solution, either short-term or long-term? You’ve mentioned in another thread that there should be only one PerturbNormal block and a BlendNormal block should be used for blending. In the current state of affairs this does not work with parallax occlusion (which is fine, since the other thread predates PO).
Back in the 4.1 days we monkey-patched the shader locally to make this work, so in theory this should be possible somehow, but I can’t say what a good, proper solution would look like instead of a tacked-on hack.
I can’t think of another way to do it than by writing a custom shader and doing it by hand. The current parallax mapping code uses the texture() function in the shader to look up different values in the normal map, but in your case you would need to change that to perform the normal-blending computation instead.
If your normals are static, it would be far easier for you to simply generate an already blended normal map from two (or more) other normal maps as a pre-process.
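Such a pre-process could be a small offline script that blends the two maps texel by texel. A minimal sketch, using the well-known “whiteout” normal-blending formula (function names are illustrative; a real tool would loop over every texel of the decoded images):

```javascript
// Decode an [0,255] RGB texel into a [-1,1] tangent-space normal, and back.
function decode(texel) { return texel.map(c => c / 127.5 - 1); }
function encode(n)     { return n.map(c => Math.round((c + 1) * 127.5)); }

// "Whiteout" blending: sum the XY perturbations, multiply Z, renormalize.
function blendWhiteout(texel1, texel2) {
  const [x1, y1, z1] = decode(texel1);
  const [x2, y2, z2] = decode(texel2);
  const x = x1 + x2, y = y1 + y2, z = z1 * z2;
  const len = Math.hypot(x, y, z);
  return encode([x / len, y / len, z / len]);
}
```

The resulting blended map can then feed the PerturbNormal block directly through a single Texture block, so the parallax sampling keeps working.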
I also found some other quirks that may need a look.
Scaling and rotation of the textures doesn’t work at all if parallax occlusion is active.
If only parallax is used (everything connected but the switch for parallax occlusion off), then the depth part seems to scale and rotate, but the normal does not. Also, the albedo texture can no longer be scaled or rotated.
Given how the parallax thing has been implemented in the NME, you can’t rely on those properties of the texture; you need to apply the transformations directly to the UVs:
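The math behind that UV-side transform can be sketched as follows. This is an assumption-laden illustration, not Babylon.js’s exact texture-matrix code: it scales the UVs and then rotates them around the texture center (0.5, 0.5), which is what you would wire up with Multiply/Rotate2d-style blocks in the node editor instead of using the Texture’s own scale/rotation properties:

```javascript
// Apply scale then a rotation around the texture center to a UV pair.
// angle is in radians; uScale/vScale mirror a texture's tiling factors.
function transformUV(u, v, uScale, vScale, angle) {
  const su = u * uScale;
  const sv = v * vScale;
  const c = Math.cos(angle), s = Math.sin(angle);
  // Rotate around (0.5, 0.5) so the texture spins about its center.
  const ru = c * (su - 0.5) - s * (sv - 0.5) + 0.5;
  const rv = s * (su - 0.5) + c * (sv - 0.5) + 0.5;
  return { u: ru, v: rv };
}
```

Feeding the transformed UVs into both the height/normal texture and the albedo texture keeps everything (parallax included) consistent, since the parallax code only ever sees the already-transformed coordinates.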