Can somebody explain to me how exactly the zOffset is added to the final depth? I had a look at the documentation, but it's not described there, and I also couldn't find it in your code (I basically got stuck at NativeEngine::setZOffset()).
At first I thought the zOffset was defined in screen space, so I expected that it would either be passed to the shader as an additional uniform that is added to the screen-space depth, or that the projection matrix would be modified accordingly by multiplying a translation matrix (with the zOffset as its z-translation) onto it - but like I said, I couldn't find anything in the code. After some searching I found in another answer that it's defined between zMin and zMax.
To achieve the rendering effect I'm after, I need to modify the screen-space depth. I can do this perfectly well in my custom shader, but I have other objects in the scene that I would like to render with the StandardMaterial, and there I saw that I could use the zOffset.
For this to behave exactly the same as my custom shader, I need to understand how the zOffset is applied internally - so please help me out, whoever knows how this works.
Thx for the answer! Looking at the documentation it would make more sense to call it with:
gl.polygonOffset(0, zOffset);
because otherwise the offset (which is then a factor scaled by the depth slope of the polygon relative to the screen) would be 0 for polygons parallel to the camera's image plane. Passing the offset as the second parameter would give you a constant offset (which is probably the intention). But since I've never used this before, I might also be wrong and simply not see the use case for your implementation.
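To illustrate what I mean (this is my own toy sketch, not engine code): the OpenGL spec defines the applied offset as o = m * factor + r * units, where m is the polygon's maximum depth slope and r is the smallest resolvable depth difference of the implementation. For a face-on polygon m is 0, so a factor-only call produces no offset at all:

```javascript
// Toy model of the glPolygonOffset formula from the OpenGL spec:
//   o = m * factor + r * units
// m = maximum depth slope of the polygon, r = smallest resolvable offset.
function depthOffset(factor, units, maxDepthSlope, r) {
  return maxDepthSlope * factor + r * units;
}

// Face-on polygon (m = 0): a factor-only offset does nothing.
console.log(depthOffset(2.0, 0, 0, 1e-7)); // 0 → z-fighting still possible

// The same polygon with a units-based offset gets a constant nudge.
console.log(depthOffset(0, 2.0, 0, 1e-7)); // 2e-7
```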
I think what we are doing is ok, but we may also need to support the units parameter. I don't know why we chose to only support the factor value and not the units one (maybe a (back-)compat problem?).
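For reference, my understanding of the current factor-only behaviour boils down to something like this (a simplified sketch based on this discussion, not the actual engine code; applyZOffset is my own name):

```javascript
// Hypothetical sketch of forwarding a material's zOffset to WebGL when
// only the factor argument is supported (units is left at 0).
function applyZOffset(gl, zOffset) {
  if (zOffset !== 0) {
    gl.enable(gl.POLYGON_OFFSET_FILL);
    gl.polygonOffset(zOffset, 0); // factor = zOffset, units = 0
  } else {
    gl.disable(gl.POLYGON_OFFSET_FILL);
  }
}
```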
Since I need to control exactly how large the offset is to match the logic of my custom shader, I have now solved it another way: a per-mesh projection matrix.
The problem with polygonOffset() is that I would need to know the value of "r", which the docs describe as "the smallest value that is guaranteed to produce a resolvable offset for a given implementation". It might be possible to query it somehow, but like I said, I solved it by modifying the projection matrix.
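For anyone curious, here is a minimal sketch of the projection-matrix approach (the helper name and the column-major layout are my assumptions, not a Babylon API): adding epsilon times the w-row to the z-row of the projection matrix turns clip-space z into z + epsilon * w, which shifts the post-divide NDC depth z/w by exactly epsilon.

```javascript
// Hypothetical helper: shift NDC depth by a constant epsilon by folding a
// clip-space z-translation into a column-major 4x4 projection matrix.
// Equivalent to left-multiplying translate(0, 0, epsilon) in clip space.
function offsetProjectionZ(m, epsilon) {
  const out = m.slice();
  for (let col = 0; col < 4; col++) {
    out[col * 4 + 2] += epsilon * out[col * 4 + 3]; // z-row += eps * w-row
  }
  return out;
}
```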
Chiming in here. I found this explanation of the difference between the two parameters helpful:
Based on this explanation of the difference between the two arguments, it seems that units is very important when applying decals, while factor is an additional nudge for heavily sloped polygons. The problem with only using factor (as Babylon currently does) is that if you are facing a polygon head-on, so that dz/dx and dz/dy are zero, factor has no impact and you start seeing z-fighting.
It seems that the most appropriate implementation is for units to indicate the ordering of decals, with factor set to ~1. When a polygon is sloped, the factor value will provide some meaningful ordering, which is why, I think, the current implementation works most of the time - but I am seeing errors that can only be addressed by nudge factors in the geometry transformations.
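To make that concrete, here's a toy calculation (my own sketch; r stands for the implementation's smallest resolvable offset from the spec, and the helper name is mine): with factor fixed at 1 and a distinct units value per decal layer, even face-on polygons (depth slope m = 0) get strictly ordered offsets:

```javascript
// Toy model of o = m * factor + r * units, negated so each successive
// decal layer is pulled slightly closer to the camera.
function decalOffset(layer, maxDepthSlope, r) {
  const factor = 1.0;      // covers sloped polygons
  const units = layer + 1; // distinct per layer: drives the ordering
  return -(maxDepthSlope * factor + r * units);
}

// Face-on surface (m = 0): layers still separate cleanly.
console.log(decalOffset(0, 0, 1e-7)); // -1e-7
console.log(decalOffset(1, 0, 1e-7)); // -2e-7 (closer to the camera)
```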
The current implementation goes all the way back to 2015; I'm a little surprised that issues with it haven't been more common, but perhaps because workarounds are available, no one has thought to address this with a 2nd polygonOffset argument?
Or maybe I’m missing something! I’m a little new to 3D graphics programming. Anyway, hope this helps!
Hey there - I want to follow up on my own comment: for reasons I don't understand, I wasn't able to use the 2nd (units) argument to make decals work more reliably in my case. For my implementation I've kept a small zOffset value in addition to a nudge in the geometry.
So simply exposing the 2nd polygonOffset parameter as a setting is unlikely to be a quick fix for these issues. I don’t know why that is, though.