I would like to understand the precision of the position and normal textures in the G-buffer. In the Playground, my camera's maxZ is 20000 and minZ is 0.1. I'm using the WebGPUEngine and have inverted the depth buffer. Rendering the scene with a post-process that reads the G-buffer position and normal textures produces fine results there. However, in my project, as the number of meshes increases, the post-processed scene shows severe aliasing when reading those same G-buffer textures. I want to know whether the precision of the G-buffer textures is related to the number of meshes in the scene.
I don't know if it's related to your issue, but just in case: it's not recommended to have a high maxZ at the same time as a low minZ. Depth buffer precision is strongly affected by:
- the ratio of zFar to zNear
- how large zFar is
- how close an object is to the zNear clipping plane
Try to keep minZ as high as possible and maxZ as low as possible. If some objects are seen at a distance of 0.1, it's unlikely that you also need to display objects at a distance of 20000. For example, if it's a skybox, you are better off setting it to infinite (skybox.infiniteDistance = true) and parenting its position to the camera so that it stays at infinity.
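To put rough numbers on that ratio, here is a small sketch (plain JavaScript, not a Babylon API) estimating the eye-space step a 24-bit depth buffer can resolve under a standard (non-reversed) perspective projection. With reversed-z and a float depth buffer, as you have, the distribution is much better, but the near/far ratio still dominates fixed-point precision:

```javascript
// Rough eye-space depth resolution of a b-bit depth buffer at eye depth z,
// for a standard (non-reversed) perspective projection.
// From d(z) = far / (far - near) * (1 - near / z), the smallest depth step
// 1 / 2^b maps back to: dz ≈ (far - near) * z * z / (far * near * 2^b).
function depthResolution(near, far, z, bits = 24) {
  return ((far - near) * z * z) / (far * near * Math.pow(2, bits));
}

// With the settings from the question (minZ = 0.1, maxZ = 20000):
console.log(depthResolution(0.1, 20000, 100));   // ~0.006 units at z = 100
console.log(depthResolution(0.1, 20000, 10000)); // ~60 units at z = 10000 (!)

// Raising minZ to 1 improves precision roughly 10x across the whole range:
console.log(depthResolution(1, 20000, 10000));   // ~6 units
```

So with minZ = 0.1 and maxZ = 20000, a 24-bit non-reversed depth buffer cannot distinguish surfaces tens of units apart at the far end of the frustum.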
Thank you for your reminder, but I am not using the depth texture in the post-processing shader. I am using the normal texture and position texture from the G-buffer. However, from the images, it appears that the precision of these two textures in my project is very low. But the rendered precision in the playground is acceptable.
I added some meshes in the playground, but the post-processed rendering did not change. It is possible that some configuration in my project is compromising the precision of the G-buffer textures.
Just like this, at line 113?
I am wondering if the precision loss is due to the camera frustum being too large or having too many meshes in the scene.
I think I found the problem: it is caused by the camera coordinate offset being too large, or negative. But I don't know how to fix it.
Another image of the spotlight radius:
This could help: Floating Origin Template for BJS 5.x
I would like to understand why this happens. Is it caused by negative coordinates? Or is it because the texture data type is an unsigned float?
It is because moving too far from the origin creates precision issues in the depth buffer.
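To illustrate why distance from the origin matters for the G-buffer textures too, here is a small sketch (plain JavaScript) of how floating-point spacing grows with coordinate magnitude. The half-float row is relevant only if the G-buffer position texture happens to be stored as a 16-bit float format, which is an assumption about the configuration, not a statement about what Babylon does by default:

```javascript
// ULP (spacing between adjacent representable values) at magnitude x,
// for an IEEE float with the given number of mantissa bits:
// 23 for float32, 10 for float16 (half-float) texture formats.
const ulp = (x, mantissaBits) =>
  2 ** (Math.floor(Math.log2(Math.abs(x))) - mantissaBits);

console.log(ulp(3, 23));    // ~2.4e-7 : float32 near the origin
console.log(ulp(3000, 23)); // ~0.00024: float32 at a ~3000 offset
console.log(ulp(3000, 10)); // 2       : half-float at ~3000 -- whole units lost
```

In other words, positions stored in a half-float texture at a ~3000 offset can only be represented in steps of 2 world units, which would produce exactly the kind of large jagged artifacts shown in the screenshots.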
Besides using a floating origin camera, are there any other solutions to address this issue? My scene contains a large number of meshes (around 300), and I am concerned about potential performance issues if the meshes are moved frequently with a floating origin camera.
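For what it's worth, a floating-origin rebase does not have to move meshes every frame. A minimal sketch with plain objects (hypothetical names, not the linked template's actual API): the world is shifted only when the camera crosses a distance threshold, so with ~300 meshes the cost is a rare one-off translation, not a per-frame one:

```javascript
// Rebase the world only when the camera strays too far from the origin.
// The threshold below is an arbitrary example value.
const REBASE_DISTANCE = 1000;

function maybeRebase(camera, worldOffset, meshes) {
  const d = Math.hypot(camera.x, camera.y, camera.z);
  if (d < REBASE_DISTANCE) return false; // nothing to do most frames

  // Shift every mesh so the camera ends up back at the origin; this runs
  // only when the threshold is crossed, not every frame.
  for (const m of meshes) {
    m.x -= camera.x; m.y -= camera.y; m.z -= camera.z;
  }
  // Track the accumulated shift so absolute positions can be recovered.
  worldOffset.x += camera.x; worldOffset.y += camera.y; worldOffset.z += camera.z;
  camera.x = camera.y = camera.z = 0;
  return true;
}
```

In a scene graph you could also parent everything under a single root node and translate just that root, which makes the rebase O(1) regardless of mesh count.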
Why do you have backfaceCulling = false for the tunnel and the road, and why are those meshes created as double-sided? To me it looks like the opposite faces of the meshes are z-fighting.
These are not the key issues. As shown in the picture, under normal circumstances this should be a clean spotlight effect, but for some reason there are large jagged edges. It shouldn't be a z-fighting issue either.
The model in this picture was exported from Blender. The model origin is at the zero point, but the vertex coordinates are significantly offset, around -3000.
What are you trying to express?
In Babylon:
In Blender:
Obviously, if the coordinates were exported centered at the origin, this problem wouldn't exist. However, when I move the camera to the end of the road, the problem appears.
In Babylon:
In Blender:
This is all I can see.
Can you isolate the spotlight problem in a Playground with just one large mesh? It would be easier for anyone willing to tackle this issue.