which is added to the direction of my camera to give me a ray direction; without it, the distortion effect seems greatly exaggerated. Maybe I need a transform calculation too or something?
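One common way to get a correct per-pixel ray direction (rather than offsetting the camera's forward vector, which distorts toward the screen edges) is to unproject the pixel through the inverse view-projection matrix. Here is a minimal sketch of that idea; the function names, the row-major matrix layout, and the use of an inverse view-projection matrix are my assumptions for illustration, not code from the thread:

```python
# Hypothetical sketch: per-pixel world-space ray direction by unprojecting
# the pixel's NDC coordinates through the inverse view-projection matrix.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def unproject(inv_view_proj, ndc_x, ndc_y, ndc_z):
    """NDC point -> world-space point (perspective divide included)."""
    x, y, z, w = mat_vec(inv_view_proj, [ndc_x, ndc_y, ndc_z, 1.0])
    return [x / w, y / w, z / w]

def ray_direction(inv_view_proj, cam_pos, ndc_x, ndc_y):
    """World-space unit ray direction for the pixel at (ndc_x, ndc_y)."""
    far_point = unproject(inv_view_proj, ndc_x, ndc_y, 1.0)  # point on the far plane
    d = [f - c for f, c in zip(far_point, cam_pos)]
    length = sum(v * v for v in d) ** 0.5
    return [v / length for v in d]
```

The same math translates directly to a fragment shader: build the ray once per pixel from the pixel's NDC coordinates instead of adding a screen-space offset to the camera direction.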
Note that there are still some problems with the depth computation even with that change: if you rotate the scene, you get a bad intersection between the two spheres (I moved the second sphere 1 unit down so that it intersects the first one):
It should look something like this instead (this is the scene as first rendered, before any moving/rotating):
The key point was to notice that all calculations are done in world space, whereas the depth read from the depth map, against which we check our computed depth, is in screen space. So we have to project the world-space point found through ray marching into screen space to be able to extract the right z value for the comparison.
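The projection described above can be sketched as follows. This is only an illustration of the math, not the actual shader code: the helper names are hypothetical, the matrix is a standard OpenGL-style perspective projection, and the final remap assumes the depth buffer stores NDC z mapped to [0, 1]. Depending on how your engine writes its depth map, the stored value may instead be a linear view-space depth, so the remapping step would differ:

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def make_perspective(fov_y, aspect, near, far):
    """Standard OpenGL-style perspective matrix (row-major), for illustration."""
    f = 1.0 / math.tan(fov_y / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def world_to_screen_depth(world_pos, view_proj):
    """Project a world-space point and return a depth value in [0, 1]."""
    x, y, z, w = mat_vec(view_proj, [world_pos[0], world_pos[1], world_pos[2], 1.0])
    ndc_z = z / w             # perspective divide: clip space -> NDC, z in [-1, 1]
    return ndc_z * 0.5 + 0.5  # remap to [0, 1] to compare against the depth buffer
```

With this, a point sitting on the near plane maps to depth 0 and a point on the far plane maps to depth 1, which is the value range a conventional depth buffer holds.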
What I did:
removed the references to camMinZ and camMaxZ from rayMarch, because its calculations are done in world space, whereas camMinZ / camMaxZ are view-space values
modified render to calculate the right z value for comparison with depth in the main function
modified applyFog to take the distance value “as is” (dist is now already between 0 and 1)
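The steps above boil down to a depth test followed by fog blending on a distance that is already normalized. A minimal sketch, assuming a linear fog blend and hypothetical function signatures (the thread does not show the actual bodies of render or applyFog):

```python
# Hypothetical sketch of the depth test + fog step described in the list above.

def apply_fog(color, fog_color, dist):
    """dist is already normalized to [0, 1]; linear blend chosen for illustration."""
    return [c + (f - c) * dist for c, f in zip(color, fog_color)]

def shade(marched_depth, buffer_depth, marched_color, background, dist):
    """Keep the ray-marched surface only where it is closer than the depth map says."""
    if marched_depth <= buffer_depth:
        return apply_fog(marched_color, background, dist)
    return background  # the rasterized scene occludes the ray-marched surface
```

The important part is that marched_depth and buffer_depth are now in the same space before the comparison, which is what fixes the bad sphere intersection when the scene is rotated.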
I was really close with my script too, I just wasn't making the correct shifts to accommodate for the clipping! You've opened my eyes to which matrix is which now, though.