As far as I know, it’s not possible to simulate lens distortion with a single projection matrix, as projection is not linear in this case.
Note that you should use onProjectionMatrixChangedObservable
because you’re updating the projection matrix.
Thanks for the hint, but I think that this observable does not exist. I am going through the details of calculating projection matrices.
What I don't really understand are some of the entries: e.g. _m[0]
would be expected to depend on the near clipping plane, and _m[14]
is expected to be -1. Also, _m[10] has a different sign than expected.
To achieve what I need, it seems to me that one would need to replace the division by w = z
with w = (z - z0), where z0 is the focal point of the lens (if viewed at infinite distance), or more precisely the image of the pupil of the observer looking at the lens.
The z-projection should be modified by setting
_m[10] = A = -(n + f - 2*z0)/(f - n) and
_m[14] = B = -(2*n*f - z0*(n + f))/(f - n),
if I did not make a calculation error. But this does not work.
BABYLON does not use the coordinate system as given in the above video, but seems to be (naturally) OpenGL-based. Yet
shows that _m[14] should be -1, but for some reason it seems to be +1.
Is BABYLON using a different convention than OpenGL?
This is where it gets constructed, which shows that n does not appear in the x and y entries in the matrix:
It does. Just replace onViewMatrixChangedObservable
with onProjectionMatrixChangedObservable
in your PG and you will see your observer is still executed.
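For example, a minimal sketch of such an observer (assuming "camera" is the playground's active camera):

camera.onProjectionMatrixChangedObservable.add((cam) => {
    // fires whenever the projection matrix is recomputed,
    // e.g. after changing cam.fov, cam.minZ or cam.maxZ
    console.log("projection matrix changed:", cam.getProjectionMatrix().m);
});
camera.fov += 0.1; // should trigger the observer on the next frame; simply panning does not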
Here’s the source code of the method that calculates the projection matrix in left-handed mode (default), if it can help:
I tried it and do not see any updates:
Also, the code is the same as the one I referred to above, right? But thanks for finding it as well.
So why is there this significant discrepancy from standard OpenGL projection matrices?
I didn’t say it would solve your problems; I said it existed and that it is better to use it rather than onViewMatrixChangedObservable
if you want to update the projection matrix manually.
So why is there this significant discrepancy from standard OpenGL projection matrices?
Babylon.js uses a left-handed system by default, so we generate a projection matrix accordingly. If you set the scene to right-handed mode, we use this other code:
I meant to say: there are no calls to this callback if you pan and move around in the scene (maybe this does not change the projection matrix?). You can try it in the playground above and you will see no log prints whatsoever.
As for the matrix, I think this is not just a difference in handedness. Changing the handedness may admittedly change some signs, but it does not explain the difference in _m[0] and _m[5], which are supposed to depend on the near clipping plane but don’t.
In fact, this callback is only called when the projection matrix changes (i.e., if you change the camera’s fov, near/far planes, etc.). But if you simply move the camera, the callback will not be triggered.
The mathematics are not the same because we use a fov + aspect ratio, whereas some references define the projection plane with right/left/top/bottom coordinates.
For example, OpenGL Projection Matrix defines the matrix the same way we do (except maybe for a few signs here and there):
So it’s really the same thing.
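For what it’s worth, the two parameterizations coincide for a symmetric frustum: with t = n*tan(fov/2), b = -t, r = aspect*t and l = -r, the x/y entries become 2n/(r-l) = 1/(aspect*tan(fov/2)) and 2n/(t-b) = 1/tan(fov/2), so the near plane cancels out. That is why n does not show up in _m[0] and _m[5].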
Thanks. This explains both points. I am still stuck with this problem but have made some small advances.
The trouble is that (A*z + B)/(z - focus) flips the z-space around the focus inside out, which means that the other side of the clipping planes now matters (i.e. there are two clipping planes very close to the focus point and everything outside of them should be shown). Yet, for some reason I don’t understand, only the part after the focus is shown. Maybe deep in the internals there is something that prevents negative values of w, even if the z-values would end up in the clipping range, which is -1 to 1?
There is probably no really clean solution here, since the focus in the middle does not allow one to get a properly ordered z-buffer. Maybe one could instead use two cameras, one for the area before the focus and one for the area after it. The one before would need a strange projection which shows things bigger the further they are away from the camera, i.e. the closer they are to the focus. This may cause the same issues with negative w values?
This actually seems to work: with
_m[15] = -focus
and
_m[11] = -1
the right effect appears. When you move away from the object, it becomes larger:
This works as expected, including the clipping planes. But it is super important that the far plane is before the focus.
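For illustration, one way to apply such a modified matrix (a sketch only, not necessarily how the playground does it; "camera" and "focus" are placeholders, and freezeProjectionMatrix just keeps the engine from recomputing the matrix):

const focus = 10;                                  // example focal distance
const m = camera.getProjectionMatrix().m.slice();  // copy the 16 entries
m[11] = -1;
m[15] = -focus;                                    // w now depends on the focus as described above
camera.freezeProjectionMatrix(BABYLON.Matrix.FromArray(m));
// camera.unfreezeProjectionMatrix() restores the normal behaviour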
Now there is only one remaining problem: how do I force the renderer to render twice?
Once with another projection matrix for after-focus (still to be constructed, but that should be OK) and once with this before-focus projection matrix? The images should be rendered on top of each other, with the before-focus one overwriting the after-focus one.
Any ideas? Note that the two matrices cannot be linearly joined, since the normalization with w is different!
You can use two cameras and place them in scene.activeCameras
. This will render the scene from both cameras one after the other, and the rendering of the second camera will overwrite the rendering of the first.
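A minimal sketch of that setup (the camera names are placeholders; each camera would get its own modified projection matrix):

const afterFocusCam = new BABYLON.FreeCamera("afterFocus", new BABYLON.Vector3(0, 1, -6), scene);
const beforeFocusCam = new BABYLON.FreeCamera("beforeFocus", new BABYLON.Vector3(0, 1, -6), scene);
// ...apply the after-focus / before-focus projection matrices to each camera here...
scene.activeCameras = [afterFocusCam, beforeFocusCam]; // rendered in this order, the second on top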
Thanks! Sounds great. I will try this. Does a similar scheme also exist for rendering textures?
After all, this is to provide proper “lens” textures for glass materials.
You can render into a specific texture thanks to the RenderTargetTexture
class and then use the texture for whatever you want afterwards. Note that you can also set a texture to camera.outputRenderTarget
to generate the regular scene in a custom texture.
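A rough sketch of both options (sizes and camera names are placeholders):

// Option 1: a dedicated render target drawn every frame with a chosen camera
const rtt = new BABYLON.RenderTargetTexture("lensRT", 1024, scene);
rtt.activeCamera = beforeFocusCam;      // if left empty, the scene's active camera is used
rtt.renderList = scene.meshes.slice();  // meshes to draw into the texture
scene.customRenderTargets.push(rtt);

// Option 2: redirect a camera's regular scene rendering into a texture
beforeFocusCam.outputRenderTarget = new BABYLON.RenderTargetTexture("camOut", 1024, scene);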
Thanks for this information. I finished the “focusing camera”, well rather two cameras with modified matrices, which generate the lens effect authentically:
The left viewport renders the scene ordinarily and the right viewport uses the “focussing camera”, which renders the scene as seen through a lens. When playing with this, don’t be disturbed by occasionally appearing black areas; this is not a bug but actually correct: if the near plane is too big, the ground is viewed partly from below, where it is not lit.
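For reference, such a split view is typically set up with per-camera viewports; a sketch (the camera names are placeholders, not necessarily those used in the playground):

ordinaryCam.viewport = new BABYLON.Viewport(0, 0, 0.5, 1); // left half of the canvas
lensCam.viewport = new BABYLON.Viewport(0.5, 0, 0.5, 1);   // right half of the canvas
scene.activeCameras = [ordinaryCam, lensCam];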
However, the next task would be to somehow marry this with the refractionTexture.
I looked at the RenderTargetTexture
for a bit, but I could not find the supposed six cameras which are responsible for rendering the CubeTexture
. Even the RenderTargetTexture.activeCamera
just seems to be empty.
Another thing which I could not quite work out is how surface normals and positions relative to the local origin of a mesh are treated in the reflection and refraction textures. The documentation does not seem to cover this.
Of course this knowledge is somewhat essential in trying to get a useful mapping to the texture.
E.g. for a lens, I would think that two textures (front and back) should be covered by the above-mentioned “focusing camera”; well, each of them really needs the combination of two such cameras as in the example. For a glass ball you would need 6 such “focussing cams”.
However, all of these cams should behave differently than is currently done. As far as I saw, the orientations of the cams and the generated CubeTexture do not seem to rotate with the mesh. In the case of the lenses, it would be best if they did rotate with the mesh. For a perfectly round glass ball it actually does not matter, but as soon as it is elliptical it does.
As for a future user interface to these features, it would be cool to just specify an optional focal distance for each of the six directions. If zero is supplied, then ordinary cams are used; otherwise the “focusing camera” is used for that CubeTexture plane.
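To illustrate the suggestion with a purely hypothetical API (nothing like "probe" or faceFocalDistances exists today):

// hypothetical: one focal distance per cube face, 0 = ordinary camera
probe.faceFocalDistances = [0, 0, 0, 0, 2.5, 2.5];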
What do you think about this suggestion?
RenderTargetTexture is used to render to a 2D texture. If RenderTargetTexture.activeCamera
is left empty, the currently active camera of the scene will be used to render the texture.
If you want to generate a cube texture, you can have a look at the ReflectionProbe class (but I can see in the thread that you already evaluated that option).
I’m not sure I understand that one: reflection and refraction textures are 2D or cube textures; there are no normals / positions embedded in them. If you want to understand how they are used in the shader calculation, the best way is to look at the standard material (in Shaders/default.fragment.fx
) or PBR shaders (in Shaders/ShadersInclude/pbrBlockReflection.fx
and pbrBlockSubSurface.fx
).
Regarding the refraction effect, it’s on our todo list to evaluate the possibility to use screen space refraction, in the same way we already support screen space reflection.
getCubeTexture()
of the ReflectionProbe actually returns just a RenderTargetTexture according to the docs.
I have no idea how to attach 6 cameras to the faces of a cube, each one responsible for rendering one side.
As for the question about the normals, I don’t quite understand how a cube texture is mapped to a 3D surface and (probably wrongly) assumed that the positions and normals point to a position on the cube and thus decide which texel to use.
But if the normals are not relevant, I understand even less how this is done. A mesh triangle supposedly has 6 orthogonal projection positions, one on each face. Is it projected radially to determine the uv position?
Yes, silly me, I mixed things up! ReflectionProbe
is our class which is generating a cube texture by rendering 6 different views, but the underlying class is still RenderTargetTexture
.
You should look at how it’s done in ReflectionProbe
, this should help you getting started:
It’s a vector that is used to look up the cube texture (there’s a shader function that can do that, you only have to provide the vector). For the reflection (cube) texture, it is the direction of the reflection, and for the refraction texture the direction of the refraction.
Thanks. This is interesting. So there seems to be only the one active camera which is reused to render via this face index to different parts of the 2d texture.
It seems like it is not so easy to somehow specify different cameras, one for each cube-mesh surface.
But at least using one camera should be relatively easy.
But this means, I would have to pack my two Lens-cams into the interface of a single camera.
Is there already a framework, how to make sequential rendering by multiple cams look like a single camera?
As for the reflection vector: maybe I did not write this clearly enough. My understanding difficulty is more about how any texture is mapped to the vertices of a mesh, particularly if it is a cube texture. Is this done via a radial projection from (0,0,0) through the vertex to the uv coordinates?
But, with respect to your single reflection vector, a reflection probe should also (be able to) react to the local normals, I would think, if one wants to mimic close-to-realistic reflections.
No, we don’t have that in the framework, that’s why the source code of the reflection probe could help.
If you build your own class, you don’t have to use a single camera, though; nothing stops you from using multiple cameras and picking the right one depending on the face that is currently rendered.
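A rough sketch of that idea (not the actual ReflectionProbe source; cubeRTT and the six faceCameras are assumptions, and some details may need adjusting):

// a cube render target: its 6 faces are rendered one after the other
const cubeRTT = new BABYLON.RenderTargetTexture("lensProbe", 512, scene, false, true,
    BABYLON.Constants.TEXTURETYPE_UNSIGNED_INT, true); // last argument: isCube
cubeRTT.renderList = scene.meshes.slice();
scene.customRenderTargets.push(cubeRTT);

// just before each face is rendered, push the matrices of the camera chosen for that face,
// similar to how ReflectionProbe sets a per-face view matrix
cubeRTT.onBeforeRenderObservable.add((faceIndex) => {
    const cam = faceCameras[faceIndex]; // faceCameras: assumed array of six (lens) cameras
    scene.setTransformMatrix(cam.getViewMatrix(), cam.getProjectionMatrix());
});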
The mapping coordinates are calculated depending on the texture.coordinatesMode
value:
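For example, for a cube texture lookup you would typically use (assuming a material with a reflectionTexture):

material.reflectionTexture.coordinatesMode = BABYLON.Texture.CUBIC_MODE;
// other modes include SPHERICAL_MODE, PLANAR_MODE, PROJECTION_MODE, SKYBOX_MODE, ...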
In fact, it reacts to the normal because the reflection (and refraction) directions depend on the normal.
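(In formula form: with view direction I and surface normal N, the reflection lookup vector is R = I - 2*dot(N, I)*N, and the refraction vector is computed analogously from N and the index of refraction, so the normal enters both lookups.)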
… still working on the challenge of getting a realistic “lens-probe”, I made some progress:
At the top, a camera class is implemented which makes the two cams appear as one.
I also found out how to attach such a lens-camera to the texture rendering of the cube texture of a reflection probe.
However, the reflection probe somehow looks in the wrong direction, which I tried to compensate for by changing the view matrix. Yet for some reason this does not seem to be used: there are no calls to onViewMatrixChangedObservable after some initial setup, and changing the view matrix in this texture probe seems to have little effect.
Any ideas how to get this part right? I am trying to avoid writing a completely new probe class.
If this is solved, one would only need to somehow update the focal length depending on the distance of the mesh being rendered, and the rendering of a glass ball should do roughly what it should.