Refraction of other objects

I am trying to get refraction to look realistic with a few other objects also appearing in the texture, but I can't get it to work.

The trouble I am having is that scene.environment seems to be ignored. The reflection and refraction with the probe do something with the red ball, but the refraction appears in a completely wrong place. You can see it if you look downwards rather than upwards through the sphere.

I also tried it without using PBR, but was not really lucky in that case either. The aim is to make it look like a glass material.

Hello @RainerHeintzmann, how are you doing?

The main problem is that you must add the hdrSkybox to the probe.renderList. Once you do that, you will be able to get the environment texture working with the probe.
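Something along these lines (a minimal sketch, assuming hdrSkybox, redBall, glassSphere and glassMaterial are the objects from your scene):

// Render the skybox and the other meshes into the probe so that
// its cube texture contains them.
const probe = new BABYLON.ReflectionProbe("probe", 512, scene);
probe.renderList.push(hdrSkybox);   // <- the important part
probe.renderList.push(redBall);
probe.attachToMesh(glassSphere);    // render the cube map from the sphere's position

// Use the probe's cube texture for reflection/refraction of the glass material.
glassMaterial.reflectionTexture = probe.cubeTexture;
glassMaterial.refractionTexture = probe.cubeTexture;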

I’m not sure why the red ball is showing up in the wrong place in the reflection. @Evgeni_Popov , maybe we have a bug in the ReflectionProbe implementation?

Here is a working example: zz | Babylon.js Playground (babylonjs.com)

Great. Thanks! It's the refraction that is causing trouble. The reflection seems OK.
Is there maybe also a separate “RefractionProbe” to be used instead of the ReflectionProbe?

For the refraction to look better, you probably want to use local reflection cubes:

In addition, here are two PGs that use local cube maps with reflection and refraction:

If you want to see the impact of being “local”, comment out lines 36-37 in the PGs.
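In short, the “local” part comes down to giving the cube texture a bounding volume (a minimal sketch; the sizes and the cubeTex name are just placeholders):

// Making the cube map "local": the reflected/refracted rays are intersected
// with this box instead of being treated as coming from infinity.
const cubeTex = probe.cubeTexture; // or a loaded BABYLON.CubeTexture
cubeTex.boundingBoxSize = new BABYLON.Vector3(10, 10, 10);  // size of the surrounding box
cubeTex.boundingBoxPosition = new BABYLON.Vector3(0, 5, 0); // center of that box

// Commenting these two lines out (like lines 36-37 in the PGs) falls back
// to treating the environment as infinitely far away.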

Thanks, I will have a look at these playgrounds.
So it’s a bit tricky to get even reflections to look right. This playground also shows a problem in reflection, if you do not do such local maps:

As it turns out, it was the flag
glass.invertRefractionY = true; that fixes most of this.
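For reference, in context this is roughly where the flag sits (a sketch based on a StandardMaterial; the probe and material names are placeholders):

// Glass-like material refracting the probe's cube map.
const glass = new BABYLON.StandardMaterial("glass", scene);
glass.refractionTexture = probe.cubeTexture;
glass.indexOfRefraction = 1.0 / 1.52; // relative index of refraction for glass
glass.invertRefractionY = true;       // fixes the upside-down refraction
glass.alpha = 0.5;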

However, there still is a strange “magnification” which I do not quite understand. You can see it in this playground, but also in the ones you (@Evgeni_Popov) sent.
A glass box (refractive index 1.52) should not magnify at all, neither in reflection nor in transmission. It should just shift things depending on the parallax. Yet even the infinitely far-away skybox is significantly magnified in transmission. Why? Is there a way to avoid this?

I understand that this magnification can be influenced by the choice of the bounding box. If I get this right, the PBR material does not really try to model the physics, but simply maps the environment map onto the reflected and transmitted rays.
If I wanted to emulate the effect of a lens, this seems really hard to achieve. I.e., an environment map far away needs to be rendered onto the lens surface in transmission by flipping it, something at the focal distance needs to be magnified towards infinity, and below that distance you get it upright.
Any ideas how to approximate such a behaviour?

We have the transmission helper, which is used by the glTF extension KHR_materials_transmission and can do a better job with refraction, but it's not available to users. This is because we're not yet sure of the best way to handle refraction, so we'd rather not make things public that we'll have to put up with forever if it's not the right solution…

You can try searching for “transmission helper” in the forum, maybe something interesting will come up…

Also ask @PatrickRyan if he has any tips for simulating a lens effect.

@RainerHeintzmann it's a bit hacky, but you could just use a second camera and a render target texture to map in screen space onto the object. You could control the size of the refraction in the object by mapping the texture at a different size, and invert it by inverting the V coordinate of the texture. You would simply cull the object doing the refraction from the render target texture.

Like I said, it's hacky, but if you have very controlled circumstances it may look better.
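A rough sketch of that kind of setup, assuming refractor is the mesh that should look refractive (all names and sizes here are placeholders):

// Second camera rendering the scene into a render target texture.
const rtCamera = scene.activeCamera.clone("rtCamera");
const rt = new BABYLON.RenderTargetTexture("refractionRT", 1024, scene);
rt.activeCamera = rtCamera;
scene.customRenderTargets.push(rt);

// Render everything except the refracting object itself into the RTT.
rt.renderList = scene.meshes.filter((m) => m !== refractor);

// Map the RTT onto the refractor; inverting V flips the image,
// scaling the UVs controls the apparent size of the "refraction".
rt.vScale = -1;
rt.uScale = 1.2;
const rtMat = new BABYLON.StandardMaterial("rtMat", scene);
rtMat.emissiveTexture = rt;
rtMat.disableLighting = true;
refractor.material = rtMat;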

Thanks for these ideas. I guess this would require superimposing (with Z-buffering?) several textures, each with a camera for each object (distance)? Can this be done directly, or would I need to construct a procedural texture for this task?
Anyway, I just saw that my scene had a severe drop in frame rate on the Pico 4, which may well be due to using PBR. In that case I will need to revert to non-PBR rendering anyway.

@RainerHeintzmann, without knowing what the final scene would look like, I am just guessing. But I think you might be able to use one camera to render the scene and share it between multiple meshes as a texture. If you start with screen UVs and then manipulate the UVs per mesh to create the distortion you want, then you would simply need one material and custom shader per mesh with refraction. However, if the meshes overlap and/or need to be rendered into the refraction of another mesh, then you would likely need to render multiple textures with the other translucent meshes in the RTT. In that case, the other translucent meshes likely won't have accurate refraction. Again, without knowing what your final scene looks like, this is all a guess.

If you were to want a crystal ball on a table to refract the environment around it, it would be one solution. If you had a chandelier with hundreds of crystals hanging off it that need to refract accurately, it would be a different solution. However, in both cases, there would likely be some faking or trade-offs since we are talking about real-time rendering.
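For the single crystal-ball case, a minimal sketch of that screen-UV approach could look like the following (the GLSL, the distortion factor, the rt render target from the earlier post and the crystalBall mesh are all placeholders, not an accurate lens model):

const vertexSrc = `
    precision highp float;
    attribute vec3 position;
    uniform mat4 worldViewProjection;
    varying vec4 vClipPos;
    void main(void) {
        vClipPos = worldViewProjection * vec4(position, 1.0);
        gl_Position = vClipPos;
    }`;

const fragmentSrc = `
    precision highp float;
    varying vec4 vClipPos;
    uniform sampler2D rtSampler;
    uniform float distortion;
    void main(void) {
        // Screen-space UVs from the clip-space position.
        vec2 screenUV = (vClipPos.xy / vClipPos.w) * 0.5 + 0.5;
        // Crude "refraction": flip V and pull the UVs towards the center.
        vec2 uv = vec2(screenUV.x, 1.0 - screenUV.y);
        uv = (uv - 0.5) * distortion + 0.5;
        gl_FragColor = texture2D(rtSampler, uv);
    }`;

const refractMat = new BABYLON.ShaderMaterial("refractMat", scene,
    { vertexSource: vertexSrc, fragmentSource: fragmentSrc },
    {
        attributes: ["position"],
        uniforms: ["worldViewProjection", "distortion"],
        samplers: ["rtSampler"],
    });
refractMat.setTexture("rtSampler", rt); // render target texture from the sketch above
refractMat.setFloat("distortion", 0.8);
crystalBall.material = refractMat;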

@PatrickRyan, thanks for the explanation. Let's think about one “lens” (or crystal ball) first. Yes, but I guess you would also need one extra camera per such lens mesh. What bothers me, though, are the properties of such a camera. I think a “UV distortion” as suggested is not a good approximation of what a lens really does. E.g., parts of the scene viewed through the lens that lie in front of its image plane are displayed upright, other parts, which stem from behind the (virtual) image plane, are inverted, and the focal point itself would be crazily magnified.

Such a camera would need to have its focus quite far removed from the actual mesh onto which one would need to render. I guess this could be achieved by displacing the camera dynamically from the mesh. Yet it would then also need to be sensitive to parts of the scene which are behind it, i.e. the near clipping plane would need to be at a negative distance from the lens.
Is such a behaviour supported by the current camera models? Can they also render things at negative distance? If not, one could maybe think of two cameras and a distant clipping plane, but this sounds like quite some effort, and it would not work anyway, since the z-buffer would be messed up.

If a camera with such (negative clipping plane) abilities exists, one would then need to update its position depending on the viewing angle of the main camera towards the lens mesh. For stereo imaging this would, strictly speaking, require two such cameras, but one can possibly ignore this.
Going back to the multi-lens problem, one would then also need a separate camera for each combinatorial path from the main camera via all the lenses to the final lens. But having just one camera per lens mesh should already give a pretty good approximation.

@RaananW, do you know if the view matrix of a camera can be changed? Would this allow having the focus of the camera behind the screen rather than in front of the screen? Can the screen distance be set to a negative value?

The view matrix can be changed, of course. It depends on what you need, really.

I am not sure about your question though. What do you expect to get, setting the screen distance to a negative value? What’s the use case here?

See the slightly long explanation above. It is about emulating a camera where the viewing rays from the camera “screen” (or viewport) first converge, pass through a focus, and then diverge.
Such a camera would be the ideal tool to compute the texture of a lens (or glass ball).

How is it possible to change the view matrix by hand?

You have the camera.onViewMatrixChangedObservable observable which is triggered when the view matrix has been recomputed.

Thanks, I am aware of this callback, but this is not what I was looking for. I need to either change the view matrix manually, or get access to some other parameters like the camera-to-viewport distance or something similar. But then I would be surprised if a negative distance is not rejected somewhere in the code.

In an observer that you added to this observable, you can do camera.getViewMatrix() to get the view matrix. Then, you can do manual changes to this matrix.
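For example (a minimal sketch; the extra transform is arbitrary and only shows where the modification would go):

camera.onViewMatrixChangedObservable.add((cam) => {
    const view = cam.getViewMatrix();                     // returned by reference
    const extra = BABYLON.Matrix.Translation(0, 0, 0.5);  // arbitrary example transform
    view.multiplyToRef(extra, view);                      // modify the matrix in place
});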

Oh. Cool. I somehow thought that the matrix was returned by value, and I was searching for a setViewMatrix() or other explicit ways of setting it. I will try this out in a playground.

It turns out I was really looking for camera.getProjectionMatrix() rather than the view matrix.
I played for a couple of hours with this:

The aim was to obtain a rendering by the camera where the near (front) balls would be inverted (i.e. the red one on top) with respect to the far ones. This would emulate what a lens does. However, it seems to me that there are some internal mechanisms which still prevent this rendering.
I tried to move the camera between the two objects and to set the near plane to negative values directly in the equations of the projection matrix, hoping for the aforementioned result. However, it only shows the things in front of me. Maybe the z-buffer has some extra clipping at zero? Any ideas how to get this working?
Here is a video I found explaining the projection matrix.
To understand what I am talking about, here is a picture of what you see through a lens with a near and a far object. Notice how the far scenery is inverted, but the near word “Babylon” is not.
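Roughly, the kind of manipulation I tried looks like this (a minimal sketch; the near/far values are just examples, and whether the depth buffer accepts a negative near plane is exactly the open question):

// Build a perspective projection with a negative near plane and force
// the camera to use it instead of its own projection matrix.
const customProj = BABYLON.Matrix.PerspectiveFovLH(
    camera.fov,
    engine.getAspectRatio(camera),
    -2.0,   // negative near plane: try to include things "behind" the camera
    100.0
);
camera.freezeProjectionMatrix(customProj);
// camera.unfreezeProjectionMatrix(); // restores the normal behaviour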