Refraction of other objects

… and there is something else which is odd:

Normally a focus change has severe effects on the rendering, but strangely, only a change in the clipping planes seems to have any effect, as if the camera is not really used, or rather not updated correctly. Do I need to do some sort of “invalidation” to force a full re-rendering?

You have to attach the controls to the camera, so that it moves/rotates according to the inputs. Also, you will have to call camera.update() yourself because the camera is not added to the scene.activeCameras array:

    // Hook up user input so the camera moves/rotates with it
    lens_cam.attachControl();

    // The camera is not in scene.activeCameras, so update it manually each frame
    scene.onBeforeRenderObservable.add(() => {
        lens_cam.update();
    });

Yes, the view and projection matrices are recomputed by the ReflectionProbe class. Only the minZ/maxZ properties of the currently active camera are used in the calculation.

If you want to override the calculation, you will have to remove the observer from refractionTexture._renderTargetTexture.onBeforeRenderObservable and add your own. The implementation in ReflectionProbe is, in essence:
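A condensed sketch of what that observer does (simplified from the Babylon.js source for the left-handed case; exact details vary between versions): for each cube face it picks a view direction, builds a look-at view matrix from the probe position, and combines it with a 90° projection that takes only minZ/maxZ from the active camera:

    this._renderTargetTexture.onBeforeRenderObservable.add((faceIndex) => {
        // Pick the view direction for the cube face currently being rendered
        switch (faceIndex) {
            case 0: this._add.copyFromFloats(1, 0, 0); break;   // +X
            case 1: this._add.copyFromFloats(-1, 0, 0); break;  // -X
            case 2: this._add.copyFromFloats(0, -1, 0); break;  // -Y
            case 3: this._add.copyFromFloats(0, 1, 0); break;   // +Y
            case 4: this._add.copyFromFloats(0, 0, 1); break;   // +Z
            case 5: this._add.copyFromFloats(0, 0, -1); break;  // -Z
        }

        // Follow the attached mesh, if any
        if (this._attachedMesh) {
            this.position.copyFrom(this._attachedMesh.getAbsolutePosition());
        }
        this.position.addToRef(this._add, this._target);

        // Build the per-face view matrix and a 90° FOV projection matrix;
        // only minZ/maxZ come from the currently active camera
        Matrix.LookAtLHToRef(this.position, this._target, Vector3.Up(), this._viewMatrix);
        if (scene.activeCamera) {
            this._projectionMatrix = Matrix.PerspectiveFovLH(Math.PI / 2, 1, scene.activeCamera.minZ, scene.activeCamera.maxZ);
            scene.setTransformMatrix(this._viewMatrix, this._projectionMatrix);
        }
    });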

I am stuck. The texture is updating, but for some odd reason all 6 cube faces render with the same (last) view, even though the individual directions seem to be calculated correctly.

The overridden update() function never seems to be called in my class, and only the front camera seems to render in the texture rendering mode.

Using the camera in the normal scene rendering mode, both cams seem to work OK.
Any ideas how to fix this?

The LookAtLHToRef function creates a view matrix and stores it in the reference you pass, which in your case is the matrix returned by lens_cam.camera_behind.getViewMatrix().

I think you want to create specific view matrices instead, one for each of the cube directions?

In any case, you must call scene.setTransformMatrix(viewMatrix, projectionMatrix); and pass the view and projection matrices you want to use for the current rendering.
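A minimal sketch of what such a call looks like for one cube face (probePosition is a placeholder for wherever your refraction is centered):

    // Build a view matrix looking along +X from the probe position
    const view = new BABYLON.Matrix();
    const target = probePosition.add(new BABYLON.Vector3(1, 0, 0));
    BABYLON.Matrix.LookAtLHToRef(probePosition, target, BABYLON.Vector3.Up(), view);

    // 90° FOV, square aspect ratio, clipping planes taken from the scene camera
    const projection = BABYLON.Matrix.PerspectiveFovLH(Math.PI / 2, 1, scene.activeCamera.minZ, scene.activeCamera.maxZ);

    // Make these the matrices the shaders will use for the following draw calls
    scene.setTransformMatrix(view, projection);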

Thanks. This helped a bit.

I am a bit confused as to why the scene transformation matrix needs to be changed.
The camera(s) I am using are not really the scene's main camera, or in fact any of the cameras used for rendering the scene, which is probably part of the whole problem.
Anyway.
It is still unclear to me which function is called behind the scenes to do the rendering.
It does not seem to call “update()”, or at least not the update of my lens camera.
Maybe I have to somehow trigger the update of the behind camera myself and let the scene camera, with a changed view and projection matrix, do the second part?

The problem is that the line
refractionTexture._renderTargetTexture.activeCamera = lens_cam;
has no effect whatsoever. Somehow this does not seem to be the way to tell the system to use this camera for rendering.
It always seems to (mis-)use the scene camera.

That’s because the view and projection matrices used by the shader code are the ones set by a setTransformMatrix call. In the normal scene rendering, this call is made with the active camera’s view and projection matrices. When a RenderTargetTexture is doing its own rendering, it calls setTransformMatrix with the view/projection matrices coming from the camera set in RenderTargetTexture.activeCamera, if any; otherwise it takes the currently active camera:

    // Prefer the RTT's own camera; fall back to the scene's active camera
    const camera: Nullable<Camera> = this.activeCamera ?? scene.activeCamera;
    const sceneCamera = scene.activeCamera;

    if (camera) {
        // Switch the scene matrices to the RTT camera for this render
        if (camera !== scene.activeCamera) {
            scene.setTransformMatrix(camera.getViewMatrix(), camera.getProjectionMatrix(true));
            scene.activeCamera = camera;
        }
        engine.setViewport(camera.rigParent ? camera.rigParent.viewport : camera.viewport, this.getRenderWidth(), this.getRenderHeight());
    }

It is RenderTargetTexture.render which is called to render into the texture.

As I said in a post above, because your camera is not added to the list of active cameras, you have to call the update method yourself:

    scene.onBeforeRenderObservable.add(() => {
        lens_cam.update();
    });

Thanks for the explanations. Getting closer now. So, if I get this right, the scene hosts a scene-global transformation matrix, which is always used by each shader and needs to be overwritten sequentially even when rendering to a local texture.
I tried lens_cam.update() in various arrangements, but it does not help at all, which is why it did not end up in the code. It is in there now, even though it still seems to have no effect.

I think the problem really is that my “camera” requires two-pass rendering with TWO individual (different) transformation matrices, since it really hosts two cameras. This works OK in the scene environment, but not in the texture environment. If you look at this playground, it seems I am getting close to the desired result by explicitly calling “render()”.

Another “render()” call is probably happening implicitly for the camera behind the scenes, which uses the current transformation matrix.
The trouble is that one of the render calls somehow has the screen as the target rather than the texture.
But even if this works, it really does not look like a clean solution to me. Maybe one should override the render() function for this RenderTargetTexture?
Any ideas?

One thing that may help you understand what’s going on with rendering is the use of Spector.

This way, you’ll see exactly what is rendered and with what parameters.

I installed Spector and looked a bit at the WebGL commands it captured, but this is like a whole new world, and I guess the above problems have more to do with Babylon than with WebGL.
On the screen (or texture) I can also see the renderings.
It is an interesting tool anyway.

However, it does show that one cam seems to render to the TEXTURE_CUBE_MAP faces and the other one to the CANVAS.
Maybe I just have to find out how to redirect the latter to the specific TEXTURE_CUBE_MAP face from within onBeforeRenderObservable.

I found a solution to that problem, which is overriding renderTargetTexture.customRenderFunction:
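Roughly, the idea looks like this (a simplified sketch of my approach, not the full playground code; camera_behind and camera_front are my own helper cameras inside lens_cam):

    refractionTexture._renderTargetTexture.customRenderFunction = (
        opaqueSubMeshes, alphaTestSubMeshes, transparentSubMeshes, depthOnlySubMeshes) => {

        const renderAll = (subMeshes) => {
            for (let i = 0; i < subMeshes.length; i++) {
                subMeshes.data[i].render(false);
            }
        };

        // First pass: set the scene matrices to the rear camera and render
        scene.setTransformMatrix(
            lens_cam.camera_behind.getViewMatrix(),
            lens_cam.camera_behind.getProjectionMatrix(true));
        renderAll(opaqueSubMeshes);
        renderAll(alphaTestSubMeshes);

        // Second pass: switch the matrices to the front camera for the rest
        scene.setTransformMatrix(
            lens_cam.camera_front.getViewMatrix(),
            lens_cam.camera_front.getProjectionMatrix(true));
        renderAll(transparentSubMeshes);
    };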

This circumvents the issue that only one camera can currently be used for texture renderings.
There is still an update problem with respect to the mesh position and render origin, but some further debugging should solve this. So far the update() calls still have no effect. Ideally, I would expect a re-rendering of the texture to be required only when the mesh or other meshes in the render list are moved or modified.

There is also a discontinuity problem at the cube edges, which I suspect is related to the changing focus position. I will have to look into this as well.

No, the rendering is performed each frame (depending on RenderTargetTexture.refreshRate, which is REFRESHRATE_RENDER_ONEVERYFRAME by default). If you want to render only at specific times, you should set refreshRate to REFRESHRATE_RENDER_ONCE and call the render function yourself.
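A minimal sketch of that setup (using the refractionTexture naming from earlier in the thread):

    // Stop the automatic per-frame rendering after the first frame
    refractionTexture._renderTargetTexture.refreshRate =
        BABYLON.RenderTargetTexture.REFRESHRATE_RENDER_ONCE;

    // Later, whenever something relevant in the render list has changed:
    refractionTexture._renderTargetTexture.render();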

Understood about the rendering, but would this not be exactly what one wants for a reflection or refraction texture: re-rendering only when really needed, because something changed in the scene other than the observer viewpoint? Yet I can see that this invalidation condition would be somewhat hard to determine.

Going deeper into this, it turns out that the CubeTexture really does not seem to be the right thing to use.
Is there an example of rendering to a single plane that is always oriented perpendicular to the axis towards the observing (scene) camera?
This texture would then also need to be mapped onto the mesh via a different algorithm, e.g. by parallel projection or some other projection.

I’m not sure it’s what you are after, but for the glTF transmission extension we are using a TransmissionHelper class, which renders the opaque objects into a texture and uses that texture as the refraction texture for transmissive materials (but that class is not exposed to the end user).

Look for TransmissionHelper in the forum for more context.

I think I will stick, for now, with what I have and use a cube texture, but one without the normals.
The “final” result can be seen here.

Note that the texture is not really continuous around the edges of the cube, as there are really 6 distinct foci. But on the other hand one needs to look closely to notice.

So this is as close as I could come to the effect a glass ball really has on the refractions.
One thing could still be added, though: making the focal distances dependent on the distance of the observing camera from the mesh carrying the texture.
Is there an easy way to access that distance from within the texture class?

If you want a single camera ↔ mesh distance, you can simply calculate the distance between the camera position and the center of the mesh’s bounding box. If you want the distance between the camera position and each of the mesh’s points, that can only be done in a shader.
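A minimal sketch of the single-distance case (mesh being the mesh carrying the texture):

    // Distance from the active camera to the center of the mesh's world-space bounding box
    const center = mesh.getBoundingInfo().boundingBox.centerWorld;
    const distance = BABYLON.Vector3.Distance(scene.activeCamera.position, center);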