Do custom render pass, then fetch the result back

Hey everyone!

I need to render a texture to screen space, provide a camera transform to project a second texture onto it like a decal, then save the result back to disk.

Has anyone done any of these steps before? The first thing I’ve been stuck on for a while is providing camera transform matrices to a post-process shader. Is this possible, like in a normal material? If it’s not already available, how do you get it in there?

Edit: it turns out the projection matrix part was easy.
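For anyone searching later, here is a minimal sketch of one way to feed camera matrices into a post-process. The shader name, uniform names ("projection", "view") and sampler name are purely illustrative; your own shader would declare whatever uniforms it needs:

```javascript
// Sketch: declare the uniforms when creating the post-process, then
// push the camera's matrices into the effect each time it is applied.
const postProcess = new BABYLON.PostProcess(
    "decalProject",          // display name
    "decalProject",          // fragment shader name (hypothetical)
    ["projection", "view"],  // uniforms declared in the shader
    ["decalSampler"],        // extra samplers (hypothetical)
    1.0,
    camera
);

postProcess.onApply = (effect) => {
    effect.setMatrix("projection", camera.getProjectionMatrix());
    effect.setMatrix("view", camera.getViewMatrix());
};
```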

It might just be my weak googling but I haven’t come across a way to get a render target texture data back out of the GPU. Is this possible?

BaseTexture provides a way to read back pixels: BaseTexture | Babylon.js Documentation

As RenderTargetTexture inherits from this class, I suppose it’s possible to read back texels from a render target.

Can you confirm @Evgeni_Popov ?

Indeed, simply use readPixels of the RTT to get the pixel data.
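Something like this sketch, assuming `rtt` is a RenderTargetTexture already registered with the scene (note that depending on the Babylon.js version, readPixels may return the buffer directly or a Promise resolving to it):

```javascript
// Sketch: read a render target's texels back to the CPU after a frame
// has been rendered into it.
const rtt = new BABYLON.RenderTargetTexture("rt", 512, scene);
scene.customRenderTargets.push(rtt);

scene.onAfterRenderObservable.addOnce(() => {
    // RGBA data, so the buffer length is width * height * 4.
    const pixels = rtt.readPixels();
    console.log(pixels);
});
```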


I got my prototype set up. I’m able to project a decal onto a model, then in another shader unwrap the model to screen. The decal continues to apply to the right faces.

Then I capture a screenshot using babylon. The bottom texture and UVs look correct, but the projected texture looks nothing like what I’m seeing.

My guess is that during screenshot capture a matrix I use isn’t passed to the renderer so the screenshot breaks. I guess I have to do it the render texture way.

I’ve not been able to work out how to get the texture back out of a PassPostProcess(). This is what I tried and it probably isn’t the right way to get the texture:

    this.LayerBakeShader = new BABYLON.PassPostProcess("DecalBake", 1.0, Globals.Camera);
    this.LayerBakeShader.getEffect()._bindTexture("textureSampler", this.RenderTargetTexture);

How should I be making a post-process shader write to a texture?

Could you create a repro with the issue you are seeing in the playground? It is a bit hard to troubleshoot without a repro.

Not really, I just want to know how to get the texture from a post-process result. If that doesn’t work then I’ll spend a day putting it in the playground.

Not sure if it applies in your case, but you can use camera.outputRenderTarget to render what a camera is seeing to a target texture:
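A minimal sketch of the idea (texture size and names are just for illustration):

```javascript
// Sketch: redirect a camera's output into a render target texture
// instead of the default framebuffer.
const rtt = new BABYLON.RenderTargetTexture(
    "cameraOutput",
    { width: 1024, height: 1024 },
    scene
);
camera.outputRenderTarget = rtt;

// After a frame renders, rtt holds what the camera drew and can be
// read back with rtt.readPixels().
```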

That’s even better, thank you!

If you simply need to render a post process to a texture, you can use TextureTools.ApplyPostProcess
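If I read the API correctly, ApplyPostProcess operates on the texture's underlying InternalTexture and resolves once the post-process has been applied; a rough sketch, assuming `texture` is an existing BABYLON.Texture and "pass" is the registered name of the post-process to run:

```javascript
// Sketch: run a named post-process over an existing texture and get
// the processed InternalTexture back as a promise.
BABYLON.TextureTools.ApplyPostProcess("pass", texture._texture, scene)
    .then((internalTexture) => {
        // internalTexture now contains the post-processed result
    });
```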

Looks like I have the same problem with getting the camera output directly. It’s an unusual one. I can see in the inspector that the captured texture does not match what was on screen though. I’ll do a video just to demonstrate it.

I guess I’m going to have to rebuild it in the playground.

Are you using 5.0? camera.outputRenderTarget does not work correctly in 4.2


Ah we are probably on the earlier version. Is that a big change if we are on 4.2?

The only thing I haven’t tried yet is getting the texture from the post-process output. How do you do that?

There have been quite a few changes/rewrites in Scene.render and Scene.processSubCamera I think, so I don’t think it would be straightforward to port it to 4.2.

The final postprocess output is the default framebuffer, so you can’t get it except by taking a screenshot.

The CreateScreenshot method is simply reading the rendering canvas so you should get exactly what you see on screen.
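For reference, a sketch of the two screenshot paths (the size object is illustrative): CreateScreenshot reads the canvas, while CreateScreenshotUsingRenderTarget re-renders the scene into a render target, which can behave differently when post-processes are involved:

```javascript
// Sketch: capture what is currently on the canvas as a base64 PNG.
BABYLON.Tools.CreateScreenshot(
    engine,
    camera,
    { width: 1024, height: 1024 },
    (dataUrl) => {
        // dataUrl is a data: URL of the canvas contents
    }
);

// Alternative: re-render into an RTT instead of reading the canvas.
BABYLON.Tools.CreateScreenshotUsingRenderTarget(
    engine,
    camera,
    { width: 1024, height: 1024 },
    (dataUrl) => { /* ... */ }
);
```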

My problem must lie somewhere else then. I’ll revert to the screenshot method and keep checking. The evidence isn’t on my side though.

I actually meant: is it easy to migrate the project entirely from 4.2 to 5.0? Is there a migration guide anywhere?

It’s easy to migrate from one version to another because we strive to be backward compatible. However, in 5.0 we have more breaking changes than usual because of WebGPU support. Have a look at the “Breaking changes” section of the what’s new file: