Onion Skin Post Process?

Maybe I can add a new subclass of EffectRenderer to handle rendering a single effect with buffer ping-ponging, since that could be a common use case?

Something like:

import { EffectRenderer, EffectWrapper } from "@babylonjs/core/Materials/effectRenderer";
import { Texture } from "@babylonjs/core/Materials/Textures/texture";
import { Nullable } from "@babylonjs/core/types";

export class EffectRendererPingPong extends EffectRenderer {

    private _outputTexture: Nullable<Texture> = null;

    public render(effectWrapper: EffectWrapper): Nullable<Texture> {
        // Write into the next internal texture and remember it as the output
        this._outputTexture = this._getNextFrameBuffer(true);
        super.render(effectWrapper, this._outputTexture);
        return this._outputTexture;
    }

    public applyEffectWrapper(effectWrapper: EffectWrapper): void {
        this.engine.enableEffect(effectWrapper.effect);
        this.bindBuffers(effectWrapper.effect);
        effectWrapper.onApplyObservable.notifyObservers({});
        // Bind the texture written last frame as this frame's source
        // (_getNextFrameBuffer(false) peeks at the other internal texture;
        // it would need to be protected rather than private in EffectRenderer)
        effectWrapper.effect.setTexture("textureSampler", this._getNextFrameBuffer(false));
    }

    public get outputTexture(): Nullable<Texture> {
        return this._outputTexture;
    }
}

With this class, the core of the PG simplifies to:

        // Bind the external input (the video) each time the effect is applied
        eWrapper.onApplyObservable.add(() => {
            eWrapper.effect.setTexture("videoTexture", videoTexture);
        });

        scene.onBeforeRenderObservable.add(() => {
            // render() returns the internal texture it has just written to
            videoMat.diffuseTexture = eRenderer.render(eWrapper);
        });

Maybe even better: just update the EffectRenderer class itself to use only its internal textures for rendering, and add the ability to retrieve the latest texture written to, i.e. the output texture.

The benefit over EffectRendererPingPong would be that we could still pass an array of effects to render().
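A rough sketch of the suggested usage (assuming render() returns the last internal texture written to, which is the proposed change, not the current API; wrapperA/wrapperB are placeholder EffectWrappers):

// Hypothetical sketch of the suggested change, not the current API:
// render() always ping-pongs the renderer's own internal textures
// and returns the last one written to.
scene.onBeforeRenderObservable.add(() => {
    const output = eRenderer.render([wrapperA, wrapperB]); // arrays still supported
    videoMat.diffuseTexture = output;
});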

I have done it locally, it does work well.

Interesting. I can definitely see how a Post Effect class with easy access to the previously-drawn frame could have a lot of uses. I notice that @Evgeni_Popov 's Playground only runs in the current BJS beta, in WebGL 2 browsers, which unfortunately may rule the method out for this particular project of mine. (iPhone is a primary target.)

With a bit more searching, I also found this thread by @Pryme8 on the old forum, doing Game of Life completely in the GPU. I’m still absorbing what he did, but I’ll share if I get something working: Transform feedback buffer - Questions & Answers - HTML5 Game Devs Forum


The PG should work in WebGL 1, there’s nothing specific to WebGL 2.

Going to make a PR to have a basis for discussion.

Here:

I've been working on finding better methods for this for days. I have really been struggling trying to get it to work with a simpler setup.

Ideally you would just have a procedural texture or something and be able to bind its internal texture onto itself; @Evgeni_Popov is right on point with his PR.

That’s what this whole post was about:

In fact you can easily do it without the PR like this:

https://playground.babylonjs.com/#HTGWIN#3


Thank god, dude, I have been beating my head against this problem for months, off and on, in my free time.

This just opened up so many doors for me. I need to buy you a beer; you keep saving me.

Now I need to see if this can be applied to do complex multistep simulations.

I will take a fruit juice instead, I don’t like alcohol :wink:


“EffectWrapper” I had no clue we had something like this…

@Evgeni_Popov

How would you chain this?

So like have step1 pass to step2 pass to step3 and then maybe pass step3 back to step1?

You can simply add other EffectWrappers to the list passed to render():

https://playground.babylonjs.com/#HTGWIN#4

Internally, the previous buffer is passed to each EffectWrapper through the textureSampler sampler.
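For example, a minimal sketch of the chained call (the step wrappers and videoMat are placeholder names, and the returned output texture assumes the PR's behavior; the actual setup is in the Playground above):

// Sketch: chain three passes; each pass reads the previous one via "textureSampler"
scene.onBeforeRenderObservable.add(() => {
    const output = eRenderer.render([step1Wrapper, step2Wrapper, step3Wrapper]);
    videoMat.diffuseTexture = output; // the output of the last pass
});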

Note however that because we pass more than one EffectWrapper, the EffectRenderer will create two internal textures for the ping pong.

As you must already create two textures on your side anyway, it could be better, for resources' sake, to use only those textures. That is possible by calling render() two times in a row, with a single wrapper each time:

https://playground.babylonjs.com/#HTGWIN#5
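The pattern could look roughly like this (rttA/rttB and the step wrappers are illustrative names; the exact setup is in the Playground):

// Sketch: reuse your own two textures (rttA / rttB) instead of extra internal ones
step1Wrapper.onApplyObservable.add(() => {
    step1Wrapper.effect.setTexture("textureSampler", rttA); // pass 1 reads rttA
});
step2Wrapper.onApplyObservable.add(() => {
    step2Wrapper.effect.setTexture("textureSampler", rttB); // pass 2 reads rttB
});

scene.onBeforeRenderObservable.add(() => {
    eRenderer.render(step1Wrapper, rttB); // pass 1 writes rttB
    eRenderer.render(step2Wrapper, rttA); // pass 2 writes back into rttA
});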

This is huge, dude. Now I can do convection simulations, water flow, etc…

I have been trying to get someone to answer this question for a long, long time now, and was starting to kinda get sad about not being able to figure it out easily. Plus, any time I asked questions about it, nobody really knew how to help. THANK YOU!

Np, I may have read your questions but without really understanding what you wanted to do, or I didn't know about EffectRenderer at the time (I've only known about it for a short while).


I’m afraid I’m not following what the EffectWrapper class is doing here. I tried adapting the example to the topic goal (a fullscreen effect, rather than a video texture) but I’m getting lost: https://playground.babylonjs.com/#ZLB7Z2#1

You don’t need two effect wrappers as you have a single effect to use.

Also, you use samplers that you never set (textureSampler), so it can’t work.

Try this:
https://playground.babylonjs.com/#ZLB7Z2#3

How it works:

  • first frame: rttA is the texture read from (in the shader, sampler lastFrame), rttB is the texture written to
  • second frame: rttB is the texture read from (in the shader, sampler lastFrame), rttA is the texture written to
  • and so on

So, at each frame, the previous output (written) texture is used as the source texture, and the previous source texture becomes the output texture.

For this to work, however, you have to put something in the very first texture read from (rttA)…

In the PG, I make a rendering of the current scene, so it's a red cube at the center of the screen.

Then, the shader simply samples the source texture and applies a coordinate scaling of 0.998 so that you can actually see some changes: the effect is a kind of blurring toward the upper right corner of the screen.

Note also that it is working because the source / output textures are never cleared! So, the frames get accumulated one over the other, with a tiny shift each time (because of the 0.998 factor).
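In code, the per-frame swap could be sketched like this (rttA/rttB are the two RenderTargetTextures from the Playground; the other names are simplified):

// Sketch of the ping-pong driving the effect above
let readTexture = rttA;   // seeded with an initial rendering of the scene
let writeTexture = rttB;

effectWrapper.onApplyObservable.add(() => {
    // The shader samples the previous frame through "lastFrame"
    effectWrapper.effect.setTexture("lastFrame", readTexture);
});

scene.onBeforeRenderObservable.add(() => {
    renderer.render(effectWrapper, writeTexture);              // write the new frame
    [readTexture, writeTexture] = [writeTexture, readTexture]; // swap for next frame
});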

Thanks, Evgeni, but I must not be communicating the topic goal very effectively. If we remove the scaling factor from your Playground example, we simply get a frozen cube, rather than a blur of the spinning cube. What I’m trying to accomplish is something like this. (YouTube link)

None of my experiments are working, but in BJS, as best I can tell, we can't simply blend the current pixels we're drawing with what we drew for the last frame. It would have to be done like so (a shader sketch of the blend follows the list):

  1. Render the scene to Texture A.
  2. Multiply Texture B by 1 - OpacityFactor.
  3. Multiply Texture A by OpacityFactor.
  4. Add Texture A to Texture B.
  5. Display Texture B onscreen in place of the rendered scene, either with a custom PostEffect or some built-in method I'm not familiar with.
  6. Repeat for the next frame…
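Steps 2-4 boil down to a single mix() in a fragment shader. A hedged sketch (the shader and sampler names are made up for illustration):

// Hypothetical fragment shader for steps 2-4: an exponential moving average of frames
BABYLON.Effect.ShadersStore["frameBlendFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D currentFrame;  // Texture A: the freshly rendered scene
    uniform sampler2D history;       // Texture B: the accumulated previous frames
    uniform float opacityFactor;
    void main(void) {
        vec4 a = texture2D(currentFrame, vUV);
        vec4 b = texture2D(history, vUV);
        // b * (1.0 - opacityFactor) + a * opacityFactor
        gl_FragColor = mix(b, a, opacityFactor);
    }
`;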

Ok, I understand better what you want to do.

Try this:

https://playground.babylonjs.com/#ZLB7Z2#4

Instead of passing the video texture as in the first PG, I pass an RTT into which the scene has been drawn.

It can be a bit blocky, because I used 512x512 textures.
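The scene RTT setup might look something like this (a sketch; the sampler name is illustrative and the exact code is in the Playground):

// Sketch: draw the scene into a RenderTargetTexture each frame, then feed it to the effect
const sceneRtt = new BABYLON.RenderTargetTexture("sceneRtt", 512, scene);
sceneRtt.renderList = scene.meshes;        // which meshes to draw into the RTT
scene.customRenderTargets.push(sceneRtt);  // rendered automatically each frame

effectWrapper.onApplyObservable.add(() => {
    effectWrapper.effect.setTexture("sceneSampler", sceneRtt);
});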

NOW I get it! That is amazing!

Here’s a documented version of the Onion Skin Effect playground, for future explorers: https://playground.babylonjs.com/#ZLB7Z2#6
