Onion Skin Post Process?

Hey crew,

I feel like I’m overcomplicating this, so before I get down a rabbit hole: What would be the most straightforward way to combine multiple frames in Babylon.js?

I’m experimenting with a trailing/onion skin effect (I seem to recall parts of F.E.A.R. doing something similar back in the day) where the current frame is partially drawn onto the result of previous frames, e.g. (accumulatedPixelValue * 0.9) + (currentPixelValue * 0.1). The existing motion blur post effect from GPU Gems is cool, but not really what I’m looking for.
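Just to illustrate the math (this is a plain-TypeScript sketch, not Babylon API; `blendFrame` is a hypothetical helper name): the blend above is an exponential moving average per pixel, so static areas converge to the current image while moving areas leave a fading trail.

```typescript
// Hypothetical helper: the per-pixel blend the onion-skin pass would do
// each frame, expressed as plain math for clarity.
// accumulated = accumulated * decay + current * (1 - decay)
function blendFrame(accumulated: number, current: number, decay: number = 0.9): number {
    return accumulated * decay + current * (1 - decay);
}

// Feeding the same value repeatedly makes the accumulator converge to it,
// so only changing pixels show a visible trail.
let acc = 0;
for (let i = 0; i < 100; i++) {
    acc = blendFrame(acc, 1.0);
}
console.log(acc.toFixed(3)); // converges toward 1.0
```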

Any thoughts?

Hi, I’m just starting to learn shaders, so I can’t give a detailed answer. But what you described sounds a lot like the OpenGL “feedback framebuffer” technique, which stores information about the previous frame. But I do not know how it is implemented in Babylon :confused:
Maybe it will help: Basic Feedback and Buffer Implementation with GLSL Shaders - Questions - three.js forum

I used the EffectRenderer to implement the effect you can see here: Shader - Shadertoy BETA


(click on the plane to pause/play the video)

I had to access a private member to achieve what I wanted to do, but I think the method could be made public => @Deltakosh or @sebavan ? It is EffectRenderer._getNextFrameBuffer

Also, I have overloaded this function to be able to pass the width/height to use when creating the internal textures (the current implementation is using the screen width/height): I think that’s something that can be back-ported to the class?


I am worried about the index change not being understandable in the public API. I like the idea, but I wonder if it should be on this class or in a separate helper?

Any preferences ?

Maybe I can add a new subclass to EffectRenderer to handle the rendering of a single effect with a buffer ping/pong, as it can be a common use-case?

Something like:

export class EffectRendererPingPong extends EffectRenderer {

    private _outputTexture: Nullable<Texture>;

    public render(effectWrapper: EffectWrapper): Nullable<Texture> {
        this._outputTexture = this._getNextFrameBuffer(true);
        super.render(effectWrapper, this._outputTexture);
        return this._outputTexture;
    }

    public applyEffectWrapper(effectWrapper: EffectWrapper): void {
        effectWrapper.effect.setTexture("textureSampler", this._getNextFrameBuffer(false));
    }

    public get outputTexture(): Nullable<Texture> {
        return this._outputTexture;
    }
}
With this class, the core of the PG simplifies to:

        eWrapper.onApplyObservable.add(() => {
            eWrapper.effect.setTexture("videoTexture", videoTexture);
        });

        scene.onBeforeRenderObservable.add(() => {
            videoMat.diffuseTexture = eRenderer.render(eWrapper);
        });

Maybe even better: just update the EffectRenderer class to only use the internal textures for the rendering and have the ability to retrieve the latest texture used, meaning the output texture.

The benefit over EffectRendererPingPong would be that we could keep an array of effects for the call to render.

I have done it locally, it does work well.

Interesting. I can definitely see how a Post Effect class with easy access to the previously-drawn frame could have a lot of uses. I notice that @Evgeni_Popov 's Playground only runs in the current BJS beta, in WebGL 2 browsers, which unfortunately may rule the method out for this particular project of mine. (iPhone is a primary target.)

With a bit more searching, I also found this thread by @Pryme8 on the old forum, doing Game of Life completely in the GPU. I’m still absorbing what he did, but I’ll share if I get something working: Transform feedback buffer - Questions & Answers - HTML5 Game Devs Forum


The PG should work in WebGL 1, there’s nothing specific to WebGL 2.

Going to make a PR to have a basis for discussion.


I’ve been working on finding better methods for this for days. It has really been a struggle trying to get it to work with a simpler setup.

Ideally you would just have a procedural texture or something and be able to bind its internal texture onto itself; @Evgeni_Popov is right on point with his PR.

That’s what this whole post was about:

In fact you can easily do it without the PR like this:



Thank god, dude I have been beating my head against this problem for months off and on in my free time.

This just opened up so many doors for me. I need to buy you a beer you keep saving me.

Now I need to see if this can be applied to do complex multistep simulations.

I will take a fruit juice instead, I don’t like alcohol :wink:


“EffectWrapper” I had no clue we had something like this…


How would you chain this?

So like have step1 pass to step2 pass to step3 and then maybe pass step3 back to step1?

You can simply add other EffectWrappers to the list passed to render():


Internally, EffectWrapper passes the previous buffer through the sampler textureSampler.

Note however that because we pass more than one EffectWrapper, the EffectRenderer will create two internal textures for the ping pong.

As you must already create two textures on your side, it could be better for resources’ sake to only use those textures, which is possible by calling render() two times in a row, with a single wrapper each time:
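To make the chaining concrete, here is a plain-TypeScript sketch (not Babylon API; the names `renderChain`, `step1`, etc. are illustrative): each step consumes the previous step's output, and the final result would become the next frame's input, closing the step3-back-to-step1 loop.

```typescript
// An "effect" here is just a function over pixel data.
type Effect = (input: number[]) => number[];

// Equivalent to calling render() once per wrapper, in order: each effect
// reads the buffer produced by the previous one.
function renderChain(input: number[], steps: Effect[]): number[] {
    return steps.reduce((buf, effect) => effect(buf), input);
}

const step1: Effect = (px) => px.map((v) => v + 1);
const step2: Effect = (px) => px.map((v) => v * 2);
const step3: Effect = (px) => px.map((v) => v - 1);

// One frame of the pipeline; feeding `frame` back in next frame gives the
// step3 -> step1 feedback loop.
let frame = [0, 1, 2];
frame = renderChain(frame, [step1, step2, step3]);
console.log(frame); // [1, 3, 5]
```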


This is huge dude now I can do convection simulations, water flow, etc…

I have been trying to get someone to answer this question for a long, long time now… and was starting to kinda get sad about not being able to figure it out easily. Plus anytime I asked any questions on it nobody really knew how to help. THANK YOU!

Np, I may have read your questions but without really understanding what you wanted to do, or I didn’t know about EffectRenderer at that time (I have only known it existed for a short time).


I’m afraid I’m not following what the EffectWrapper class is doing here. I tried adapting the example to the topic goal (a fullscreen effect, rather than a video texture) but I’m getting lost: https://playground.babylonjs.com/#ZLB7Z2#1