I feel like I’m overcomplicating this, so before I go down a rabbit hole: what would be the most straightforward way to combine multiple frames in Babylon.js?
I’m experimenting with a trailing/onion-skin effect (I seem to recall parts of F.E.A.R. doing something similar back in the day) where the current frame is partially drawn onto the result of previous frames, e.g. (accumulatedPixelValue * 0.9) + (currentPixelValue * 0.1). The existing motion-blur post effect from GPU Gems is cool, but not really what I’m looking for.
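The blend described above is just a per-pixel exponential moving average. A minimal sketch of that math in plain JavaScript (the function name and array layout are illustrative, not part of any Babylon.js API; in practice this runs per pixel in a fragment shader):

```javascript
// Exponential moving-average blend, as described above:
//   newAccum = accum * decay + current * (1 - decay)
// Here applied per channel of a single RGBA pixel.
const DECAY = 0.9; // weight given to the accumulated history

function blendFrame(accum, current, decay = DECAY) {
  // accum and current are flat arrays of channel values (e.g. [r, g, b, a])
  return accum.map((a, i) => a * decay + current[i] * (1 - decay));
}

// Example: a pixel that suddenly turns white brightens only gradually,
// which is exactly the trailing/ghosting look described above.
let accum = [0, 0, 0, 1];         // start black, opaque
const white = [1, 1, 1, 1];
accum = blendFrame(accum, white); // ≈ [0.1, 0.1, 0.1, 1]
accum = blendFrame(accum, white); // ≈ [0.19, 0.19, 0.19, 1]
```

Raising the decay factor toward 1 makes the trail longer; lowering it makes the effect converge faster to the current frame.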
Hi, I’m just starting to learn shaders, so I can’t give a detailed answer. But what you described sounds very much like the OpenGL “feedback framebuffer” technique, which stores information about the previous frame. I don’t know how it is implemented in Babylon, though.
Maybe it will help: Basic Feedback and Buffer Implementation with GLSL Shaders - Questions - three.js forum
I had to access a private member, EffectRenderer._getNextFrameBuffer, to achieve what I wanted, but I think that method could be made public => @Deltakosh or @sebavan ?
Also, I have overloaded this function to be able to pass the width/height to use when creating the internal textures (the current implementation uses the screen width/height). I think that’s something that could be back-ported to the class?
I am worried that the index change would not be understandable in a public API. I like the idea, but I wonder whether it should be this method or a separate helper?
Maybe even better: just update the EffectRenderer class to use only the internal textures for the rendering, and add the ability to retrieve the latest texture used, i.e. the output texture.
The benefit over EffectRendererPingPong would be that we could keep an array of effects for the call to render.
Interesting. I can definitely see how a Post Effect class with easy access to the previously-drawn frame could have a lot of uses. I notice that @Evgeni_Popov 's Playground only runs in the current BJS beta, in WebGL 2 browsers, which unfortunately may rule the method out for this particular project of mine. (iPhone is a primary target.)
I’ve been working on finding better methods for this for days, and I have really been struggling to get it to work with a simpler setup.
Ideally you would just have a procedural texture or something similar and be able to bind its internal texture onto itself; @Evgeni_Popov is right on point with his PR.
Note however that because we pass more than one EffectWrapper, the EffectRenderer will create two internal textures for the ping pong.
As you must already create two textures on your side anyway, it could be better resource-wise to use only those textures, which is possible by calling render twice in a row, with a single wrapper each time:
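Calling render twice per frame with your own two textures means you manage the ping-pong yourself: read from one texture, write to the other, then swap roles for the next frame. A minimal plain-JavaScript sketch of that bookkeeping (the texture objects and the `render` callback are stand-ins, not the actual Babylon.js EffectRenderer API):

```javascript
// Two user-owned render targets: one holds the previous frame's result
// (read), the other receives this frame's output (write).
const textures = [{ name: "rtA" }, { name: "rtB" }];
let readIndex = 0;

function renderFrame(render /* (input, output) => void */) {
  const read = textures[readIndex];
  const write = textures[1 - readIndex];
  // Pass 1: blend the previous frame (read) with the current scene into write.
  render(read, write);
  // Pass 2 would then blit `write` to the screen.
  // Swap so the next frame reads what we just wrote.
  readIndex = 1 - readIndex;
  return write;
}

// Over consecutive frames the write target alternates: rtB, rtA, rtB, ...
const writes = [];
for (let i = 0; i < 3; i++) {
  writes.push(renderFrame((input, output) => {}).name);
}
```

Because only two textures ever exist and each frame reads the one written last frame, this uses no extra internal textures beyond the pair you already allocate.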
This is huge, dude! Now I can do convection simulations, water flow, etc.
I have been trying to get someone to answer this question for a long, long time now, and was starting to get a bit sad about not being able to figure it out easily. Plus, any time I asked questions about it, nobody really knew how to help. THANK YOU!
Np, I may have read your questions but without really understanding what you wanted to do, or I didn’t know about EffectRenderer at the time (I have only known about it for a short while).
I’m afraid I’m not following what the EffectWrapper class is doing here. I tried adapting the example to the topic goal (a fullscreen effect, rather than a video texture) but I’m getting lost: https://playground.babylonjs.com/#ZLB7Z2#1