Maybe even better: just update the EffectRenderer class to use only its internal textures for rendering and add the ability to retrieve the latest texture used, i.e. the output texture.
The benefit over EffectRendererPingPong would be that we could still pass an array of effects to the render call.
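Something like this, perhaps, from the user's side (purely hypothetical: `getOutputTexture` does not exist today, this is just a sketch of what the usage could look like):

```js
// Hypothetical usage of the proposed API (getOutputTexture is made up for illustration)
const renderer = new BABYLON.EffectRenderer(engine);

scene.onAfterRenderObservable.add(() => {
    // The renderer would ping-pong between its own internal textures only...
    renderer.render([effectA, effectB]);
    // ...and expose whichever internal texture was written to last.
    const output = renderer.getOutputTexture(); // hypothetical accessor
    // use `output` as the input of a later pass, or draw it to the screen
});
```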
Interesting. I can definitely see how a Post Effect class with easy access to the previously-drawn frame could have a lot of uses. I notice that @Evgeni_Popov's Playground only runs in the current BJS beta, in WebGL 2 browsers, which unfortunately may rule the method out for this particular project of mine. (iPhone is a primary target.)
I’ve been working on finding better methods for this for days. I have really been struggling to get it to work with a simpler setup.
Ideally you would just have a procedural texture or something and be able to bind its internal texture back onto itself; @Evgeni_Popov is right on point with his PR.
Note, however, that because we pass more than one EffectWrapper, the EffectRenderer will create two internal textures for the ping-pong.
As you must already create two textures on your side, it could be better, for resources' sake, to use only those textures, which is possible by calling render two times in a row, with a single wrapper each time:
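A rough sketch of what that could look like, assuming rttA / rttB are the two RenderTargetTextures you already create yourself and effect1 / effect2 are the two EffectWrappers (the names are mine, and the exact render signature may vary a bit between Babylon versions):

```js
// Two passes per frame, each with a single wrapper, reusing our own two textures
// instead of letting the EffectRenderer allocate two internal ping-pong textures.
const renderer = new BABYLON.EffectRenderer(engine);

effect1.onApplyObservable.add(() => {
    effect1.effect.setTexture("textureSampler", rttA); // pass 1 reads from A
});
effect2.onApplyObservable.add(() => {
    effect2.effect.setTexture("textureSampler", rttB); // pass 2 reads from B
});

scene.onAfterRenderObservable.add(() => {
    renderer.render(effect1, rttB); // pass 1: A -> B
    renderer.render(effect2, rttA); // pass 2: B -> A
});
```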
This is huge, dude, now I can do convection simulations, water flow, etc…
I have been trying to get someone to answer this question for a long, long time now… and was starting to kinda get sad about not being able to figure it out easily. Plus, any time I asked questions about it, nobody really knew how to help. THANK YOU!
Np, I may have read your questions without really understanding what you wanted to do, or I didn’t know about EffectRenderer at that time (I have only known it existed for a short while).
I’m afraid I’m not following what the EffectWrapper class is doing here. I tried adapting the example to the topic goal (a fullscreen effect, rather than a video texture) but I’m getting lost: https://playground.babylonjs.com/#ZLB7Z2#1
first frame: rttA is the texture read from (in the shader, sampler lastFrame), rttB is the texture written to
second frame: rttB is the texture read from (in the shader, sampler lastFrame), rttA is the texture written to
and so on
So, at each frame, the previous output (written) texture is used as the source texture, and the previous source texture becomes the output texture.
For this to work, however, you have to put something in the very first texture read from (rttA)…
In the PG, I first render the current scene into it, so there’s a red cube at the center of the screen.
Then the shader simply samples the source texture and applies a coordinate scaling of 0.998 so that you can actually see some changes: the effect is a kind of blurring toward the upper right corner of the screen.
Note also that it works because the source / output textures are never cleared! So the frames get accumulated one on top of the other, with a tiny shift each time (because of the 0.998 factor).
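In code, the swap at the heart of that PG looks roughly like this (variable names are mine, and the real Playground may be structured differently):

```js
// Ping-pong: each frame, read from one texture (bound as "lastFrame") and write to
// the other, then swap. Nothing is cleared, so the frames accumulate with the 0.998 shift.
let source = rttA; // must already contain the initial scene render (the red cube)
let dest = rttB;

effectWrapper.onApplyObservable.add(() => {
    effectWrapper.effect.setTexture("lastFrame", source);
});

scene.onAfterRenderObservable.add(() => {
    effectRenderer.render(effectWrapper, dest); // write the shifted copy into dest
    [source, dest] = [dest, source];            // swap roles for the next frame
});
```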
Thanks, Evgeni, but I must not be communicating the topic goal very effectively. If we remove the scaling factor from your Playground example, we simply get a frozen cube, rather than a blur of the spinning cube. What I’m trying to accomplish is something like this. (YouTube link)
None of my experiments are working, but in BJS (as best I can tell) we can’t simply blend the current pixels we’re drawing with what we drew for the last frame. It would have to be done something like this (a rough shader sketch follows these steps):
Render scene to Texture A.
Multiply Texture B by 1 - OpacityFactor.
Multiply Texture A by OpacityFactor.
Add Texture A to Texture B.
Display Texture B onscreen in place of the rendered scene, either with a custom PostEffect or some built-in method I’m not familiar with.
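Steps 2-4 collapse into a single mix() in a fragment shader, so it may be less work than it sounds. Here is a minimal sketch of how that blend could be written with an EffectWrapper, assuming sceneRtt (Texture A) and accumTexture (Texture B) are textures created elsewhere; all the names below are mine:

```js
// Blend pass: result = Texture A * opacityFactor + Texture B * (1 - opacityFactor)
const blendWrapper = new BABYLON.EffectWrapper({
    engine: engine,
    name: "frameBlend",
    fragmentShader: `
        precision highp float;
        varying vec2 vUV;
        uniform sampler2D textureSampler; // Texture A: the freshly rendered scene
        uniform sampler2D accumSampler;   // Texture B: the running accumulation
        uniform float opacityFactor;
        void main(void) {
            vec4 current = texture2D(textureSampler, vUV);
            vec4 accum = texture2D(accumSampler, vUV);
            gl_FragColor = mix(accum, current, opacityFactor); // steps 2-4 in one line
        }
    `,
    samplerNames: ["textureSampler", "accumSampler"],
    uniformNames: ["opacityFactor"],
});

blendWrapper.onApplyObservable.add(() => {
    blendWrapper.effect.setTexture("textureSampler", sceneRtt);   // Texture A
    blendWrapper.effect.setTexture("accumSampler", accumTexture); // Texture B
    blendWrapper.effect.setFloat("opacityFactor", 0.1);           // lower = longer trails
});
```

The result then has to end up back in the accumulation texture for the next frame (and step 5 could be a final copy pass to the screen), which brings you back to the ping-pong setup described above.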