Hi guys, I need to render my scene by sampling the current scene into a 2D texture, doing some post-processing on it, and then rendering the actual scene based on that texture. Can I use shaders and a frame buffer object (FBO) to do this in Babylon.js? I haven't seen any tutorials on the official website or in the documentation.
You don't need to access FBOs directly (they are specific to WebGL and don't exist in WebGPU); you can probably do what you want with regular post processes and/or the camera.outputRenderTarget property.
If you are able to set up a repro (even if not fully working), we can probably help you achieve what you want.
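For example, here is a minimal sketch of both ideas (the names camera, scene and engine stand for your own existing objects, and the black-and-white effect is only there to illustrate the mechanism):

// A regular post process runs a fragment shader over the camera's output.
// A custom PostProcess with your own fragment shader works the same way.
const bwPostProcess = new BABYLON.BlackAndWhitePostProcess("bw", 1.0, camera);

// camera.outputRenderTarget redirects the camera's rendering into a texture
// that can then be sampled or post-processed like any other texture.
const target = new BABYLON.RenderTargetTexture(
    "cameraTarget",
    { width: engine.getRenderWidth(), height: engine.getRenderHeight() },
    scene
);
camera.outputRenderTarget = target;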
Thanks for the hints!
I just need to render the edges of a given shape (any mesh with given vertices and indices), just like the enableEdgesRendering function in Babylon.js! The only difference from enableEdgesRendering is that the rendered edges should not be occluded by any mesh, not even the target mesh itself (I saw the Playground example on the enableEdgesRendering page, where the edges are occluded by every mesh with alpha = 1).
I am happy to use any method that achieves this goal. Using an FBO was just the first idea I came up with, and I could not think of any others.
You can do it by updating the shader used by the edge renderer to draw the lines: setting the depth function to BABYLON.Constants.ALWAYS disables depth testing, so the lines are always drawn.
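For instance, something along these lines (just a sketch: _lineShader is an internal property of the edge renderer, so the exact way to reach the line material may differ between Babylon.js versions):

// Enable edge rendering as usual
mesh.enableEdgesRendering();
mesh.edgesWidth = 4.0;
mesh.edgesColor = new BABYLON.Color4(0, 1, 0, 1);

// _lineShader is the internal ShaderMaterial the edge renderer uses to draw the lines
// (assumption: it is reachable this way in your Babylon.js version).
const lineMaterial = mesh.edgesRenderer._lineShader;

// ALWAYS makes every fragment pass the depth test, so the edges show through other meshes
lineMaterial.depthFunction = BABYLON.Constants.ALWAYS;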
Thanks for your advice! But I still have one more question, about RenderTargetTexture.
I am currently attempting to use RenderTargetTexture for an RTT effect. As you can see in this picture, I set up two scenes and use outputRenderTarget and RenderTargetTexture to capture the rttScene pixels as a texture. I have the following code:
// RTT: render the temporary scene into a texture the size of the canvas
const rtt = new BABYLON.RenderTargetTexture("rtt2", {
    width: MyScene.engine.getRenderWidth(),
    height: MyScene.engine.getRenderHeight()
}, MyScene.tmpScene);
rtt.activeCamera = MyScene.tmpCamera;
if (MyScene.tmpCamera && MyScene.fullScreenMaterial) {
    // redirect the temporary camera into the RTT and show the result on a full-screen material
    MyScene.tmpCamera.outputRenderTarget = rtt;
    MyScene.fullScreenMaterial.emissiveTexture = rtt;
}
My question is: can I somehow EDIT this rtt texture? I would like to do an operation equivalent to what a fragment shader can do with an input sampler2D image in WebGL.
// Effect I wish to do (pseudo-code)
rtt = doSomeEffect(rtt);
function doSomeEffect(rttSampler) {
    // How can I get the same effect as the following GLSL?
    uniform sampler2D rttSampler;
    varying vec2 vTexCoord;
    void main() {
        vec4 result = texture2D(rttSampler, vTexCoord);
        // detect the green rectangle and edit these pixels
        if (result.g == 1.0) {
            result.r = 0.0;
            result.b = 0.0;
            result.w = 0.5;
        } else {
            // other, normal pixels
            result.w = 0.0;
        }
        gl_FragColor = result;
    }
    return rttSampler;
}
Furthermore, if the processing step above works, I would like to combine the pixels of two rtt textures into a single rtt. Is there a way to do that?
If I understand correctly, I think you will have to apply a post-process to your rtt:
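For example, something like this (a rough sketch: the shader name "greenMask", the otherSampler uniform and someOtherRtt are my own placeholder names, not existing API):

// Register a custom fragment shader in the shaders store.
// textureSampler and vUV are the standard inputs Babylon.js provides to post processes.
BABYLON.Effect.ShadersStore["greenMaskFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D textureSampler; // the rtt content
    uniform sampler2D otherSampler;   // a second rtt to combine with
    void main(void) {
        vec4 result = texture2D(textureSampler, vUV);
        if (result.g == 1.0) {
            result = vec4(0.0, result.g, 0.0, 0.5); // green pixels: keep green, half transparent
        } else {
            result.a = 0.0; // other pixels: fully transparent
        }
        // combine with the second texture (simple additive blend, just as an example)
        gl_FragColor = result + texture2D(otherSampler, vUV);
    }
`;

// Create the post process and attach it to the rtt so it runs right after the rtt is rendered.
const pp = new BABYLON.PostProcess(
    "greenMask",        // name
    "greenMask",        // fragment shader name in ShadersStore
    null,               // uniforms
    ["otherSampler"],   // extra samplers
    1.0,                // ratio
    null,               // camera (null because the post process is attached to the rtt)
    BABYLON.Texture.BILINEAR_SAMPLINGMODE,
    MyScene.engine
);
pp.onApply = (effect) => {
    effect.setTexture("otherSampler", someOtherRtt); // the second rtt you want to combine
};
rtt.addPostProcess(pp);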
Thanks again! This module seems like a lot to learn for now, but I will start reading it right away and see what I can do.