Get a frame buffer with the world normals

Hello folks,

Following my previous topic, an additional question remains: how can I create a buffer containing the fragment normals (with or without GeometryBufferRenderer)?
I would like to keep smooth shading on the mesh, but I need the normal data as if flat shading were applied.

Previous topic

Use GeometryBuffer to get normal sampler

All my attempts result in a black screen.
I think it shows me the normal of something like a plane facing the camera, because when I display the positions, the screen is split into four quarters, each a solid color (green, yellow, black, red, in order) :jamaica:.


Thanks in advance!

Can you produce a repro of your current state in the PG so maybe @julien-moreau can give you a hand?

I decided to try one last thing before creating a playground:
I use the positionSampler to compute flat-shaded normals, while keeping smooth shading for rendering (so without calling mesh.convertToFlatShadedMesh()).
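The math I use is essentially the flat face normal from the cross product of two edge vectors, which is what `normalize(cross(dFdx(position), dFdy(position)))` approximates per fragment in GLSL. A minimal JavaScript sketch of that math (illustrative only, not my actual shader or Babylon.js API):

```javascript
// Flat-shaded face normal from three positions — the same math that
// normalize(cross(dFdx(pos), dFdy(pos))) approximates per pixel in GLSL.
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}
function faceNormal(p0, p1, p2) {
  // Two edge vectors sharing p0 span the face; their cross product
  // is perpendicular to it.
  return normalize(cross(sub(p1, p0), sub(p2, p0)));
}

// A triangle lying in the xy-plane has normal (0, 0, 1):
console.log(faceNormal([0, 0, 0], [1, 0, 0], [0, 1, 0])); // [0, 0, 1]
```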

The shader works very well in Babylon CYOS, but the rendering is awful (very pixelated) in my post-process:

I think it’s related to the fact that I use a post-process to do the calculation, and I don’t understand this part of postprocess.vertex.fx:


const vec2 madd = vec2(0.5, 0.5);

void main(void) {
	vUV = (position * madd + madd) * scale;
	gl_Position = vec4(position, 0.0, 1.0);
}

Without this, the rendering is only done on a quarter of the screen. (Sorry if that question seems silly to you :sweat_smile:)

Have a good evening! :slight_smile:

The madd is just a way to map from position (with x and y between -1 and 1) to UVs, which are between 0 and 1.
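In other words, the fullscreen-quad vertices live in clip space, and `madd = 0.5` remaps that range to UV space. A tiny sketch of the remapping (pure math, no Babylon.js involved):

```javascript
// Fullscreen-quad vertex positions are in clip space: x and y in [-1, 1].
// madd = 0.5 remaps them to UV space [0, 1]: uv = position * 0.5 + 0.5.
const madd = 0.5;
function positionToUV(p) { return p * madd + madd; }

console.log(positionToUV(-1)); // 0   (bottom/left corner)
console.log(positionToUV(0));  // 0.5 (center)
console.log(positionToUV(1));  // 1   (top/right corner)
```

Without the remapping, UVs in [-1, 1] only overlap the valid [0, 1] range in one quadrant, which is why only a quarter of the screen renders.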

Do you want to create a PG so we can discuss with a real example?

Ok thanks for the info :slight_smile:.

Here is the example playground:

Reading this article for more information about derivatives, my understanding is:

  • only the edges look “pixelated”, because the difference in values there is higher than between neighboring pixels. So the result looks correct given the use of derivatives. In your screenshot you can see these differences at the edges of the cubes and the edges of the geometry in the middle of the screen.
  • It is not reproduced in the CYOS because, when the pixel shader runs there, the position value comes only from the current triangle of the geometry being rendered. So you won’t get large differences in values, unlike the post-process, which uses the position sampler and can sample positions from another object (neighboring pixels can belong to another geometry).
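To illustrate the second point with made-up numbers: along a scanline of the position buffer, positions vary smoothly inside one surface, but jump where the next pixel belongs to another object, so a forward-difference “derivative” spikes there. A small sketch (illustrative values, not real buffer data):

```javascript
// Approximate a screen-space derivative of position along a scanline
// with forward differences. Inside one surface the derivative is small
// and constant; where the position sampler crosses to another object it
// spikes — producing the "pixelated" edges seen in the post-process.
const scanline = [0.0, 0.1, 0.2, 0.3, 5.0, 5.1, 5.2]; // object A, then object B
const dFdxApprox = scanline.slice(1).map((p, i) => p - scanline[i]);
console.log(dFdxApprox); // ≈ [0.1, 0.1, 0.1, 4.7, 0.1, 0.1] — spike at the boundary
```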

I’m not sure I’m being clear, so don’t hesitate to push back ^^

First of all, thank you for the answer @julien-moreau.

Indeed, it makes sense. What is limiting in my case is the fact that post-processing applies to a 2D plane (image) and not to a 3D world.
So I’m going to give up on that idea.

Do you have a suggestion for edge detection (internal and external edges) that doesn’t require modifying the 3D model, please?
Maybe I should try multi-pass shaders, but I don’t know whether that’s the most efficient solution.
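For context, the kind of screen-space edge detection I have in mind is a convolution filter (e.g. Sobel) over the depth or normal buffer. A minimal sketch of the idea on a small grayscale “depth buffer” (generic technique, not Babylon.js API):

```javascript
// Minimal Sobel edge detector over a small grayscale "depth buffer".
const GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]; // horizontal gradient kernel
const GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]; // vertical gradient kernel

function sobel(img) {
  const h = img.length, w = img[0].length;
  const out = Array.from({ length: h }, () => new Array(w).fill(0));
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      let gx = 0, gy = 0;
      for (let j = -1; j <= 1; j++) {
        for (let i = -1; i <= 1; i++) {
          gx += GX[j + 1][i + 1] * img[y + j][x + i];
          gy += GY[j + 1][i + 1] * img[y + j][x + i];
        }
      }
      out[y][x] = Math.hypot(gx, gy); // gradient magnitude = edge strength
    }
  }
  return out;
}

// A vertical step in "depth" produces a strong response along the step:
const depth = [
  [0, 0, 1, 1],
  [0, 0, 1, 1],
  [0, 0, 1, 1],
  [0, 0, 1, 1],
];
console.log(sobel(depth)[1][1]); // 4
```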

You can also do it without a post-process.

1 Like

I also read this article and it looks really promising.

I’m going to test it, as I’ve never tried edge detection and I’m curious =D


Please, let us know :slight_smile:

1 Like