Following my previous topic, an additional question remains: how can I create a buffer containing the fragment normals (with or without GeometryBufferRenderer), please?
I would like to keep the smooth (normal) shading on the mesh for rendering, but I need the normal data as it would be with flat shading.
All my different attempts result in a black screen.
I think it is showing me the normal of something like a plane in front of the camera, because when I display the positions, the screen is split into four quarters, each a solid color (green, yellow, black and red, in that order).
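For context, here is a minimal sketch of the kind of setup I would expect to work with GeometryBufferRenderer (just a sketch, not something I have working: the texture index of the normal buffer and the sampler name are my guesses, and `scene` / `camera` are the usual variables):

```ts
// Sketch only: enable the G-buffer and feed its normal texture to a post-process.
const gbr = scene.enableGeometryBufferRenderer();
if (gbr) {
    gbr.enablePosition = true; // also write world positions into the G-buffer

    BABYLON.Effect.ShadersStore["showNormalsFragmentShader"] = `
        precision highp float;
        varying vec2 vUV;
        uniform sampler2D normalSampler; // G-buffer normals
        void main(void) {
            vec3 n = texture2D(normalSampler, vUV).xyz;
            gl_FragColor = vec4(n * 0.5 + 0.5, 1.0); // remap to [0,1] for display
        }
    `;

    const pp = new BABYLON.PostProcess(
        "showNormals", "showNormals",
        [], ["normalSampler"], 1.0, camera
    );
    pp.onApply = (effect) => {
        // In the default layout I believe index 1 holds the normals
        // (index 0 depth, index 2 positions when enablePosition is true).
        effect.setTexture("normalSampler", gbr.getGBuffer().textures[1]);
    };
}
```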
I chose to try one last thing before creating a playground.
I use the positionSampler to compute the flat-shading normals, but I still keep the smooth shading for rendering (so without using mesh.convertToFlatShadedMesh()).
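The core of the shader is just reconstructing a face normal from the sampled positions with screen-space derivatives, something along these lines (sketch of the post-process version; positionSampler is assumed to hold world-space positions from the G-buffer):

```ts
// Sketch of the flat-normal reconstruction in the post-process fragment shader.
BABYLON.Effect.ShadersStore["flatNormalsFragmentShader"] = `
    #extension GL_OES_standard_derivatives : enable // needed on WebGL1
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D positionSampler; // assumed: world-space positions
    void main(void) {
        vec3 p = texture2D(positionSampler, vUV).xyz;
        // The screen-space derivatives of the sampled position give two vectors
        // roughly lying on the surface; their cross product is the flat normal.
        vec3 n = normalize(cross(dFdx(p), dFdy(p)));
        gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
    }
`;
```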
The shader works very well in Babylon CYOS (https://cyos.babylonjs.com/#Y4TI8V#2), but the rendering is awful (very pixelated) in my post-process:
Only the edges look “pixelated”, because there the difference in values is much higher than between neighbouring pixels, so the result looks correct given the use of derivatives. In your screenshot you can see those differences on the edges of the cubes and on the edges of the geometry in the middle of the screen.
It is not reproduced in CYOS because there, when the pixel shader is executed, the position value comes only from the current triangle of the geometry being rendered. So you will not get huge differences in values, unlike in the post-process, which uses the position sampler and can therefore sample positions from another object (neighbouring pixels can come from another geometry).
I’m not sure I’m being clear, so don’t hesitate to argue ^^
Indeed, that makes sense. What is limiting in my case is the fact that the post-process applies to a 2D image and not to the 3D world.
So I’m going to give up on that idea.
Do you have a suggestion for edge detection (internal and external edges) that does not require tinkering with the 3D model, please?
Maybe I should try multi-pass shaders, but I don’t know whether that is the best solution.
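One idea I may try instead: keep the G-buffer normals as they are and compare each pixel with its neighbours directly in the post-process, flagging an edge where they differ too much. Roughly like this (sketch only; the threshold, the sampler names and the `gbr` / `camera` / `engine` variables are my assumptions):

```ts
// Sketch of screen-space edge detection on the G-buffer normals.
const gbr = scene.enableGeometryBufferRenderer(); // same G-buffer as before

BABYLON.Effect.ShadersStore["normalEdgesFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D textureSampler; // scene color
    uniform sampler2D normalSampler;  // G-buffer normals
    uniform vec2 texelSize;           // 1.0 / render size
    void main(void) {
        vec3 n  = texture2D(normalSampler, vUV).xyz;
        vec3 nR = texture2D(normalSampler, vUV + vec2(texelSize.x, 0.0)).xyz;
        vec3 nU = texture2D(normalSampler, vUV + vec2(0.0, texelSize.y)).xyz;
        // An edge is where the normal changes abruptly between neighbours.
        float edge = step(0.4, length(n - nR) + length(n - nU));
        vec3 color = texture2D(textureSampler, vUV).rgb;
        gl_FragColor = vec4(mix(color, vec3(0.0), edge), 1.0);
    }
`;

const edgePP = new BABYLON.PostProcess(
    "normalEdges", "normalEdges",
    ["texelSize"], ["normalSampler"], 1.0, camera
);
edgePP.onApply = (effect) => {
    effect.setTexture("normalSampler", gbr!.getGBuffer().textures[1]);
    effect.setFloat2("texelSize",
        1 / engine.getRenderWidth(), 1 / engine.getRenderHeight());
};
```

I imagine the same comparison on depth would also catch silhouettes where two surfaces happen to share the same normal, but maybe there is a better approach?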