Hi,
This is not so much a feature request at this stage, more a brainstorming / ideas post.
I read https://forum.babylonjs.com/t/wireframe-hidden-line-shader/5171
I have been thinking about it for a while. It is quite a change of perspective for me, because I used to work with hidden line removal code professionally (Parasolid - CADCAM), but that was all with CADCAM-style geometry (b-splines etc.), not planar meshes - we did the hidden line removal geometry in the code, whereas shading is all about letting the GPU do the heavy lifting.
I have some ideas, but I need help from someone who understands WebGL and the underlying machinery a lot better than I do.
Hidden line algorithms in CADCAM software typically deal with b-spline surfaces and curves. A face is not a triangle but may be a very large expanse of area containing lots of detail, yet defined by only one surface and, say, four edge curves. Stitch a lot of these faces together accurately with layers of topological code and data structure, define an inside and an outside, and you have the starting point of what we call a solid boundary rep model.
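Roughly the kind of structure I mean, as a minimal sketch - these are made-up, simplified types, not Parasolid's actual data structures:

```ts
// Hypothetical, simplified boundary-rep structures - purely illustrative.
interface EdgeCurve {
  // evaluate the 3D point at parameter t in [0, 1]
  evaluate(t: number): [number, number, number];
}

interface Surface {
  // evaluate the 3D point at surface parameters (u, v)
  evaluate(u: number, v: number): [number, number, number];
}

interface Face {
  surface: Surface;       // one surface carries all the geometric detail
  boundary: EdgeCurve[];  // e.g. four trimming edge curves
}

interface Solid {
  faces: Face[];          // stitched together by topology so that
                          // "inside" and "outside" are well defined
}
```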
Hidden line algorithms for this kind of model typically work in 2D and 3D simultaneously. The edge curves for the entire body are chopped up wherever they cross in the view; silhouette curves are also calculated and everything is chopped up against them as well. This chopping ensures that every segment of curve has a single visibility - it's either 100% visible or 100% invisible. At this stage, if you do not mind wasting huge amounts of CPU, you simply pick the midpoint of every segment and fire a ray to the camera or eyepoint and see if you hit anything. If you do not, then that segment is visible. The pure algorithm is very simple - real commercial algorithms contain layer after layer of performance code, but the basic principle is just chop and ray-fire.
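As a sketch of that "chop and ray-fire" idea - assuming the chopping has already happened and that a ray-casting helper exists somewhere (both are placeholders here, not real API):

```ts
// Hypothetical types and helpers - the chopping and the ray caster are
// assumed to exist elsewhere; this only shows the visibility test itself.
type Vec3 = [number, number, number];

interface Segment {
  midpoint: Vec3;   // midpoint of an already-chopped curve segment
  visible: boolean; // result of the test below
}

// Returns true if the ray from `point` towards `eye` hits any face of the
// model before reaching the eye (i.e. the point is obscured).
declare function rayHitsModel(point: Vec3, eye: Vec3): boolean;

function classifySegments(segments: Segment[], eye: Vec3): void {
  for (const seg of segments) {
    // Single visibility per segment: one midpoint test decides the whole segment.
    seg.visible = !rayHitsModel(seg.midpoint, eye);
  }
}
```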
When I started thinking instead about meshes I was still thinking in the mindset that the above paragraph suggests.
Then I started wondering how WebGL establishes visibility, since apparently there is very little global knowledge - the code only considers very localised geometry at any one time.
If you put your hand in front of your face then you self-obscure, but this is a relationship between two parts of your body that are distantly connected. Worse still, if another person obscures you then we have two separate objects, one obscuring the other, but possibly there is a gap between their arms and some of your face (perhaps) can still be seen through the gap.
The problem for GPU-based rendering, as I understand it, is that global information is not taken into account - when we are rendering your face we do not take into account where your hand is, only a few triangles of your face at any one time.
Then I read about the painter's algorithm - "don't worry about what obscures what, don't worry about intersecting one triangle with another[1] to see what part of the more distant triangle might be visible. Simply find all forward-facing triangles (facet normal dot product against the ray to the eye) and draw them, making sure the closest triangles are drawn last". The painter's algorithm has drawbacks - triangles that really intersect in 3D will not be drawn correctly, and triangles that overlap in the view must be strictly one behind the other - so Z buffering is used instead, but the principle is there: GPUs don't care about non-local (hand / face) relationships because they simply overwrite "distant" with "close".
[1] I am talking here about intersecting triangles in the view plane - intersecting in 2D - they may not intersect in 3D.
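A toy version of that painter's idea - back-to-front sort and overdraw. This is a conceptual sketch only, not how a real GPU pipeline works (the real answer in WebGL is the depth buffer doing this per fragment), and the drawing helper is a placeholder:

```ts
// Toy painter's algorithm: sort forward-facing triangles back to front and
// let nearer ones simply overwrite farther ones.
type Vec3 = [number, number, number];

interface Triangle {
  vertices: [Vec3, Vec3, Vec3];
  normal: Vec3;
}

declare function rasterize(tri: Triangle): void; // assumed drawing helper

function dot(a: Vec3, b: Vec3): number {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function centroidDepth(tri: Triangle, eye: Vec3): number {
  const [a, b, c] = tri.vertices;
  const cx = (a[0] + b[0] + c[0]) / 3 - eye[0];
  const cy = (a[1] + b[1] + c[1]) / 3 - eye[1];
  const cz = (a[2] + b[2] + c[2]) / 3 - eye[2];
  return Math.sqrt(cx * cx + cy * cy + cz * cz);
}

function painterDraw(triangles: Triangle[], eye: Vec3, viewDir: Vec3): void {
  triangles
    .filter(t => dot(t.normal, viewDir) < 0)                       // forward facing only
    .sort((a, b) => centroidDepth(b, eye) - centroidDepth(a, eye)) // farthest first
    .forEach(rasterize);                                           // near overwrites far
}
```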
This made me wonder if the components for hidden line removal might already be in place under the hood of WebGL?
1. Is it possible to determine the normal of "the other triangle" at each edge of the triangle we are rendering? If so, use this to determine whether that edge is a silhouette edge - i.e. a triangle edge lying between a forward-facing and a backward-facing triangle. (There is a rough sketch of this after the next paragraph.)
2. Silhouette edges are the only ones we want to see. Edges between triangles that are both forward facing or both backward facing are not of interest - they can be drawn in the background colour or not drawn at all.
3. Paint the "interior" of the triangle in the background colour - we want this!
Step three is the important bit - it is the one that deals with obscuration. We do not bother chopping potentially visible curves up in the view as per the CADCAM approach. We simply draw all edges that have the required silhouette property, but crucially we overpaint them with the background colour when we encounter triangles that are closer in the view and occupy some of the same 2D projection space (overlap in the view).
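On the CPU side, the silhouette edge detection from step 1 might look something like this. A minimal sketch, assuming an indexed triangle mesh and using the camera's forward direction for the facing test (a perspective-correct test would use the vector from each face to the eye instead) - nothing here is Babylon.js or WebGL API, just the idea:

```ts
// Minimal sketch of silhouette edge detection on an indexed triangle mesh.
type Vec3 = [number, number, number];

interface Mesh {
  positions: Vec3[]; // vertex positions
  indices: number[]; // 3 indices per triangle
}

function faceNormal(mesh: Mesh, tri: number): Vec3 {
  const [i0, i1, i2] = mesh.indices.slice(tri * 3, tri * 3 + 3);
  const [a, b, c] = [mesh.positions[i0], mesh.positions[i1], mesh.positions[i2]];
  const u: Vec3 = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v: Vec3 = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  return [u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2], u[0] * v[1] - u[1] * v[0]];
}

// Returns vertex index pairs for edges shared by one forward-facing and
// one backward-facing triangle.
function silhouetteEdges(mesh: Mesh, viewDir: Vec3): [number, number][] {
  // Map each undirected edge "a-b" to the triangles that use it.
  const edgeFaces = new Map<string, number[]>();
  const triCount = mesh.indices.length / 3;
  for (let t = 0; t < triCount; t++) {
    const [i0, i1, i2] = mesh.indices.slice(t * 3, t * 3 + 3);
    for (const [a, b] of [[i0, i1], [i1, i2], [i2, i0]]) {
      const key = a < b ? `${a}-${b}` : `${b}-${a}`;
      let list = edgeFaces.get(key);
      if (!list) { list = []; edgeFaces.set(key, list); }
      list.push(t);
    }
  }
  const dot = (n: Vec3) => n[0] * viewDir[0] + n[1] * viewDir[1] + n[2] * viewDir[2];
  const result: [number, number][] = [];
  for (const [key, faces] of edgeFaces) {
    if (faces.length !== 2) continue;                    // boundary / non-manifold edge
    const facing0 = dot(faceNormal(mesh, faces[0])) < 0; // forward facing?
    const facing1 = dot(faceNormal(mesh, faces[1])) < 0;
    if (facing0 !== facing1) {                           // one forward, one backward
      const [a, b] = key.split("-").map(Number);
      result.push([a, b]);
    }
  }
  return result;
}
```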
Some might say this is limited, since it requires that the picture is just like the one in the post I quoted showing a couple of blocks - is it a limitation that the "interior" of triangles has to be the same colour as the background?
I suggest not - a hidden (removed) line picture is really a line rendering of an object with the confusing detail removed. It stems back to paper-drawn engineering diagrams. Early CADCAM images were confusing: they were line based but you could see all the lines; this was before shading and modern GPUs were on the scene. What purpose is there in a hidden line picture if the interior of faces is in some way more elaborate than the plain background? Of course this implies that the background must also be plain.
I wonder if this is feasible in WebGL terms?
Requirements seem to be…

1. Ability to detect "silhouette triangle edges" - edges bordered by two triangles where one is forward facing and one is backward facing.
2. Ability to render with a texture that is identical to the background (plain) colour, insensitive to lights and all else - basically "I'm blue!!! just paint me in #000099 alright!!!" (there is a rough Babylon.js sketch of this after the list).
3. Ability to render triangle "interiors" and non-interesting edges as per 2.
4. Painter* algorithm / Z buffering that will take care of obscuration without us having to think about it - that's what I like about this idea - all the heavy lifting would still be done by the GPU.
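For requirement 2 (and partly 3), Babylon.js already has pieces that look close: an unlit, single-colour material and per-mesh edge rendering. A rough sketch, assuming a scene, a mesh and the background colour already exist in your app - and note that enableEdgesRendering keeps edges based on the angle between adjacent faces, which is not the same as the silhouette-only test in requirement 1:

```ts
import { Color3, Color4, StandardMaterial, Mesh, Scene } from "@babylonjs/core";

// Sketch only: paint a mesh in the flat background colour, ignoring lights,
// and draw its edges on top.
function makeHiddenLineStyle(scene: Scene, mesh: Mesh, background: Color3): void {
  const mat = new StandardMaterial("flatBackground", scene);
  mat.disableLighting = true;     // insensitive to lights...
  mat.emissiveColor = background; // ..."just paint me in the background colour"
  mesh.material = mat;

  // Built-in edge rendering: this keeps edges whose adjacent faces meet at
  // more than a threshold angle, which is NOT the same as keeping only
  // silhouette edges - that part would still need custom work.
  mesh.enableEdgesRendering(0.95);
  mesh.edgesWidth = 2.0;
  mesh.edgesColor = new Color4(0, 0, 0, 1);
}
```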
In the above I am kind of mixing the idea of painting a triangle "interior" and painting its edges - I realise the concept of a triangle "interior" is inaccurate. I assume edges would be drawn after the owning triangle and overwrite the boundaries of the triangle - perhaps in a different colour. That part must be do-able - unless the Meshlab options I use are not taking advantage of the GPU - Meshlab can show triangle edges and "triangle interiors" at the same time.
* I know the painter's algorithm is probably obsolete, but I like it as a conceptual illustration.
Any thoughts?