Ideas For Hidden Line Removal


This is not so much a feature request at this stage, more of a brainstorming / ideas post.

I read

I have been thinking about it for a while. It is quite a change of perspective for me, because I used to work with hidden line removal code professionally (Parasolid - CADCAM), but that was all with CADCAM-style geometry (b-splines etc.), not planar meshes. There we did the hidden line removal geometry in the code, whereas shading is all about letting the GPU do the heavy lifting.

I have some ideas but I need help from someone who understands WebGL and the underlying machinery a lot better than I do.

Hidden line algorithms in CADCAM software typically deal with b-spline surfaces and curves; a face is not a triangle but may be a very large expanse of area containing lots of detail, yet defined by only one surface and, say, four edge curves. Stitch a lot of these faces together accurately with layers of topological code and data structures, define an inside and an outside, and you have the starting point of what we call a solid boundary rep model.

Hidden line algorithms for this kind of model typically work in 2D and 3D simultaneously. The edge curves for the entire body are chopped up wherever they cross in the view; silhouette curves are also calculated and everything is chopped up against them as well. This chopping ensures that every segment of curve has a single visibility - it's either 100% visible or 100% invisible. At this stage, if you do not mind wasting huge amounts of CPU, you simply pick the midpoint of every segment and fire a ray to the camera or eye point and see if you hit anything. If you do not, then that segment is visible. The pure algorithm is very simple - real commercial algorithms contain layer after layer of performance code, but the basic principle is just chop and ray-fire.
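For what it's worth, the core of that "chop and ray-fire" step can be sketched in a few lines. This is purely an illustrative TypeScript sketch (all names are mine - nothing here is Parasolid or Babylon.js code): assuming curves have already been chopped into single-visibility segments, we classify each segment by firing a ray from its midpoint toward the eye.

```typescript
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const cross = (a: Vec3, b: Vec3): Vec3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Moller-Trumbore ray/triangle intersection; returns the ray parameter t, or null.
function rayTriangle(origin: Vec3, dir: Vec3, v0: Vec3, v1: Vec3, v2: Vec3): number | null {
  const e1 = sub(v1, v0);
  const e2 = sub(v2, v0);
  const p = cross(dir, e2);
  const det = dot(e1, p);
  if (Math.abs(det) < 1e-12) return null; // ray parallel to triangle plane
  const inv = 1 / det;
  const tv = sub(origin, v0);
  const u = dot(tv, p) * inv;
  if (u < 0 || u > 1) return null;
  const q = cross(tv, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return null;
  const t = dot(e2, q) * inv;
  return t > 1e-9 ? t : null; // ignore hits at the ray origin itself
}

// A chopped segment is visible iff the ray from its midpoint to the eye hits
// nothing before reaching the eye (t in (0, 1) with dir = eye - mid).
function segmentVisible(mid: Vec3, eye: Vec3, triangles: [Vec3, Vec3, Vec3][]): boolean {
  const dir = sub(eye, mid);
  for (const [a, b, c] of triangles) {
    const t = rayTriangle(mid, dir, a, b, c);
    if (t !== null && t < 1) return false; // an occluder lies between segment and eye
  }
  return true;
}
```

Real implementations would of course replace the brute-force loop over all triangles with spatial acceleration structures - that is where the "layer after layer of performance code" goes.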

When I started thinking instead about meshes I was still thinking in the mindset that the above paragraph suggests.

Then I started wondering how WebGL establishes visibility, since apparently there is very little global knowledge; the code only considers very localised geometry at any one time.

If you put your hand in front of your face then you self-obscure, but this is a relationship between two parts of your body that are distantly connected. Worse still, if another person obscures you then we have two separate objects, one obscuring the other - but possibly there is a gap between their arms and some of your face (perhaps) can still be seen through the gap.

The problem for GPU based rendering, as I understand it, is that global information is not taken into account - when we are rendering your face we do not take into account where your hand is, only a few triangles of your face at any one time.

Then I read about the painter's algorithm - "don't worry about what obscures what, don't worry about intersecting one triangle with another[1] to see what part of the more distant triangle might be visible. Simply find all forward facing triangles (facet normal dot product against the ray to the eye) and draw them, making sure the closest triangles are drawn last". The painter's algorithm has drawbacks - triangles that really intersect in 3D will not be drawn correctly, and triangles that overlap in the view must be physically one strictly behind the other - so Z buffering is used instead. But the principle is there: GPUs don't care about non-local (hand/face) relationships because they simply overwrite 'distant' with 'close'.

[1] I am talking here of intersecting triangles in the view plane - intersecting in 2D - they may not intersect in 3D.

This made me wonder if the components for hidden line removal might already be in place under the hood of WebGL?

  1. Is it possible to determine the normal of 'the other triangle' at each edge of the triangle we are rendering? If so, then use this to determine whether this edge is a silhouette edge - i.e. a triangle edge lying between a forward facing and a backward facing triangle.

  2. Silhouette edges are the only ones we want to see. Edges between triangles that are both forward facing or backward facing are not of interest - they can be drawn in the background colour or not drawn at all.

  3. Paint the 'interior' of the triangle in the background colour - we want this!

Step three is the important bit - it is the one that deals with obscuration; we do not bother chopping potentially visible curves up in the view as per the CADCAM approach. We simply draw all edges that have the required silhouette property, but crucially we overpaint them with background colour when we encounter triangles that are closer in the view and occupy some of the same 2D projection space (overlap in the view).
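For point 1, finding 'the other triangle' at each edge only needs an edge-to-face adjacency map built from the index buffer. A hypothetical sketch (my own names, not actual Babylon.js code):

```typescript
type EdgeKey = string;

// Map each undirected edge "lo_hi" (vertex index pair) to the indices of the
// faces that share it, given a flat triangle index buffer.
function buildEdgeAdjacency(indices: number[]): Map<EdgeKey, number[]> {
  const adjacency = new Map<EdgeKey, number[]>();
  for (let face = 0; face * 3 < indices.length; face++) {
    const a = indices[3 * face], b = indices[3 * face + 1], c = indices[3 * face + 2];
    for (const [i, j] of [[a, b], [b, c], [c, a]]) {
      const key = i < j ? `${i}_${j}` : `${j}_${i}`; // undirected edge key
      const faces = adjacency.get(key) ?? [];
      faces.push(face);
      adjacency.set(key, faces);
    }
  }
  return adjacency;
}
```

An edge that maps to two faces is interior (and a silhouette candidate once we compare the two face normals); an edge that maps to only one face is a mesh border edge.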

Some might say this is limited since it requires that the picture is just like the one in the post I quoted showing a couple of blocks - is it a limitation that the ‘interior’ of triangles have to be the same colour as the background?

I suggest not - a hidden line (removed) picture is really a line rendering of an object with the confusing detail removed. It stems back to paper drawn engineering diagrams; early CADCAM images were confusing - they were line based but you could see all the lines - this was before shading and modern GPUs were on the scene. What purpose is there in a hidden line picture if the interior of faces is in some way more elaborate than the plain background? Of course this implies that the background must also be plain.

I wonder if this is feasible in WebGL terms?

Requirements seem to be…

  1. Ability to detect ‘silhouette triangle edges’ - lines bordered by two triangles where one is forward facing and one is backward facing.

  2. Ability to render with a texture that is identical to background (plain) colour, insensitive to lights and all else - basically “I’m blue !!! just paint me in #000099 alright!!!”.

  3. Ability to render triangle ‘interiors’ and non-interesting edges as per 2.

  4. Painter* algorithm / Z buffering that will take care of obscuration without us having to think about it - that's what I like about this idea - all the heavy lifting would still be done by the GPU.

In the above I am kind of mixing the idea of painting a triangle 'interior' and painting its edges - I realise the concept of a triangle 'interior' is inaccurate. I assume edges would be drawn after the owning triangle and overwrite the boundaries of the triangle - perhaps in a different colour. That part must be "do-able" - unless the Meshlab options I use are not taking advantage of the GPU - Meshlab can show triangle edges and 'triangle interiors' at the same time.

  • I know the painter's algorithm is probably obsolete but I like it as a conceptual illustration.

Any thoughts?

Maybe unrelated but this is how the BoundingBoxRenderer works:

In a nutshell we render the mesh twice with CW and then CCW winding order


Many thanks Deltakosh, I will see if I can find the code and take a look.


Hi Deltakosh,

I did some code searching, first time I have looked at the internals.
This file seems to be the best model I can find to work with.


This already has most of the key components. As I understand it, it decides whether to draw a particular edge between two triangles based on whether the edge is judged to be "sharp" according to a supplied angle - "epsilon".

So to “morph” this file as a test bed the steps I envisage are…

  1. Get a reference to an eye-point in world space - the camera position.
  2. Instead of comparing the triangle normals with one another compare them with a vector from the eye to the midpoint of the edge - use this to determine if one triangle is facing the camera and one is facing away - draw edges for which this is true.

There are two situations which meet our criteria - one where the backward facing triangle is in front, and one where the forward facing triangle is in front (concave vs convex is all a matter of where the 'insides' are) - so there is a little more work to do to remove the convex edges. However, the rationale of this approach is that we do not have to worry too much about self obscuration, because we intend to render all triangles and non-interesting edges in the background colour, so we let the GPU do the heavy lifting there.

Even if we have a loose test that does not consider convexity - edges that pass that loose test will be automatically covered up later.

Perhaps a better way of looking at it is to say…

  1. Relax the epsilon criterion as it stands so that it says "every edge gets drawn" - even smooth edges.
  2. But then add a condition which decides not to draw edges where both triangle normals point towards the eye or both point away from the eye.
  3. Allow GPU depth calculations to deal with obscuration.

The only problem I can see is that the ‘edge renderer’ code is currently run as a post process after the render of all the triangles - draw triangles first and then overlay edges.

The hidden line removal demands that edges and triangles are rendered at the same time, interleaved as it were. We want our selective edge rendering to occur at the same time as one of the parent triangles is being rendered, and then allow depth testing / ordering in the GPU to cover some or all of that edge if it is fully or partially obscured by triangles other than the ones we have been considering. It does not matter if we end up rendering an edge twice because we have "forgotten", as long as we only render that edge when we are rendering one of the triangles it lies between.

In effect we want to piggyback on the depth testing of the GPU to get edge rendering correctly ordered or synchronised in depth, so that edges can be covered up by triangles that are closer to the eye.
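That piggyback works because the depth test is order-independent at each pixel: whichever fragment is nearest wins, whether it belongs to a triangle or an edge. A toy single-pixel illustration of the principle (not real GPU code):

```typescript
interface Fragment {
  depth: number;   // distance from the eye
  colour: string;  // what this fragment would paint
}

// Standard LESS depth test at one pixel: the closest fragment wins,
// regardless of the order in which fragments arrive.
function resolvePixel(fragments: Fragment[], clearColour: string): string {
  let best: Fragment = { depth: Infinity, colour: clearColour };
  for (const f of fragments) {
    if (f.depth < best.depth) best = f;
  }
  return best.colour;
}
```

So a background-coloured triangle fragment automatically covers a more distant edge fragment, with no global knowledge needed - which is exactly the "cover up" behaviour the scheme relies on.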

I am going to have a crack at it - probably will take a while since I have not yet compiled or built BabylonJS locally. I will probably start out adapting the file mentioned above to see where I get with that.

If anyone has knowledge about whether it would be possible to "morph" the given source code file so that it does not rely on a pre-render of triangles but does it on the fly alongside the edges, I would be grateful for a heads up. ( Babylon.js-master/src/Rendering/edgesRenderer.ts ).

I suppose a similar approach is to ask the GPU to render triangles (solid colour) and edges but to do a pre-process where only edges of interest receive a colour that is not the same as the background colour.

As mentioned in a previous post, the whole thing relies on being able to render triangles in a colour with no fancy options (reflection, emissivity etc.). We want a completely flat colour, #003355 say, which is identical to the background colour and invariant to lighting conditions; otherwise the illusion of background colour 'leaking' through the interior of triangles will not work properly. But again this would require that the GPU draw edges at the same time it draws either of the triangles that share that edge.

I took a look at the existing edge functionality and for my examples it seems very fast, my models render with edges ‘on’ - without any noticeable time lag so I think at a performance level it looks perfectly possible to have real time responsive hidden line removal.

The main hurdle is probably going to be connected with getting synchronisation between edge render and triangle render so we get the desired brute force “cover up” to deal with non-local obscuration.

I think real time animation that appears to be non-shaded, but instead a sketch with hidden line removal, could look very cool. In a way it's regressive, since shaded rendering is perhaps technically superior, and perhaps a little "old school", but it might have its own charm.

This seems a good plan: Start Contributing to Babylon.js | Babylon.js Documentation

I do not fully get why you need to pass as we have the depth buffer to reject pixel overdraw

Probably time to get some oil on my hands :grin:


Update: I have started down the road of downloading the necessaries to start work on coding inside Babylon. I have not yet gotten oil on my hands as I am struggling with the build, but I have received help and hope to be able to build locally soon.

The more I look at the file


The more I think it's already pretty much what is required - basically it draws the edge between two triangles if the edge is "sharper" than a given amount.

I was thinking about hidden line removal options in CADCAM systems and what we actually want out of them.

Typically there are two types of line drawn.

  1. An edge or a silhouette curve where visibility changes. Silhouette curves arise in CADCAM because the basic unit of currency for ‘area’ is a face which can be a large complex bspline surface which may change visibility several times because parts of it can be pointing toward the eye (camera) and parts can be pointing away. Silhouette curves effectively split complex faces into regions of single visibility ( as seen from a particular view position). Meshes do not have this complexity because faces are now triangles, they are planar and can only be 100% front facing or 100% back facing or edge on.

  2. Edges between faces that are both front facing - for instance a line drawing of a cube might show a corner with 3 edges, each lying between forward facing faces.

Usually CADCAM systems have a smoothness option - there may be hundreds of edges between smoothly joined faces (both front facing) that do not add meaning to the picture; they may just represent the size of sheet material that is riveted on to form a plane body. So very often smooth edges are considered irrelevant, but sharp edges are taken to indicate something important about shape and so are drawn.

This makes me wonder whether hidden line removal might be seen as a natural extension of edgesRenderer.ts - it already has the smoothness option 'epsilon' - the addition is to say 'also draw edges where visibility changes'.

There is the caveat that the plan also relies on triangles being drawn in a monotone colour indistinguishable from the background.

First results:

I took the file Babylon.js-master/src/Rendering/edgesRenderer.ts and made a few line changes - very crude.

I simply took the camera position minus the midpoint of each edge as one vector (normalised) and then calculated dot products of this with the face normals either side of the edge under consideration. I commented out the "sharpness" (epsilon) test and instead checked if the dot products were of opposite polarity, indicating one front facing and one back facing triangle - if so then draw the edge, otherwise do not.
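In isolation, that test amounts to something like the following - an illustrative sketch with made-up names, not the actual edgesRenderer.ts change:

```typescript
type Vec3 = [number, number, number];

const dot3 = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// An edge is a silhouette edge if the two adjacent face normals have dot
// products of opposite sign with the vector from the edge midpoint to the eye.
function isSilhouetteEdge(cameraPos: Vec3, edgeMid: Vec3, n0: Vec3, n1: Vec3): boolean {
  const toEye: Vec3 = [
    cameraPos[0] - edgeMid[0],
    cameraPos[1] - edgeMid[1],
    cameraPos[2] - edgeMid[2],
  ];
  // Normalising toEye is unnecessary for a pure sign test.
  const d0 = dot3(n0, toEye);
  const d1 = dot3(n1, toEye);
  return d0 * d1 < 0; // opposite polarity: one face toward the eye, one away
}
```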

The next thing was to get the triangle faces to be the same colour as the background so that they can occlude edges in the depth test but do so in a way that makes it look as if they are not there and we are simply seeing the background.

I did not achieve this in the code - instead I was playing around with the Babylon Inspector. I clicked on Scene in the left hand filter panel and rendering mode in the right hand panel, then set the fog mode option to linear and set the colour for the fog to the same value as I have for clear colour. This had the desired effect, although I have not yet read the doc on "fog". I have not investigated yet, but it is interesting that fog seems to have the desired effect. If I had not chanced upon this, the pictures would show a 'solid' white (shaded) material where you would expect bone, and I would have had to contrast this by setting the lines currently shown in white to a contrasting colour like red.

One of the ribs shown is dense with lines - this mesh is a bit odd and I need to check it out, probably some badly set normals. I am ignoring that for now - it's the mesh at fault, I know that; it always looks a little odd in shaded mode as well.

The downside of this method is that the hidden line removal "effect" is not dynamic. Judging by observed results only, it looks as if the code in edgesRenderer.ts is called only once and does not update when the view direction changes. This makes a lot of sense, since sharp edges are not defined by view direction and so need calculating only once - but the conditions for hidden line removal are very much dependent on view direction, since we draw only lines (edges) that lie between front and back facing faces, and that will of course change as the view direction changes.

It's actually what I would call hybrid behaviour, because of course the occluding property of triangles is recalculated when the view changes - it's part and parcel of ordinary shading.

The actual result obtained thus far might be loosely described as follows.

  1. Silhouette edges ( between front and back facing faces ) are calculated once and stored as a mesh, they are visible from any view but are not recalculated and so stay in position with respect to their “owner” triangles - just as if we had drawn the silhouette edges onto the model.

  2. Silhouette edges are correctly occluded (for the most part) when the model is viewed from different angles but the silhouette edges themselves are no longer correct for that view, we should have selected a different set, discarded the old line mesh and made up a new one.

Here we see the same model from the back - occlusion is working correctly but the actual curves being occluded (or not occluded as may be the case) are not correct for this view, they were calculated when the model was first rendered from the view position indicated by the first two pictures.

Thoughts to self / others

  1. Are the hidden line removed drawings worth pursuing? They are a bit "bitty" and will probably look best on very refined (high density) meshes - they might look better with tweaks etc. The third picture should of course be ignored when judging quality.

  2. Why does fog work so effectively? Simply not investigated yet. A guess would be that fog processing occurs after lines and triangles are drawn; fog perhaps ignores wire meshes and so just obscures the triangles, but crucially by that stage the triangles have done their job of obscuring parts of wire edges. This does suggest that there is an opportunity within the code and GPU phases to get a cheap "now I am done with triangles, so just obscure them (with whatever functionality is used for the fog feature) but allow edges not occluded by triangles to show through". My fog is very thick, since the scale of my model is around 1000 and my camera is at a distance of something like 5000 - more than likely the bones didn't make it through the fog - perfect!

  3. How easy would it be to get the code to throw away the edge line mesh it has created for an "old" view and recalculate it for a new view? I.e. something short of the sophistication of real time animation with hidden line removal, but at least something that could respond to a "refresh" command from the user, or simply trigger when a camera moves or similar.

  4. In my case I had to set the line width to 300. I can see in the code that the number supplied is divided by a factor of 50 - my models are large in Babylon space (1000 units tall) so the numbers seem sensible, but I have not looked at this side of things carefully - I just cranked the number up enough to get a decent result.


Update: Just for fun I made the skeleton do a squat - the animation actually looked quite good.
This is because all joint movements in the squat (knees, ankles) are rotations about axes that are parallel to the view direction or nearly so, and all translations are parallel to the view plane - the skeleton remains “side-on”.

The silhouette edge conditions do not change very much when an object is rotated about an axis parallel to the view direction or moved parallel to the view plane; in fact the only changes are down to perspective - for an orthographic view the silhouette edge conditions would not change at all.
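The orthographic case is easy to check numerically: rotating a normal about an axis parallel to the view direction leaves its dot product with the view direction unchanged, so the front/back classification - and hence the silhouette set - cannot change. A small sketch (my own names, just a numeric check):

```typescript
type Vec3 = [number, number, number];

const dotV = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Rotate a vector about the z axis (taken here as the orthographic view
// direction) by angle theta; the z component is untouched.
function rotateZ(n: Vec3, theta: number): Vec3 {
  const c = Math.cos(theta);
  const s = Math.sin(theta);
  return [c * n[0] - s * n[1], s * n[0] + c * n[1], n[2]];
}
```

Since `rotateZ` never changes the z component, `dotV(rotateZ(n, theta), [0, 0, 1])` equals `dotV(n, [0, 0, 1])` for any angle - which is why the squat animation gets away without recalculating the silhouette edges.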

The neat thing is that the occlusion by triangles does work properly and is dynamic - only the displayed edge curves remain unchanged - so for this special case a reasonable animation / video can be obtained. It is restrictive, but at least gives an indication of what animation with hidden line removal looks like.

Fog has no impact on the edge rendering because a specific vertex/fragment shader is used to render the lines, and those shaders don’t take into account fog.

You can make the triangles disappear more easily (better perf) than using fog by disabling color writing before rendering your mesh:

mesh.onBeforeRenderObservable.add(() => engine.setColorWrite(false));
mesh.onAfterRenderObservable.add(() => engine.setColorWrite(true));


mesh.onBeforeRenderObservable.add(() => engine.setColorWrite(false));
mesh.onAfterRenderObservable.add(() => engine.setColorWrite(true));

Thanks :grinning: - I just tried your suggestion out and it works perfectly; a few imperfections with fog have been solved with this method.

Whoa cool amazing.

I cut the video down to a reasonable file size so its also super short but it shows the animation.

Note how, although the silhouette edges are not being adjusted, it does not matter too much since the view is not changing. Sure, if the figure bends down far enough then we would see the top of the skull, so there is a limit to how far this can be pushed, but it works to a degree.

Also note that despite the fact that the curves are not being recalculated, the portion of them that is visible or occluded is being dynamically calculated, so the hands appear correct as they interact visually with the legs.



I like this very much. I often needed some functionality with rendering edges.

My dream would additionally be to get the actual points, to create an SVG file of it.
And the biggest dream was to also create views with a cut line.
So make a view plane that splits an object and shows the lines from that view.
I think it would use an orthographic camera that has its viewport in the object, and the lines would intersect with the camera itself. I needed this a lot to create front and side views for customers.
I ended up calculating a lot of points from the triangles, doing hull detection, and then sorting it manually.
Still, I was not really able to create such a cut through the scene, and complex objects needed a 2D view image.

Did you investigate further since May? Is it still interesting to you?