Blending LODs on the GPU?

So I was trying to come up with a way to do LODs with a Custom Material controlling the switching of the levels, so that I can blend between them and not have them “snap” between levels.

I am really close, but for some reason I am not able to get it just right.

If you move the camera in and out you will see it’s kind of working, but it is doing some odd stuff.

Any ideas?

You probably don’t want vEnabled to be interpolated, so use the flat qualifier with it. Note that it won’t work with the varying type, so you have to use out (in vertex) and in (in fragment) instead:
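Something like this (a minimal sketch, not the original playground code; it assumes WebGL2, since flat requires GLSL ES 3.00, and the shader names and the enabled attribute are my own):

```javascript
// Minimal sketch: declare vEnabled with the flat qualifier on both sides so the
// rasterizer does not interpolate it. flat does not work with "varying", so the
// declaration uses "out" in the vertex shader and "in" in the fragment shader;
// Babylon's shader processor converts the rest of the WebGL1-style code for WebGL2.
BABYLON.Effect.ShadersStore["lodVertexShader"] = `
    precision highp float;
    attribute vec3 position;
    attribute float enabled;          // hypothetical per-vertex LOD flag
    uniform mat4 worldViewProjection;
    flat out float vEnabled;          // flat + out, instead of "varying float vEnabled;"
    void main() {
        vEnabled = enabled;
        gl_Position = worldViewProjection * vec4(position, 1.0);
    }
`;

BABYLON.Effect.ShadersStore["lodFragmentShader"] = `
    precision highp float;
    flat in float vEnabled;           // must be flat + in here as well
    void main() {
        gl_FragColor = vec4(1.0, 1.0, 1.0, vEnabled); // alpha toggle, replaced by discard below
    }
`;
```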


That got the artifacts to go away. Now, how do I get the correct shapes to appear?

It’s like only the first LOD will display; then, when it should switch to the next one, it just shows a white sphere. We never see the cube for some reason.

Any ideas on that? BTW, thanks, I never knew about flat.

You have to offset the indices for meshes #2, #3, etc. Also, you had a bug when pushing data into uvs2: you pushed the data twice.
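In other words, when concatenating the LOD meshes into one buffer, every index of mesh #2 must be shifted by the number of vertices already written for mesh #1, and so on. A rough sketch of that (variable names are mine, not the playground's):

```javascript
// Rough sketch of merging several LOD meshes into one set of buffers.
// Each index of a later mesh must be offset by the total vertex count of the
// meshes that came before it, and uvs2 must be pushed exactly once per vertex.
const positions = [], uvs2 = [], indices = [];
let vertexOffset = 0;                        // running vertex count, not index count

for (let lod = 0; lod < meshes.length; lod++) {
    const pos = meshes[lod].getVerticesData(BABYLON.VertexBuffer.PositionKind);
    const idx = meshes[lod].getIndices();
    const vertexCount = pos.length / 3;

    positions.push(...pos);
    for (let v = 0; v < vertexCount; v++) {
        uvs2.push(lod, 0);                   // one uv2 entry per vertex (the double push was the bug)
    }
    for (let i = 0; i < idx.length; i++) {
        indices.push(idx[i] + vertexOffset); // shift into the merged vertex range
    }
    vertexOffset += vertexCount;
}
```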

Last thing: you should discard the pixel in the fragment shader instead of setting the alpha to 0, else you would need to enable alpha blending:
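Something along these lines (a sketch of the fragment side only, continuing the hypothetical shader above):

```javascript
// Sketch: discard the fragment outright instead of writing alpha = 0,
// so alpha blending never needs to be enabled for these meshes.
BABYLON.Effect.ShadersStore["lodFragmentShader"] = `
    precision highp float;
    flat in float vEnabled;
    void main() {
        if (vEnabled < 0.5) {
            discard;                         // kill the pixel for disabled LODs
        }
        gl_FragColor = vec4(1.0);
    }
`;
```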


I had tried discarding, but did not think of offsetting the indices. Good call!

Thank you by the way.

Next question: do you think this would perform better or worse than handling the LOD switching on the CPU?

I am thinking about keeping the alpha part, though, so I can add a smooth fade between LODs, unless you can think of a better way.

I would think that handling LODs on the CPU is faster, because with the GPU approach you have to pass the data for all LODs to the GPU, and all of that data is still processed by the GPU even if some pixels are discarded. By doing it on the CPU, you only render the right LOD. But I guess you will need to test it to know.
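For reference, the CPU-side approach being compared against is Babylon's built-in addLODLevel switching, where only the mesh selected for the current camera distance is drawn each frame. A sketch (the distances are made up):

```javascript
// For comparison: Babylon's built-in CPU-side LOD switching. Only the mesh
// selected for the current camera distance is sent to the GPU each frame.
const lod0 = BABYLON.MeshBuilder.CreateSphere("lod0", { segments: 32 }, scene);
const lod1 = BABYLON.MeshBuilder.CreateSphere("lod1", { segments: 8 }, scene);
const lod2 = BABYLON.MeshBuilder.CreateBox("lod2", {}, scene);

lod0.addLODLevel(30, lod1);   // use lod1 beyond 30 units
lod0.addLODLevel(60, lod2);   // use lod2 beyond 60 units
lod0.addLODLevel(120, null);  // cull entirely beyond 120 units
```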

Enabling alpha blending does not seem like a good idea to me, because you would need to enable it for all meshes (or at least all meshes that have LODs), and alpha-blended meshes cost more on the GPU than non-alpha-blended meshes. Also, those meshes won’t be rendered into the depth buffer anymore, and you may get some rendering artifacts because of that.


@Evgeni_Popov I figured out a use for it!

So I have a scene with a bunch of trees and foliage, and instead of letting the LODs be calculated per node, I am chunking the area and merging all of the nodes and their LODs into a single buffer. It seems to be fairly efficient, and it seems to let me handle more elements than the CPU-based approach.
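Roughly, the idea looks like this (a sketch under my own assumptions; CHUNK_SIZE, the grid scheme, and lodMaterial are hypothetical, not the actual playground code):

```javascript
// Sketch of the chunking idea: bucket the foliage meshes by a world-space grid
// cell, then merge each bucket (all nodes plus their LOD variants) into a
// single mesh so the GPU shader does the LOD selection per chunk.
const CHUNK_SIZE = 50;
const chunks = new Map();

for (const mesh of foliageMeshes) {
    const key = Math.floor(mesh.position.x / CHUNK_SIZE) + "_" +
                Math.floor(mesh.position.z / CHUNK_SIZE);
    if (!chunks.has(key)) chunks.set(key, []);
    chunks.get(key).push(mesh);
}

for (const [key, bucket] of chunks) {
    // MergeMeshes(meshes, disposeSource, allow32BitsIndices)
    const merged = BABYLON.Mesh.MergeMeshes(bucket, true, true);
    merged.name = "chunk_" + key;
    merged.material = lodMaterial;   // the custom GPU-LOD material
}
```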

I’m stoked, thanks for your help.

Update:

Can you actually take a look at line 155 and toggle it to true, then help me understand why, after I merge them, the meshes act differently than when that line is false and I keep them as two meshes?

In the scene where I was combining them all locally, I did not realize that the entire zone was toggling, like the behavior you see when line 155 is set to true.

So I added an offsetTexture and moved the LOD information and the pixel to sample from the offsetTexture into the color channel. Everything works when the meshes are not merged, so I’m kind of confused about what’s up.

You use an RGBA texture, so you need 4 components for each position:
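That is, the data for the offset texture has to be written with a stride of 4 (RGBA), not 3. A sketch, assuming the offsets live in a float RawTexture sampled with nearest filtering:

```javascript
// Sketch: an RGBA texture holds 4 floats per texel, so positions must be
// written with a stride of 4 (the 4th component is padding or spare data).
const count = offsets.length;                // one vec3 offset per element (assumption)
const data = new Float32Array(count * 4);
for (let i = 0; i < count; i++) {
    data[i * 4 + 0] = offsets[i].x;
    data[i * 4 + 1] = offsets[i].y;
    data[i * 4 + 2] = offsets[i].z;
    data[i * 4 + 3] = 1.0;                   // unused 4th component
}
const offsetTexture = BABYLON.RawTexture.CreateRGBATexture(
    data, count, 1, scene, false, false,
    BABYLON.Texture.NEAREST_SAMPLINGMODE,
    BABYLON.Engine.TEXTURETYPE_FLOAT
);
```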

Dude, you are a god… I have no clue how you notice all the finer points like that. <3

Thank you, sir. 1000 meshes, all with 3 levels of LOD, running at 5k absolute FPS on a GTX 640. Pretty cool.
