Offsetting world (to avoid z-fighting) when using Octrees

Hi, long time no see, quick question!

I’m working on a game with a very large world, so to avoid z-fighting and other precision issues, I need to do that thing where you keep the player/camera at the origin and move the world, rather than the other way around.

This is easy enough if I put all my world meshes under a parent mesh and move that around. However, I’m using octrees for culling, and as far as I can tell octreeBlocks live in world coordinates and cannot be parented.

Is the only solution for this to update all the octreeBlock bounding vectors every frame, to keep them aligned with the world offset? Or does BJS have any built-in magic to make it easy to avoid these kinds of precision issues?
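For what it’s worth, here is a minimal sketch of what that per-frame update could look like in plain JS. The block shape here (`minPoint`/`maxPoint` vectors plus a `blocks` array of children) is my assumption about what an octreeBlock roughly looks like; the real BJS `OctreeBlock` fields may differ, so treat this as an illustration only:

```javascript
// Shift one octree block's bounding box by the same offset applied to the
// world this frame, then recurse into any child blocks.
function shiftBlock(block, delta) {
  block.minPoint.x += delta.x; block.minPoint.y += delta.y; block.minPoint.z += delta.z;
  block.maxPoint.x += delta.x; block.maxPoint.y += delta.y; block.maxPoint.z += delta.z;
  if (block.blocks) {
    for (const child of block.blocks) shiftBlock(child, delta);
  }
}

// Shift every top-level block of the octree (and, via recursion, the whole tree).
function shiftOctree(octree, delta) {
  for (const block of octree.blocks) shiftBlock(block, delta);
}
```

You would call `shiftOctree(octree, worldDelta)` in the render loop with whatever offset you applied to the world parent that frame.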


Yup, I guess you are right: you would need to update every frame.

Would it be possible to use a kind of hybrid approach: let the camera move around within a zone, and when switching zones, recenter the camera in the new one, so the update only happens on area switches?
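That zone-recenter idea could be sketched like this. Everything here (`REBASE_DISTANCE`, the `camera`/`worldRoot` shapes) is hypothetical and not a BJS API; it just shows the rebasing step, assuming the camera drifts freely until it gets too far from the origin:

```javascript
// How far the camera may drift from the origin before we rebase (assumed value).
const REBASE_DISTANCE = 1000;

// If the camera has drifted too far, shift the world root the opposite way
// and snap the camera back to the origin. The scene looks identical, but
// coordinates near the camera stay small, preserving float precision.
function maybeRebase(camera, worldRoot) {
  const { x, y, z } = camera.position;
  if (Math.hypot(x, y, z) < REBASE_DISTANCE) return false;
  worldRoot.position.x -= x;
  worldRoot.position.y -= y;
  worldRoot.position.z -= z;
  camera.position.x = 0;
  camera.position.y = 0;
  camera.position.z = 0;
  return true; // caller can use this to also shift the octree bounds once
}
```

The return value lets you trigger the (now occasional) octree update only on the frames where a rebase actually happened.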

Thanks for confirming! Yes, I will probably do what you describe - just wanted to check if there was a built-in way to do it.

Not that I know of, at least, and looking at the code, I don’t see one either.

This post really sent my brain into max warp for about 45 seconds. What a rush! But then it became clear that I don’t see any way to currently avoid updating every frame, other than being clever in the way @sebavan is suggesting.

However, I would like to ask the forum whether they believe we are perhaps in need of a subtree addition to the Octree functionality. I’m guessing that very few of you know what a subtree is in relation to rendering. I never thought I’d say this, but time does have its rewards. When working on films in the 90s, my renderer of choice was Mental Ray (Mental Images, Germany). They implemented three methods for firing rays, and my #1 go-to was subtree rendering.

This is a method where you set a few parameters for dividing render space, which can be updated frame by frame. You would set the initial subdivision of the camera plane, and then, once a mesh was detected, how many further subdivisions from that root detection to sample in order to properly rasterize and shade the object.
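My rough understanding of that idea, as a toy sketch (this is not Mental Ray’s actual algorithm, and `hitTest` is a stand-in for whatever geometry-detection the renderer does): start from a coarse grid over the camera plane, and only subdivide cells where geometry is detected, down to a maximum depth.

```javascript
// Adaptively subdivide a square screen cell. Empty cells are skipped
// immediately; occupied cells are split into four children until maxDepth,
// at which point they are collected for sampling/shading.
function subdivide(cell, hitTest, maxDepth, out) {
  if (!hitTest(cell)) return; // nothing overlaps this cell: never refined
  if (cell.depth === maxDepth) {
    out.push(cell); // fine enough: sample/shade at this resolution
    return;
  }
  const half = cell.size / 2;
  for (const [dx, dy] of [[0, 0], [half, 0], [0, half], [half, half]]) {
    subdivide(
      { x: cell.x + dx, y: cell.y + dy, size: half, depth: cell.depth + 1 },
      hitTest, maxDepth, out
    );
  }
}
```

The appeal is that empty regions of the image cost almost nothing, while detail is concentrated only where geometry actually is.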

The code is well published, although quite an effort to implement in a renderer. But I’d love to see this in BJS, as this method often let me reduce an hour-per-frame render to 20 minutes. I know you must think these were lengthy render times, and they were. And this was using an SGI Challenge and Onyx. Man, do I miss those days.

Anyway, I welcome anyone’s thoughts… and especially if you’re new, don’t wait for the big guns to respond first.