Avoid spawning camera in mesh

In my scene, I have some meshes which are randomly placed and rather big. I have my camera set to a certain position, but sometimes there is a mesh at that position. How could I move the camera to the nearest / a close-by open space? Checking if it intersects with one of the meshes shouldn’t be a problem for me, but I have no clue how to figure out a place where I can put it without intersecting any mesh. Of course I could try random positions over and over until I find a good spot, but that seems dumb.

I don’t know the whole context but could you invert the problem and not allow the meshes to be placed at the Camera position?

Not a full answer, but can you use the mesh's bounding volume and place the camera outside of it? Can multiple meshes be placed next to each other? That could cause issues too.

Inverting the problem is a very smart idea, but sadly won’t work in my scene.

Some more context: I get the data for where to put the meshes from an API, so they’re fixed and different with each call. However, the positions are grouped into “clusters”, and I put one invisible collider-box around that cluster for collisions with the camera. I was planning to use those collider-boxes to check if the camera can spawn at a certain position.
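
The cluster-box check described above can be sketched with plain vector math, assuming axis-aligned boxes stored as min/max corners (the `Aabb` shape and function names here are hypothetical, not Babylon.js API):

```typescript
// Hypothetical axis-aligned collider box for one cluster, stored as min/max corners.
interface Aabb {
  min: number[]; // [x, y, z]
  max: number[];
}

function pointInsideAabb(p: number[], box: Aabb): boolean {
  // Inside only if the point is within the box extents on every axis.
  return p.every((v, i) => v >= box.min[i] && v <= box.max[i]);
}

function isFreeSpawn(p: number[], clusterBoxes: Aabb[]): boolean {
  // A spawn position is valid if it lies outside every cluster box.
  return !clusterBoxes.some((box) => pointInsideAabb(p, box));
}
```

In Babylon.js itself the same test could likely be done against each collider box's bounding info, but the idea is identical: reject the spawn point if any cluster box contains it.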

About the mesh bounding volume, that seems like it could work, I’ll look into it

What type of camera is it? I suppose for a free or custom camera, you could check the mesh boundaries and account for them when spawning the camera. Probably the same with an orthographic camera. The only question is whether you should subtract them from or add them to the camera position (in other words, whether the camera should always look at the mesh from the same angle). In the case of an arc rotate camera, I guess it would basically work the same, except that to calculate your distance radius you would first need the boundaries, then calculate the distance from the target and add or subtract this to your radius from the target.
Of course, all this assumes you have already tried the default behavior that keeps the camera from going through a mesh, and that it does not work when you spawn the camera right at the object's location? I haven't tried this, but I believe you did?
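
For the arc-rotate case mentioned above, the radius can be derived from the target's bounding size. A rough sketch, assuming a vertical field of view in radians and a bounding-sphere radius in world units (the function name and `margin` factor are made up for illustration):

```typescript
// Back the camera off far enough that a bounding sphere of the target
// fits the vertical field of view, with a small safety margin.
function framingRadius(boundingRadius: number, fov: number, margin = 1.1): number {
  return (boundingRadius / Math.tan(fov / 2)) * margin;
}
```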

So I tried this and got something that works, although it doesn't feel elegant. I have a mesh with thin instances spread throughout the scene, so I take the boundingInfo of that thin-instanced mesh and check if it intersects with the camera position. If it does, I move the camera 15 units in the opposite direction from the center of that boundingInfo.
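
The push-out step above can be sketched with plain vector math (a standalone sketch, not Babylon.js API; 15 is the same magic offset used above):

```typescript
// If the camera position is inside a bounding box, move it away from the
// box center along the center-to-camera direction by a fixed distance.
function pushOut(camera: number[], center: number[], distance = 15): number[] {
  const dir = camera.map((v, i) => v - center[i]);
  const len = Math.hypot(dir[0], dir[1], dir[2]) || 1; // avoid divide-by-zero at the exact center
  return camera.map((v, i) => v + (dir[i] / len) * distance);
}
```

Note the caveat the later replies get at: after pushing out, the new position may land inside a different mesh, so in a dense scene this step may need to be repeated or replaced by a real spatial search.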

It’s a UniversalCamera

I'm sorry, but I don't understand what you mean by default behaviour. About spawning at the object location: I have thousands of meshes, and they're not in any particular order. If I spawn the camera at the location of one of those meshes, it gets stuck inside that mesh. Also, I can't just move it 1 meter or so away from that mesh, since there might be another mesh there.

If you don't want to test all meshes, you'll have to use some kind of spatial data structure, like an octree (Octree - Wikipedia) or even a simple grid. You'd first find the cell containing the tentative camera position and check if that block has a mesh. If it doesn't, you're done. If it does (you can refine this step by also checking whether the position actually intersects the meshes inside that cell), you search the neighboring cells, and so on, until you find a free position.
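
The simple-grid variant of this search can be sketched as follows, assuming occupied cells have been bucketed into a set of string keys (names hypothetical; 2D here for brevity, the 3D version adds a z loop):

```typescript
// Search outward ring by ring from the tentative camera cell until a free
// cell is found; cells closer to the start were already checked in earlier rings.
function findFreeCell(
  occupied: Set<string>, // keys like "x,y" for cells containing a mesh
  startX: number,
  startY: number,
  maxRing = 50
): [number, number] | null {
  for (let ring = 0; ring <= maxRing; ring++) {
    for (let dx = -ring; dx <= ring; dx++) {
      for (let dy = -ring; dy <= ring; dy++) {
        // Only visit the boundary of the current ring.
        if (Math.max(Math.abs(dx), Math.abs(dy)) !== ring) continue;
        const x = startX + dx, y = startY + dy;
        if (!occupied.has(`${x},${y}`)) return [x, y];
      }
    }
  }
  return null; // no free cell within maxRing rings
}
```

Converting a world position to a cell is just integer division by the cell size, and the returned cell center can then be used as the spawn position (optionally refined by an exact intersection test against the meshes in that cell).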


Okay, great, that seems like what I need. I'll have to look into octrees and how to use them in my scene like you suggested, because it seems a bit advanced.

Sorry, I didn't make myself clear enough here. It's not really a default behavior; you need to enable it in your camera settings. What I wanted to say is that it's already built in, not a custom behavior.

As I said, I've never tried it when spawning a camera at a mesh's location.
Anyway, it looks like you found a solution. I'm glad you did, and I wish you a great day :sunglasses:

The good thing about octrees is that once you start using them you’ll see they’re useful for a looooooot of problems! :smiley:

Is there a guide on how to choose the best settings for the generated octree?
In the docs (Optimizing With Octrees | Babylon.js Documentation) I just see the default parameters for capacity and maxDepth. How do I tune them for better performance? Just random search?

The way the octree division works is that, if a single block is at maximum capacity and another mesh is added to it, the block is subdivided until maxDepth is reached: Babylon.js/octreeBlock.ts at master · BabylonJS/Babylon.js (github.com). This redistributes the existing meshes into the newly subdivided blocks: Babylon.js/octreeBlock.ts at c58538f51f8ae14aca487db993a64afab3efe285 · BabylonJS/Babylon.js (github.com). The deeper you subdivide your octree, the less "area" each block covers, so you'll have to do more tests to find which block a mesh belongs in; but once you find the correct block, there will probably be fewer meshes sharing it. So it depends on what you're doing with your meshes: if it's a very cheap operation, like checking a bounding box, a shallower octree will suffice. But if it's something expensive, like exact ray-mesh intersection, it's better to have a deeper octree with lower-capacity nodes.
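
To make the capacity/maxDepth trade-off concrete, here is a minimal point-octree sketch (an illustration only, not Babylon.js's implementation; points stand in for mesh centers): a block splits into 8 children when it exceeds `capacity`, unless it is already at `maxDepth`.

```typescript
type Point = { x: number; y: number; z: number };

class OctreeBlock {
  points: Point[] = [];
  children: OctreeBlock[] | null = null;

  constructor(
    private min: Point,
    private max: Point,
    private capacity: number,
    private depth: number,
    private maxDepth: number
  ) {}

  insert(p: Point): void {
    if (this.children) {
      this.childFor(p).insert(p);
      return;
    }
    this.points.push(p);
    // Over capacity and allowed to go deeper: subdivide and redistribute.
    if (this.points.length > this.capacity && this.depth < this.maxDepth) {
      this.subdivide();
    }
  }

  private mid(): Point {
    return {
      x: (this.min.x + this.max.x) / 2,
      y: (this.min.y + this.max.y) / 2,
      z: (this.min.z + this.max.z) / 2,
    };
  }

  private subdivide(): void {
    const m = this.mid();
    this.children = [];
    for (let i = 0; i < 8; i++) {
      // Bit i selects the low or high half on each axis.
      const min = {
        x: i & 1 ? m.x : this.min.x,
        y: i & 2 ? m.y : this.min.y,
        z: i & 4 ? m.z : this.min.z,
      };
      const max = {
        x: i & 1 ? this.max.x : m.x,
        y: i & 2 ? this.max.y : m.y,
        z: i & 4 ? this.max.z : m.z,
      };
      this.children.push(
        new OctreeBlock(min, max, this.capacity, this.depth + 1, this.maxDepth)
      );
    }
    // Redistribute existing points into the new children.
    const existing = this.points;
    this.points = [];
    for (const p of existing) this.childFor(p).insert(p);
  }

  private childFor(p: Point): OctreeBlock {
    const m = this.mid();
    const i =
      (p.x >= m.x ? 1 : 0) | (p.y >= m.y ? 2 : 0) | (p.z >= m.z ? 4 : 0);
    return this.children![i];
  }

  depthOf(p: Point): number {
    // Depth of the leaf block that would hold p.
    return this.children ? this.childFor(p).depthOf(p) : this.depth;
  }
}
```

With a low capacity, clustered points drive subdivision deeper (until maxDepth caps it), which is exactly the trade-off described above: more traversal steps to reach a leaf, but fewer candidates per leaf once you get there.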
