I am about to start making mechanics of “waking up” monsters when they see the player. This should happen only when the player is somewhere in front of a monster’s face, but not behind it.
Let’s decompose the task a little. We have a point in space that represents the monster’s eyes, and that point has a field of view. As I understand it, a field of view can be described by an angle (horizontal and vertical, but let’s start with horizontal for now) and a direction (where the monster is currently looking). We need to detect whether some point (aka
player.position) is within that field of view or not.
It seems like a math task, and I’m sure I could find or re-invent the formulas on my own if I had to, but I’m wondering if someone has already done something like this with BabylonJS, or maybe there are even built-in functions for it? The problem seems very common, so I’d like to know the best practices before implementing it myself.
Alternatively, if you just want to copy & paste code: FOV / Field of Vision for NPC AI
Yuka is worth a look in any case!
I figured it out and designed my own playground just to confirm that I got it. https://playground.babylonjs.com/#Z3JGUX
My playground actually implements several related mechanics.
The function lookAtPoint turns the mesh to look at the provided point.
The function isInFov checks whether something is inside the FOV described by the provided parameters. The function accepts:
fovDirection - the direction from the center of the FOV,
fovAngle - the FOV angle in radians,
directionToTarget - the direction to the target from the center of the FOV of interest.
directionToTarget should be pre-computed; however, in real-life scenarios the monster’s direction can be a property of the monsterInstance class (especially if monster movement is already implemented), so that value can be read directly. In my playground I compute it manually, based on the point the monster is looking at.
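For readers who land here without opening the playground: the core of such a check is usually a dot-product comparison. A minimal self-contained sketch (plain JS objects rather than BabylonJS Vector3, and my own implementation rather than the exact playground code, though the parameter names match the description above):

```javascript
// Minimal sketch of the FOV test described above (my reconstruction, not the
// exact playground code). Vectors are plain { x, y, z } objects; in BabylonJS
// you would use Vector3 with its Dot/Normalize helpers instead.

function normalize(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

function dot(a, b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// fovDirection      - where the monster is facing (need not be normalized)
// fovAngle          - full horizontal FOV in radians
// directionToTarget - vector from the monster's eyes to the target
function isInFov(fovDirection, fovAngle, directionToTarget) {
  const cosToTarget = dot(normalize(fovDirection), normalize(directionToTarget));
  // The target is inside the cone if the angle to it is at most half the FOV.
  return cosToTarget >= Math.cos(fovAngle / 2);
}
```

The dot product of two unit vectors is the cosine of the angle between them, so no explicit `acos` is needed; comparing cosines directly is cheaper and avoids edge cases at 0 and π.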
The function checkIfPlayerIsVisible checks that the player is in the FOV and that there are no obstacles between the player and the monster (like a wall). The helper function
getClosestHit also accepts an array of exceptions that should not count as blockers: for example, the monster’s own mesh and any objects assumed to be transparent, like glass windows, floating medkits, etc.
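The exception handling can be done either by passing a predicate to the raycast so excluded meshes are never considered, or by filtering the hit list afterwards. A rough sketch of the latter, assuming each hit carries a pickedMesh and a distance (as a BabylonJS PickingInfo does); this is my own reconstruction, not the playground code:

```javascript
// Sketch of exception-aware closest-hit selection (my reconstruction, not the
// playground code). `hits` is an array of { pickedMesh, distance } objects,
// similar to BabylonJS PickingInfo; `exceptions` lists meshes to ignore
// (the monster's own mesh, glass windows, floating medkits, ...).
function getClosestHit(hits, exceptions) {
  let closest = null;
  for (const hit of hits) {
    if (exceptions.includes(hit.pickedMesh)) continue; // not a blocker
    if (closest === null || hit.distance < closest.distance) {
      closest = hit;
    }
  }
  return closest;
}

// The player is "visible" if the closest non-excepted hit is the player itself.
function isPlayerHitFirst(hits, exceptions, playerMesh) {
  const closest = getClosestHit(hits, exceptions);
  return closest !== null && closest.pickedMesh === playerMesh;
}
```

In BabylonJS the predicate approach looks like `scene.pickWithRay(ray, mesh => !exceptions.includes(mesh))`, which skips excluded meshes during picking instead of filtering afterwards.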
MultiMaterial example (bonus). I needed to represent the monster’s face somehow, so I chose this approach.
The downside of the approach is that the player is visible only if its origin (registration point) is visible. So if the player is very wide and part of it is in the FOV, but the origin isn’t, the player is considered not visible. Workaround: in addition to the player’s origin, calculate directions to all player vertices and check them as well. If the number of vertices is large, the bounding box can be used instead (8 vertices). This approach can still be exploited, though: for example, if all vertices are covered and only some central part of the edges/faces is visible. But in my opinion, for first-person shooters we can just use the player camera origin, because it’s a bit frustrating if an NPC can see the player while the player can’t see the NPC (even after turning the head in the right direction).
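The bounding-box workaround can be sketched like this (again my own reconstruction in plain JS; in BabylonJS the eight world-space corners are also available directly via `mesh.getBoundingInfo().boundingBox.vectorsWorld`):

```javascript
// Sketch of the bounding-box workaround described above (my reconstruction,
// not playground code). `min`/`max` are the world-space extremes of the
// player's bounding box.
function getBoundingCorners(min, max) {
  const corners = [];
  for (const x of [min.x, max.x])
    for (const y of [min.y, max.y])
      for (const z of [min.z, max.z])
        corners.push({ x, y, z });
  return corners; // the 8 corners of the box
}

// The player counts as "in FOV" if at least one corner passes the same
// dot-product test used for the origin.
function isAnyCornerInFov(eyes, fovDirection, fovAngle, min, max) {
  const normalize = v => {
    const len = Math.hypot(v.x, v.y, v.z);
    return { x: v.x / len, y: v.y / len, z: v.z / len };
  };
  const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
  const facing = normalize(fovDirection);
  const cosHalfFov = Math.cos(fovAngle / 2);
  return getBoundingCorners(min, max).some(corner => {
    const toCorner = normalize({
      x: corner.x - eyes.x,
      y: corner.y - eyes.y,
      z: corner.z - eyes.z,
    });
    return dot(facing, toCorner) >= cosHalfFov;
  });
}
```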
Great findings, thanks for sharing @splash27!
I would say that for the too-wide problem, unless you visually show the FOV to the player, they’ll never notice. Anyway, I think the change I am going to make will be like this: run the FOV check against the center. If it fails, iterate the bounding box corners (min and max, each with y at center.y?). On a hit, return true and use that bbox corner(!!) for the following raycast.
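That two-stage plan might look roughly like this (the function name and wiring are mine, not actual thread code; `isInFov(fovDirection, fovAngle, directionToTarget)` is passed in so any implementation, such as the dot-product one from the playground, can be plugged in):

```javascript
// Sketch of the two-stage check described above (my own names). Returns the
// point to aim the follow-up raycast at, or null if nothing is in the FOV.
function pickVisibilityTarget(eyes, fovDirection, fovAngle, center, bboxCorners, isInFov) {
  const toward = p => ({ x: p.x - eyes.x, y: p.y - eyes.y, z: p.z - eyes.z });

  // 1. Cheap check against the player's center first.
  if (isInFov(fovDirection, fovAngle, toward(center))) return center;

  // 2. Fall back to the bounding-box corners; the corner that passes the FOV
  //    test is also the point the follow-up raycast should target.
  for (const corner of bboxCorners) {
    if (isInFov(fovDirection, fovAngle, toward(corner))) return corner;
  }

  return null; // nothing in FOV, skip the raycast entirely
}
```

Returning the passing corner (rather than just `true`) is what lets the later raycast aim at a point that is actually inside the cone.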
The partly-covered issue is extremely annoying (think of partly transparent objects or holes), but it could be resolved by using the GPU. Just a quick google