I’m currently prototyping a 2D arcade space shooter with BabylonJS. For proper presentation, I chose an orthographic camera to get a top-down view, which works really well.
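For reference, a top-down orthographic setup in BabylonJS roughly looks like the sketch below. The `orthoBounds` helper and the `viewHeight` parameter are my own illustration (not from the original post); the BabylonJS calls (`ORTHOGRAPHIC_CAMERA`, `orthoLeft`/`orthoRight`/`orthoTop`/`orthoBottom`) are the standard API, assuming `scene` and `engine` already exist:

```javascript
// Compute symmetric orthographic frustum bounds for a desired
// world-space view height, keeping the canvas aspect ratio.
function orthoBounds(viewHeight, aspect) {
  const halfH = viewHeight / 2;
  const halfW = halfH * aspect;
  return { left: -halfW, right: halfW, top: halfH, bottom: -halfH };
}

// Usage with BabylonJS (assumes `scene` and `engine` exist):
// const camera = new BABYLON.FreeCamera("cam", new BABYLON.Vector3(0, 0, -10), scene);
// camera.mode = BABYLON.Camera.ORTHOGRAPHIC_CAMERA;
// const b = orthoBounds(20, engine.getAspectRatio(camera));
// camera.orthoLeft = b.left;  camera.orthoRight = b.right;
// camera.orthoTop = b.top;    camera.orthoBottom = b.bottom;
// camera.setTarget(BABYLON.Vector3.Zero());
```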
The problem I’m currently investigating is the collision system. To detect collisions properly, I align all objects on the same Z-level. Because of this, and because the player should be able to fly “over” other objects, I arrange the game objects in different rendering groups. But how does the use of rendering groups affect the performance of the game?
Also, with this solution it’s barely possible to have an object whose parts belong to different rendering groups, e.g. a space station where the landing pad is below the player but other parts should be above it. I tried to overcome this by using only one rendering group and placing the different objects on different Z-levels, but then collision detection is no longer straightforward. I came up with the idea of having some kind of shadow objects that all sit on the same Z-level, but then I’d have to apply every rotation/position change of the original object to the shadow object as well, which I guess is even more complex to calculate.
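The shadow-object syncing is actually less work than it sounds, at least for a top-down game. A minimal sketch (the object shapes are placeholders, not BabylonJS meshes; in BabylonJS you’d read/write `mesh.position` and `mesh.rotation` the same way):

```javascript
// Mirror a rendered object with a flat collision proxy pinned to z = 0.
// In a top-down 2D shooter only x/y position and rotation around Z matter,
// so a per-frame copy is just three assignments plus the forced Z-level.
function syncShadow(original, shadow) {
  shadow.position.x = original.position.x;
  shadow.position.y = original.position.y;
  shadow.position.z = 0;                    // force all shadows onto one Z-level
  shadow.rotation.z = original.rotation.z;  // only the Z rotation is relevant top-down
}
```

You’d call this once per frame (or on transform changes) for each object/shadow pair before running collision checks.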
Here’s my answer in another thread regarding the rendering groups:
Aren’t your meshes in 3D space? If so, the z-buffer should handle the correct rendering order of your meshes without any need for rendering groups or tricks with the z component. That way, you could use the regular collision detection system.
Thanks for your answer! Yes, this would work for flying over and under objects, but then the firing mechanism and the collision handling of bullets and objects become an interesting topic. One solution could be to attach very tall (in terms of z extent) invisible boxes to the bullets, so they would hit anything on the x/y plane, no matter what its z-value is.
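The “very tall invisible box” trick effectively reduces the bullet test to a 2D overlap check that ignores z entirely. A sketch with axis-aligned min/max rectangles (plain objects, my own naming, not a BabylonJS API):

```javascript
// 2D axis-aligned overlap test on x/y only — equivalent to giving every
// bullet an infinitely tall collision box along z.
function hitsIgnoringZ(a, b) {
  return a.maxX >= b.minX && a.minX <= b.maxX &&
         a.maxY >= b.minY && a.minY <= b.maxY;
}
```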
I guess I have to figure out which of the two solutions fits my needs better and benchmark their performance impact.
You could project the 8 vertices of the bounding box and take the min/max of x/y for a crude 2D bounding rectangle. If you need more precision and pixel-perfect collision detection, then I don’t see an easy way to do that… I think the best way would still be to have your objects in 3D with correct z positioning so the regular collision system works. If that creates rendering-order problems the z-buffer can’t handle, maybe split some objects and dispatch each part into a different rendering group?
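The min/max projection suggested above can be sketched like this. In BabylonJS you’d get the 8 world-space corners from `mesh.getBoundingInfo().boundingBox.vectorsWorld`; here they are plain `{x, y}` points for illustration:

```javascript
// Collapse the 8 bounding-box vertices onto the x/y plane and take the
// min/max per axis — a crude but cheap 2D bounding rectangle.
function boundingRect2D(vertices) {
  const xs = vertices.map(v => v.x);
  const ys = vertices.map(v => v.y);
  return {
    minX: Math.min(...xs), maxX: Math.max(...xs),
    minY: Math.min(...ys), maxY: Math.max(...ys),
  };
}
```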
Thanks for showing me the options and for clarifying the rendering groups topic. I think for now I’ll stick with different rendering groups (then keeping every object properly aligned on the z-axis and possibly running into issues with the camera’s maximum distance are two things I don’t have to care about). Only if I run into severe performance problems will I try the other approach of stacking the objects on the z-axis with huge bounding boxes for the bullets, to allow easier collision detection no matter where the objects are.
One side question regarding bounding boxes and collisions: currently, I define an additional mesh for every object that is less detailed than the object itself but more detailed than the rough bounding box, and I use these meshes for collision detection. Using a “real” 3D space would give me the opportunity to use moveWithCollisions, which avoids ghosting (i.e. an object that is too fast passing straight through another instead of hitting it), am I right?
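As a side note: if ghosting/tunneling is the concern, one simple alternative to `moveWithCollisions` that also works with plain intersection tests is to subdivide fast movement into substeps and test at each one. A sketch (the `collides` predicate and `maxStep` parameter are my own, hypothetical names, not BabylonJS API):

```javascript
// Move a point along its velocity in substeps no longer than maxStep,
// testing for collision at each substep, so a fast object can't skip
// over a thin obstacle in a single frame.
function moveWithSubsteps(pos, velocity, maxStep, collides) {
  const dist = Math.hypot(velocity.x, velocity.y);
  const steps = Math.max(1, Math.ceil(dist / maxStep));
  for (let i = 0; i < steps; i++) {
    const next = { x: pos.x + velocity.x / steps, y: pos.y + velocity.y / steps };
    if (collides(next)) return pos; // stop just before penetrating
    pos = next;
  }
  return pos;
}
```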
I would add ghost objects parented to the various rendered meshes.
So, you can have a smaller bounding box for the ship than what is displayed.
Then, I would check the various ghost meshes against each other with intersectsMesh (ship ghost vs. bullets, bullets vs. enemies, …)
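The per-frame checks could be structured like this. `intersectsMesh(other, precise)` matches BabylonJS’s `AbstractMesh` API; the objects carrying a `.ghost` property are my own illustrative shape, not something from the engine:

```javascript
// Test every bullet's ghost mesh against every target's ghost mesh and
// report hits via a callback. With many objects you'd want broad-phase
// culling first, but the basic loop is this simple.
function checkHits(bullets, targets, onHit) {
  for (const bullet of bullets) {
    for (const target of targets) {
      if (bullet.ghost.intersectsMesh(target.ghost, false)) {
        onHit(bullet, target);
      }
    }
  }
}
```

Passing `false` as the second argument uses the cheap bounding-sphere/box test; `true` would use the oriented bounding box for a more precise (but slower) check.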