I have made an arrow by merging two cylinders, one for the shaft and one for the arrow head. You can see the basic result here:
What I intend to do later on is to move this arrow through space by translations and rotations. As the scene progresses, I will lose track of exactly where the “tail” (or “start”) position of the arrow is, and where the “head” (or “end”) position is. But at certain moments I may want to get the start and end positions of the arrow, i.e. the point at the arrow tip and the point at the other end, at the base of the cylinder.
Is there a way to do this merely by calling on the merged mesh? Does this mesh keep track of “which end is which”?
I also just can’t seem to find out how to take the merged mesh and get its components back – I’m not sure if I’m just not finding it in the documentation or if it’s not supposed to be possible. I was thinking that, if I couldn’t find another way, I might just use the locations of the head and shaft to infer the start and end positions.
Once meshes are merged, you can’t access them individually, as the MergeMeshes call creates a single mesh. Note that if you enable multi-material support when making this call, you will end up with as many sub-meshes as there are different materials used by the meshes you are merging.
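For reference, here’s a minimal sketch of that merge (the mesh names, sizes, and the `scene` variable are placeholders); the last optional argument of MergeMeshes is the multi-material flag mentioned above:

```js
// Build the two parts: a shaft and a cone-shaped head (a cylinder with
// diameterTop of 0), stacked along the Y axis.
const shaft = BABYLON.MeshBuilder.CreateCylinder("shaft", { height: 2, diameter: 0.2 }, scene);
const head = BABYLON.MeshBuilder.CreateCylinder("head", { height: 0.5, diameterTop: 0, diameterBottom: 0.5 }, scene);
head.position.y = 1.25; // sit the head on top of the shaft

// Plain merge: one mesh; the sources are disposed (second argument),
// and there is no way to get the parts back afterwards.
const arrow = BABYLON.Mesh.MergeMeshes([shaft, head], true);

// With the multi-material flag (last argument), you instead get one
// sub-mesh per distinct material – but still a single mesh object:
// const arrow = BABYLON.Mesh.MergeMeshes([shaft, head], true, true, undefined, false, true);
```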
Anyway, in your case you know what you are merging and the initial position of the elements, so if, for example, the initial direction of the arrow is the vector (0,0,1), then you simply have to transform this vector by the world matrix of your mesh to get the direction of the transformed mesh.
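Concretely (assuming the merged mesh is called `arrow`; note that Babylon builds cylinders along the Y axis, so for an arrow assembled as sketched above the local direction would be (0,1,0) rather than (0,0,1)), Vector3.TransformNormal applies the rotation and scale of a matrix while ignoring its translation, which is what you want for a direction as opposed to a point:

```js
// The direction the arrow was modeled along in local space – use
// whatever axis your arrow was actually built on.
const localDirection = new BABYLON.Vector3(0, 1, 0);

// Transform by the current world matrix to get the direction in world space.
const worldDirection = BABYLON.Vector3.TransformNormal(
    localDirection,
    arrow.getWorldMatrix()
).normalize();
```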
So if I understand the solution correctly, then you basically need to maintain a pair: the mesh and its direction vector (perhaps even by making an object with both as properties, so that they’re always bound together)? And whenever you transform the arrow you also transform the vector, so that if you ever need to get the direction of the mesh you look at the direction of the vector?
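Just to make my understanding concrete, a sketch of that pairing might look like this (class and method names made up):

```js
class DirectedArrow {
  constructor(mesh, direction) {
    this.mesh = mesh;           // the merged arrow mesh
    this.direction = direction; // its current direction in world space
  }

  // Route every rotation through one method so the mesh and the
  // bookkeeping vector always stay in sync.
  rotate(axis, angle) {
    this.mesh.rotate(axis, angle, BABYLON.Space.WORLD);
    const rotation = BABYLON.Matrix.RotationAxis(axis, angle);
    this.direction = BABYLON.Vector3.TransformNormal(this.direction, rotation);
  }
}
```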
Hm, how would this alternative work? It sounds like, rather than recomputing at every step, you somehow memoize the sequence of transformations, and only when you need the direction do you compute the accumulated transformation?
Like say I send this arrow orbiting around a point in an ellipse, always pointing to the center. Maybe I do this to exhibit the force of gravity on a planet. Then maybe I want to grab its tail and tip coordinates at some moment, either to display them or whatever. You’re suggesting not to compute these coordinates at every frame of animation, since they’re not used at every frame. But instead to represent all of the transformations as a matrix product of the accumulated transformations – and then compute only when the tail and tip coordinates are needed?
I’m probably not fully seeing what this suggestion is.
I’m not sure about your use case, but the world matrix of a mesh contains all the transformations used to transform the mesh from local to world space, so it’s “up to date” at all times and you can use it to transform the direction vector so that it points in the same direction as the mesh.
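Concretely, assuming the arrow was modeled with its tail at the local origin and its tip at local (0, 2.5, 0) (substitute your own local coordinates), you can recover the world-space tail and tip at any moment like this:

```js
// The tail and tip in the arrow's local space – fixed once, at build time.
const localTail = new BABYLON.Vector3(0, 0, 0);
const localTip = new BABYLON.Vector3(0, 2.5, 0);

// Only when you actually need the coordinates:
arrow.computeWorldMatrix(true); // force a refresh if the mesh just moved
const worldMatrix = arrow.getWorldMatrix();

// TransformCoordinates includes translation, so it maps points
// (unlike TransformNormal, which maps directions).
const tail = BABYLON.Vector3.TransformCoordinates(localTail, worldMatrix);
const tip = BABYLON.Vector3.TransformCoordinates(localTip, worldMatrix);
```

No per-frame bookkeeping is needed: the world matrix already is the accumulated product of all the transformations applied to the mesh.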