Yes, I quickly checked something like that. But then you have to make sure it also works correctly when switching back to the other camera. I suppose you would need to store these offset values and pass them on to the camera. I’m also not sure that the ArcRotateCamera’s alpha and beta values stay the same (since they work from the target).
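Something like this is what I had in mind for storing the values — arcCamera, freeCamera, scene and canvas are just placeholder names here, not the ones from the PG:

```javascript
// Placeholder names (arcCamera, freeCamera) -- adapt to the PG's actual setup.
let savedArcState = null;

const switchToFree = () => {
    // Stash the ArcRotateCamera's state before leaving it.
    savedArcState = {
        alpha: arcCamera.alpha,
        beta: arcCamera.beta,
        radius: arcCamera.radius,
        target: arcCamera.target.clone(), // clone so later changes don't touch the saved copy
    };
    arcCamera.detachControl(canvas);
    scene.activeCamera = freeCamera;
    freeCamera.attachControl(canvas, true);
};

const switchToArc = () => {
    // Restore the stored values when coming back.
    if (savedArcState) {
        // Set the target first: setTarget can recompute alpha/beta/radius
        // from the current position, so assign those afterwards.
        arcCamera.setTarget(savedArcState.target.clone());
        arcCamera.alpha = savedArcState.alpha;
        arcCamera.beta = savedArcState.beta;
        arcCamera.radius = savedArcState.radius;
    }
    freeCamera.detachControl(canvas);
    scene.activeCamera = arcCamera;
    arcCamera.attachControl(canvas, true);
};
```

I think camera.storeState() / camera.restoreState() could also cover part of this bookkeeping for the ArcRotateCamera, but I haven’t tried it here.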
You could probably use something like Vector3.TransformCoordinates(arcCamera.target, arcCamera.getViewMatrix()); to get the ArcRotateCamera’s target as an offset in camera space and then apply that to your UniversalCamera’s position (target is already a Vector3, so it doesn’t need a .position). Based on your PG’s code though, it might be good to either clone the camera positions or copy the float values rather than assigning one camera’s position vector to the other, so you don’t clobber the other camera’s position.
This function call is effectively what AbstractMesh.getPositionInCameraSpace is doing.
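To make both points concrete, here’s a minimal sketch, assuming the two cameras are called arcCamera and universalCamera — swap in the names from your PG:

```javascript
// The ArcRotateCamera's target expressed in its own camera (view) space.
// Note: target is already a Vector3, so no .position here.
const offset = BABYLON.Vector3.TransformCoordinates(arcCamera.target, arcCamera.getViewMatrix());

// If the target were a mesh, this would be roughly equivalent:
// const offset = targetMesh.getPositionInCameraSpace(arcCamera);

// Copy values instead of assigning the same Vector3 to both cameras,
// otherwise moving one camera silently moves the other.
universalCamera.position.copyFrom(arcCamera.position);
universalCamera.setTarget(arcCamera.target.clone());
```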