I hope you’re doing well!
I’m trying to create a simple application with a cylinder modeled in Blender and exported to OBJ.
My goal is to align this cylinder (wireframe in the image below) with a 2D image in the background that contains a mug so that I can follow the mug’s shape as faithfully as possible.
However, I can’t find a technique to get the perspective right. Even after trying various values for the camera’s Projection Tilt option, the cylinder never correctly follows the image: if I align it at the top, it’s off at the bottom, and so on.
I’ve also tried the camera in orthographic mode, without success (I suspect this mode is wrong anyway, since the background photo has perspective).
Is there any other mechanism besides Projection Tilt that allows me to manipulate this perspective for better results?
Thank you very much in advance for the help!
Hi, please try a UV map if you can customize your object.
I’m not sure if this question is Babylon related, but I’m going to try to answer anyways. Also note that I’m not an expert in this area, but I did some side projects trying to do something similar and have done some research.
This is going into the realm of computer vision (CV), since you are more or less trying to figure out the camera extrinsics and intrinsics in order to place a 3D object so that it matches the image. This isn’t simple. People usually calibrate a camera with some kind of calibration image, or use an automated algorithm, to figure out the extrinsics and intrinsics. If we can assume this image was produced by a calibrated camera, then you can use PnP (Perspective-n-Point): pick some 3D points on the cup and their matching 2D points in the image to solve for the extrinsics. That said, it looks like you are trying to find these parameters manually, which will likely take a lot of trial and error.
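To make those terms concrete, here is a minimal sketch of the pinhole camera model in plain JavaScript (not Babylon-specific; all numbers are illustrative): the intrinsics (focal lengths `fx`/`fy`, principal point `cx`/`cy`) and the extrinsics (rotation `R`, translation `t`) together determine where a 3D point lands in the image, which is exactly what you are tuning by hand.

```javascript
// Pinhole projection sketch. K = intrinsics, R/t = extrinsics.
// Matching the mug means finding K, R, t such that projected points
// on the cylinder land on the mug's outline in the photo.
function project(point, K, R, t) {
  // Camera-space coordinates: Xc = R * X + t
  const xc = R[0][0] * point[0] + R[0][1] * point[1] + R[0][2] * point[2] + t[0];
  const yc = R[1][0] * point[0] + R[1][1] * point[1] + R[1][2] * point[2] + t[1];
  const zc = R[2][0] * point[0] + R[2][1] * point[1] + R[2][2] * point[2] + t[2];
  // Perspective divide, then apply focal length and principal point from K
  return [K.fx * (xc / zc) + K.cx, K.fy * (yc / zc) + K.cy];
}

// Illustrative numbers: identity rotation, camera 5 units back, 640x480 image
const K = { fx: 500, fy: 500, cx: 320, cy: 240 };
const R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
const t = [0, 0, 5];
console.log(project([0, 0, 0], K, R, t)); // → [320, 240], the image center
```

Because every one of these parameters shifts all projected points at once, hand-tuning one part of the outline tends to break another, which matches the top-aligned/bottom-misaligned behavior described above.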
@nasimiasl thank you for responding, I believe that, unfortunately, UV mapping would not solve this case because the only 3D element is the cylinder… the mug is just a 2D background image (I’m trying to apply an image to the cylinder to mix it with the background image of the mug)
@bghgary Thank you very much for the valuable information.
As you said, maybe the best way to make it work is to take the photo myself and include reference elements that help me identify the positioning.
However, I wondered if creating something with any image would be possible. I’ve been watching videos of how this is done in Photoshop, and there’s a tool called the Warp Cylinder. I believe that if I can reproduce the operation of this tool, maybe it will be possible to create something like this:
Once again, thank you very much. I will calmly read the articles you sent me to understand better how to proceed.
Have a good weekend!
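For what it’s worth, the core of that Photoshop-style cylinder warp can be sketched in a few lines: per scanline, the visible half of a cylinder maps screen position to texture coordinate through an arcsine. This is a hedged illustration of the idea, not the actual Photoshop algorithm:

```javascript
// For a horizontal position x in [-1, 1] across the cylinder's silhouette,
// recover the texture coordinate u in [0, 1] that wraps the visible half
// of the cylinder: x = sin(theta), so theta = asin(x).
function cylinderU(x) {
  const clamped = Math.max(-1, Math.min(1, x));
  return Math.asin(clamped) / Math.PI + 0.5; // theta in [-PI/2, PI/2] → u in [0, 1]
}

console.log(cylinderU(0)); // → 0.5, label center at the silhouette center
console.log(cylinderU(1)); // → 1, texture compressed toward the rim
```

The compression toward the edges (equal steps in x cover more of u near the rim) is what gives the warped label its cylindrical look, but note this only handles the horizontal wrap, not the elliptical top and bottom edges that perspective adds.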
Seems possible, but it is very manual and will probably be trickier with different perspectives. The referenced YouTube example is easier because the camera is centered on the object.
If you have control over the camera being used, maybe marker tracking will be good enough? Just place the marker where the object is going to sit and get the object’s orientation from the marker. @RaananW and I showed a demo in a presentation on this last year that may help.
If you can make a cylinder but can’t make a UV map for it, you can use this system. This may help you: in this sample I derive a new UV based on a cylindrical system. You can control the position with:
```glsl
float scaleY = 1.;
float scaleX = 0.5;
float posY = 0.5;
float posX = -0.5;
```
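For illustration, here is the same cylindrical-UV idea in plain JavaScript (the parameter names mirror the GLSL floats above; the mapping itself is the standard cylindrical projection, so treat it as a sketch rather than the exact shader):

```javascript
// Derive UVs from the vertex position itself: the angle around the
// cylinder's axis becomes U, the height along the axis becomes V.
// scaleX/scaleY stretch the texture; posX/posY slide it.
function cylindricalUV(px, py, pz, { scaleX = 0.5, scaleY = 1.0, posX = -0.5, posY = 0.5 } = {}) {
  const u = (Math.atan2(pz, px) / (2 * Math.PI) + 0.5) * scaleX + posX;
  const v = py * scaleY + posY;
  return [u, v];
}
```

For example, a vertex at (1, 0, 0) with these defaults gives [-0.25, 0.5]; a negative U simply wraps around the texture.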
@bghgary Wow, I hadn’t thought about that. I’m going to do some experiments. I just need to figure out a way to later remove the marker from the images (since my ultimate goal is to apply the technique to videos). But for static images it already works, because I can take one photo with the marker and another without.
Once again, thanks a lot for your help.
If the camera is in a fixed location, you can remove the marker after getting the marker’s orientation, then take the picture.
EDIT: I just reread your comment and you already said this, lol.
For video, maybe you can put an occluding plate or something similar as part of the bottom of the model such that it hides the marker completely.
@bghgary Thank you very much; I’ll try it.
EDIT: I managed to create something like Photoshop with a simple distortion shader, but it is very limited. I believe something fully 3D will be much better.
Is there any way I can capture the camera configuration (perspective, position, etc.) from an AR scene and apply it to another scene?
I’m not sure if this is correct, but I’m thinking of the following:
- with the camera fixed, I place a tracking image and use AR mode
- with the same fixed camera, I take a picture of the mug
- I create a scene with a cylinder
- in some way that I still don’t know, I apply the camera data obtained with the tracking image via AR to this scene with the cylinder, giving it the correct perspective so I can use the cylinder to project my images (then I imagine I just need to adjust the cylinder’s position and scale, since the perspective is now correct)
Does it make sense that way?
EDIT2: I saw in your demo (the fan with AR) that you use setPreTransformMatrix to apply the transformations on the root node. As I understand it, this method applies a matrix before all other transformations. Does this mean that the perspective of the object, calculated from image tracking, is carried by this matrix? If so, I could just store the matrix and apply it later to the object, correct? (Though I’m not sure about the camera in this case.)
@RaananW maybe can answer this better than me.
The AR (well, XR) marker tracking is kind of a black box for us. We say: with the current camera transformation, give me the transformation of the tracked marker, assuming the marker is N units long (I believe it is cm, but it doesn’t change much if it is m).
The way you apply this transformation to a specific object is your call - you can pass it to a different scene (as it is just a 4x4 matrix). Just be sure that the camera in the other scene has the same properties as the camera in the AR scene, as this transformation is relative to the current XR perspective.
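To sketch what “just a 4x4 matrix” means in practice, here is a plain-JavaScript illustration using flat 16-element arrays in Babylon’s row-major layout (translation at indices 12–14); the marker values are made up:

```javascript
// A tracked-marker transform is a 4x4 matrix, so it can be copied out of
// the AR scene and re-applied elsewhere, e.g. as a pre-transform.
function getTranslation(m) {
  return [m[12], m[13], m[14]];
}

// c = a * b for row-major flat 4x4 arrays (row-vector convention: a applies first)
function multiply(a, b) {
  const c = new Array(16).fill(0);
  for (let row = 0; row < 4; row++)
    for (let col = 0; col < 4; col++)
      for (let k = 0; k < 4; k++)
        c[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
  return c;
}

// Made-up marker pose: identity rotation, 0.1 units right and 0.3 units forward
const marker = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0.1, 0, 0.3, 1];
console.log(getTranslation(marker)); // → [0.1, 0, 0.3]
```

A pre-transform composed this way is applied before the node’s own local transforms, which is why the constraint above matters: the receiving scene’s camera must match the AR camera’s properties for the pose to look right.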
I hope I answered your question, and I will be happy to explain anything else if I haven’t.
Hello @RaananW , thank you very much for the explanation. As I understand it, I need to determine my camera’s parameters (extrinsics, intrinsics, etc.) before applying the transformation matrix, correct?
I’m going to do some experiments with this.
Thank you in advance for your attention.