Screen Pixels to Camera Distance

I have been looking at how to set the camera distance using some world-tracked points in screen space, and found the following formula:

source
D′ = (W × F) / P, i.e. camera.position.z = (avg iris width 11.7 mm × camera focal length 3.56 mm) / iris width on screen in pixels

The formula depends on having a known width (in this case the distance between the eyes) and converting it to a distance in mm using the camera focal length and the number of on-screen pixels. The result works relatively well at a square aspect ratio, but the error swings from negative to positive depending on the canvas aspect ratio.
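In code form, the relation above is roughly the following (the constants are the values quoted; everything else is just a placeholder name):

```ts
// D' = (W x F) / P, using the values quoted above.
const AVG_IRIS_WIDTH_MM = 11.7;   // W: average iris width, in mm
const CAMERA_FOCAL_LEN_MM = 3.56; // F: webcam focal length, in mm (as quoted)

// Note: for a strictly metric result the focal length would normally be
// expressed in pixels (focalMm * imageWidthPx / sensorWidthMm), which needs
// the sensor width, another intrinsic the browser does not expose.
function estimateCameraZ(irisWidthPx: number): number {
  return (AVG_IRIS_WIDTH_MM * CAMERA_FOCAL_LEN_MM) / irisWidthPx;
}

// usage: camera.position.z = estimateCameraZ(sceneIrisWidthPx);
```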

Is there a better value to use than my webcam's focal length to determine a scene factor for my on-screen width? It would be helpful to be able to generalize the camera focal length, as it is not available for query in the browser.

r

I wonder - the formula doesn't have any relation to the scene's height, so why do you get different results when the aspect ratio is different?

Can you explain the exact use-case? It seems like you are setting the distance from 0,0,0 (or, well, the target you have set), based on the scene's width? Is it some form of a “responsive” way of rendering the scene?

Hey - I am using a webcam image as a fullscreen background layer with a resize function which scales from portrait to landscape using the canvas aspect ratio (thread). I would like to estimate the distance of my user-facing camera from my content using the iris width. I have both (iris center) points tracked in 3D and have converted them to screen pixels. The scene has been conformed and is in metric units.
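For context, the on-screen width I feed into the formula is just the pixel distance between the two tracked points, something like this (names are placeholders):

```ts
// Pixel distance between the two tracked (iris center) points after they have
// been projected from world space to screen space; this is the P in D' = (W x F) / P.
interface ScreenPoint { x: number; y: number; }

function trackedWidthPx(a: ScreenPoint, b: ScreenPoint): number {
  return Math.hypot(b.x - a.x, b.y - a.y);
}
```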


I found this and it looks like a viable solution (link).

Thanks again,
r

Here is a playground which shows the two working together. The results I am getting differ from the example and are affected by the canvas aspect ratio.


It feels like there is some misalignment between the input video's aspect ratio and the canvas aspect ratio. I also assume (and only assume here!) that unless you are using projection to get the canvas points in your world coordinates, using the actual canvas size in pixels is wrong if you are scaling the mesh.
These kinds of algorithms always make some assumption(s) regarding the scene. You will need to know the iris size (which I assume you have as a constant?) or the camera's FOV (which is extremely hard to get without calibration).
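To illustrate the kind of misalignment I mean: if the video is scaled cover-style to fill the canvas, a point measured in video-frame pixels has to be remapped before it can be compared against canvas pixels. A rough sketch (all names are placeholders, and your resize function may well do something different):

```ts
// Rough sketch: map a point from video-frame pixels to canvas pixels when the
// video is scaled "cover"-style (filling the canvas, cropping the overflow).
function videoToCanvasPx(
  p: { x: number; y: number },
  video: { width: number; height: number },
  canvas: { width: number; height: number }
): { x: number; y: number } {
  // With cover scaling, the larger of the two ratios wins and the other axis is cropped.
  const scale = Math.max(canvas.width / video.width, canvas.height / video.height);
  const offsetX = (canvas.width - video.width * scale) / 2;
  const offsetY = (canvas.height - video.height * scale) / 2;
  return { x: p.x * scale + offsetX, y: p.y * scale + offsetY };
}

// A width measured in video pixels changes by the same factor:
// widthCanvasPx = widthVideoPx * scale, which is where an aspect-dependent error can creep in.
```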
Sorry I can't really help more - it would take me quite some time to dive into the entire code :slight_smile:

Hey - thanks for looking regardless.

The distance turned out to be correct, but there were two errors being introduced. The first was caused by the position of the camera relative to the facemesh points (the points are measured from the center of the head, so the distance z needs to be added). The second came from cases where the background aspect ratio was greater than the video's (as I am scaling the bg layer to match portrait mode).
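Roughly, the two fixes look like this (all names are placeholders rather than the actual playground code):

```ts
// Sketch of the two corrections described above; names are placeholders.

// 1) The tracked points are measured from the center of the head, so the
//    estimate needs that z offset added back on.
function correctedDistance(estimatedZ: number, headCenterOffsetZ: number): number {
  return estimatedZ + headCenterOffsetZ;
}

// 2) When the background layer is scaled to fit a portrait canvas, the measured
//    pixel width picks up the same scale factor and has to be divided out again
//    before it is fed into D' = (W x F) / P. The exact factor depends on how the
//    background layer is resized; this is just the general shape of the fix.
function unscaledWidthPx(measuredPx: number, bgScale: number): number {
  return measuredPx / bgScale;
}
```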

The algorithm relies on camera intrinsics, which I am not sure are an easy fit for a browser, but it works.

Thanks again,
r