Camera stream on WebGL

What should I do if I need to put virtual shapes, say planes, into the device camera's stream?
I have come across this: Add Video Texture - Babylon.js Documentation.
However, it describes applying a video or camera stream as part of a mesh in a scene.

What if I need the video to be the whole scene, not just part of one object in it, with some meshes, say planes, appearing at certain locations I place inside the video stream?

Pinging @syntheticmagus

Hi fselcukcan,

Depending on exactly what effect you’re going for, this could be either quite simple or pretty tricky. :smile: If all you’re trying to do is have a video in the background with various other elements in the foreground, you can do that with just a VideoTexture and a Layer.

https://www.babylonjs-playground.com/#X4FPH2

With something like that, you could have whatever meshes you want in front while the video plays in the background (a webcam feed is used in this example). You’d have to add your own mechanism for synchronizing what the foreground and background are doing, if that’s something you wanted; but the basic technique is quite simple.
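In case it helps, here is a minimal sketch of that technique, along the lines of the playground above. It assumes Babylon.js is loaded as the global `BABYLON`; the function name is my own.

```javascript
// Render a webcam feed behind all meshes using a background Layer
// plus a VideoTexture created from the webcam.
function createWebcamBackground(scene) {
  // A Layer constructed with isBackground = true renders behind the scene's meshes.
  const background = new BABYLON.Layer("webcamBackground", null, scene, true);

  // CreateFromWebCam asks the browser for camera access and hands back
  // a VideoTexture once the stream is available.
  BABYLON.VideoTexture.CreateFromWebCam(
    scene,
    (videoTexture) => {
      background.texture = videoTexture;
    },
    { maxWidth: 1280, maxHeight: 720 } // resolution constraints are illustrative
  );

  return background;
}
```

Any meshes you then add to the scene will draw in front of the webcam layer.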

If, however, you’re wanting to do things in the foreground based on what’s happening in the background video (and without pre-set scripting/processing), that gets into the realm of computer vision/AR, which is a bit trickier.

https://www.babylonjs-playground.com/#853M7X#12

This is an experiment we’re using to explore some AR capabilities in Babylon. Try clicking somewhere in the video feed (ideally somewhere visually identifiable, not just a spot on a blank wall, for instance) to pin an annotation to that point in the video. This experiment helps to show what’s possible (and what’s required to make it so), but keep in mind that this is a very early experiment and not something to take a real dependency on yet. However, if that’s the sort of capability you’d be interested in, definitely let me know!

Examples are very good. Thanks a lot.
I think the second one places the PNG texture wherever the user taps. Is that right?

I would like to explore the first one more.

I would like to put the camera stream across the full scene and screen width, as if it were the camera application, at the full quality the camera offers. Then I would like to get the user's geolocation, and other geolocations (possibly from a server), and show them inside the video.

  • If they are close, scale them up; if they are far away, scale them down, until they do not show at all.
  • Also important are the camera's angle on the horizon (I think this is called "heading": rotation around the Cartesian y axis, the vertical; geographically the z axis or polar r) and the angular span it sees. So I would show only objects that fall within that angle.
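The distance part of the list above can be sketched with plain JavaScript; these helpers (the names and the linear falloff are my own assumptions, not anything Babylon provides) compute the great-circle distance between two geolocations and turn it into a marker scale with a visibility cutoff.

```javascript
// Haversine distance in meters between two lat/lon points (degrees).
function haversineMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Scale from 1 (at the camera) down to 0 at maxVisibleMeters;
// the mesh can simply be disabled when this returns 0.
function markerScale(distanceMeters, maxVisibleMeters) {
  if (distanceMeters >= maxVisibleMeters) return 0;
  return 1 - distanceMeters / maxVisibleMeters;
}
```

You would apply the result to each marker mesh's scaling (and toggle its visibility) every time the user's position updates.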

I do not know whether the orientation of the camera can be read from JavaScript at all, in both polar angles, since the device will be held upright in use so the camera can see the environment.
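For the heading part, browsers do expose device orientation via the `DeviceOrientationEvent` listener (`alpha`/`beta`/`gamma`, with some per-platform caveats around compass calibration and permissions). Once you have a compass heading, the visibility test is pure math; this sketch (function names and the bearing formula are my own, using the standard initial-bearing formula) checks whether a point of interest falls inside the camera's horizontal field of view.

```javascript
// Initial compass bearing (degrees, 0 = north, 90 = east) from
// point 1 to point 2, both given as lat/lon in degrees.
function initialBearingDeg(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const toDeg = (r) => (r * 180) / Math.PI;
  const dLon = toRad(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(toRad(lat2));
  const x =
    Math.cos(toRad(lat1)) * Math.sin(toRad(lat2)) -
    Math.sin(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.cos(dLon);
  return (toDeg(Math.atan2(y, x)) + 360) % 360;
}

// True if bearingDeg lies within fovDeg centered on headingDeg,
// handling the 359°/1° wraparound.
function isInHorizontalFov(headingDeg, bearingDeg, fovDeg) {
  let diff = Math.abs(headingDeg - bearingDeg) % 360;
  if (diff > 180) diff = 360 - diff;
  return diff <= fovDeg / 2;
}
```

In the browser you would feed `isInHorizontalFov` the heading from a `deviceorientation` event and the bearing from the user's position to each marker, showing only the markers that pass the test.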