Is it possible to call grabFrame() while an AR session is active? I am trying to grab a frame from the XRWebGLLayer used during the WebXR session.
```javascript
let constraints = { video: true }; // fallback if no rear camera is found
let track;
let interval;

function init() {
  navigator.mediaDevices.enumerateDevices()
    .then((devices) => {
      // Stop any previous stream before opening a new one.
      if (window.stream) {
        window.stream.getTracks().forEach(t => t.stop());
      }
      devices.forEach((device) => {
        console.log(device.kind + ": " + device.label + " id = " + device.deviceId);
        if (device.kind === 'videoinput') {
          // Prefer the rear ("environment") camera.
          constraints = {
            video: { facingMode: { exact: "environment" } }
          };
        }
      });
      return navigator.mediaDevices.getUserMedia(constraints);
    })
    .then((stream) => {
      window.stream = stream; // make stream available to console
      videoElement.srcObject = stream;
      track = stream.getVideoTracks()[0];
      const imageCapture = new ImageCapture(track);
      interval = setInterval(() => {
        imageCapture.grabFrame()
          .then((imgData) => { // imgData is an ImageBitmap
            canvas.width = imgData.width;
            canvas.height = imgData.height;
            canvas.getContext('2d').drawImage(imgData, 0, 0);
          })
          .catch(err => console.error('grabFrame() failed: ', err));
      }, 1000);
    })
    .catch((e) => {
      console.log(e.name + ": " + e.message);
    });
}
```
@RaananW can probably help you with this one.
Hey, sorry - was away.
What information do you need? The XRWebGLLayer is a private member of the XR session manager. It is also exposed through the renderTarget public member of the XR default experience helper.
If you need the camera details (or rather, what the device “sees”), this information is, as far as I know, not available.
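For what it's worth, here is a rough sketch of reading back that layer with the raw WebXR API, assuming `session` is the active XRSession and `gl` is the WebGL context the layer was created from. Note that this captures what was *rendered*; in immersive-ar the camera image is composited underneath by the system, so it will not appear in the framebuffer:

```javascript
// Sketch: read back the rendered pixels of the WebXR base layer.
// `session` and `gl` are assumed to exist; this is not a complete app.
function captureXRFrame(session, gl) {
  session.requestAnimationFrame((time, frame) => {
    const layer = session.renderState.baseLayer; // the XRWebGLLayer
    gl.bindFramebuffer(gl.FRAMEBUFFER, layer.framebuffer);
    const w = layer.framebufferWidth;
    const h = layer.framebufferHeight;
    const pixels = new Uint8Array(w * h * 4);
    gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    // readPixels returns rows bottom-up; flip before handing to a 2D canvas.
    const upright = flipRows(pixels, w, h);
    // ...hand `upright` to new ImageData(...) or a processing library...
  });
}

// Pure helper: reverse the row order of a tightly packed RGBA buffer.
function flipRows(pixels, width, height) {
  const rowBytes = width * 4;
  const out = new Uint8Array(pixels.length);
  for (let y = 0; y < height; y++) {
    const src = y * rowBytes;
    out.set(pixels.subarray(src, src + rowBytes), (height - 1 - y) * rowBytes);
  }
  return out;
}
```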
Thank you for the response. We wanted to use the camera image during the XR session in a JS computer vision library that works with the 2D camera image. I found a few docs requesting an image-grab feature in WebXR. It looks like something they are working on for future releases.
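In the meantime, for the non-immersive getUserMedia path in the snippet above, converting a grabbed frame into the raw pixels most CV libraries expect might look like this. The `canvas` argument and function names are placeholders, not from any specific library:

```javascript
// Sketch: turn a grabFrame() result (an ImageBitmap) into ImageData.
// Runs in the browser only; `canvas` is assumed to be a <canvas> element.
function frameToImageData(imageBitmap, canvas) {
  canvas.width = imageBitmap.width;
  canvas.height = imageBitmap.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(imageBitmap, 0, 0);
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}

// Pure helper: RGBA -> single-channel luma, a common CV pre-processing step.
function toGrayscale(rgba) {
  const gray = new Uint8Array(rgba.length / 4);
  for (let i = 0; i < gray.length; i++) {
    const r = rgba[i * 4], g = rgba[i * 4 + 1], b = rgba[i * 4 + 2];
    gray[i] = Math.round(0.299 * r + 0.587 * g + 0.114 * b);
  }
  return gray;
}
```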
The request
Since chrome implemented the new "immersive-ar" mode, and removed the "legacy-inline-ar" one, I can't figure out how to access the video...
Here are someone’s notes on what they accomplished.
One of the great advantages of AR is that one is able to extract information from the background in order to...
# Modularising the WebXR Device API
As discussed at the June face-to-face meeting, we are investigating how we could refactor the WebXR Device API into modules, in order to make progress independently in each module.
# Why should we modularise?
The point of modularising is to structure our work so that we make more consistent progress, and so that multiple features can progress independently. Note that this segmentation is NOT a replacement for incubation of features - all these features will still go through the process of incubation prior to entering “WG deliverable” status. This proposal specifically details how to structure all the work across the CG and the WG, not how to progress that work through the process (that will come in a later proposal).
First, we should define our logic for when to have module boundaries. This is a gray area; none of the statements below are intended to be absolutes. There are three main motivations for splitting an area of functionality into a separate module:
* **Standalone feature:** The functionality is architecturally a standalone feature that may be designed largely separately, and might even be reused in different contexts (e.g. a Lighting Estimation API).
* **Architectural layering:** The functionality is architecturally a low-level layer that multiple other functional areas will build on.
* **Velocity:** One area of functionality will take longer or is a much more advanced set than another related feature, or it involves a different community of feature designers. (e.g. a HUD DOM layer for smartphone AR cases might be much quicker to define than a complete multiview layer feature.)
We should be clear that we do not want to modularise simply for the sake of modularising; there is a cost to each finer granularity. We would like to start with a relatively minimal set of modules, as we believe it will be easier to break modules apart in the future than to recombine them.