Hi!
I’m working on QR code tracking for WebXR on the HoloLens. I’m almost there: I can recognize QR codes and calculate where they are in space. Once one has been found I want to create an object at that location. However, I find that objects I place while already in XR mode (currently immersive-vr, since immersive-ar doesn’t seem to work anymore on the HoloLens) are very jittery, while objects that were already in the scene before entering XR are steady as a rock.
My assumption is that this is because anchors are made for the preexisting objects. Correct me if I’m wrong there.
However, when trying to create an anchor through anchors.addAnchorAtPositionAndRotationAsync(closestPoint, new Quaternion())
I get the following error:
OperationError: Failed to execute 'createAnchor' on 'XRFrame': Anchor creation failed.
I create the XR experience like so:
var defaultXRExperience = await newScene.createDefaultXRExperienceAsync({
  uiOptions: {
    sessionMode: supportedAR ? "immersive-ar" : "immersive-vr",
    referenceSpaceType: "local-floor"
  },
  optionalFeatures: true
});
and directly after that I try and enable the anchor stuff:
const anchorSystem = defaultXRExperience.baseExperience.featuresManager.enableFeature(
  WebXRAnchorSystem,
  "latest"
) as WebXRAnchorSystem;
I use what I get back from that to try and create anchors.
Am I doing something wrong? Or is there maybe another way to get the 3D models to not jitter as much when instantiated?
Thanks!
are you sure you start an AR session?
Well, no, since that hasn’t been available for some reason. On my HoloLens 2,
const supportedAR = await WebXRSessionManager.IsSessionSupportedAsync("immersive-ar");
gives false.
Here’s someone else seemingly experiencing the same. I made a report using the Feedback Hub. (I also tried reinstalling the entire thing and I’ve updated everything, but the issue remains.)
I was thinking I could continue with immersive-vr instead, but I guess this is one of its limitations.
yep, anchors only work on AR sessions
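Since anchors are AR-only, one defensive pattern is to wrap anchor creation so an immersive-vr session (where `createAnchor` throws the "Anchor creation failed" error above) degrades to unanchored placement instead of breaking the app. This is a sketch, not the poster's code; `AnchorSystemLike` is a hypothetical stand-in for Babylon's `WebXRAnchorSystem`:

```typescript
// Minimal stand-in interface for Babylon's WebXRAnchorSystem (assumption:
// only the one method the thread uses is modeled here).
interface AnchorSystemLike {
  addAnchorAtPositionAndRotationAsync(
    position: unknown,
    rotation: unknown
  ): Promise<unknown>;
}

// Try to create an anchor; on failure (e.g. a VR session, where the
// anchors module is unavailable) return null so the caller can still
// place the object, just without anchor-based stabilization.
async function tryCreateAnchor(
  anchors: AnchorSystemLike | null,
  position: unknown,
  rotation: unknown
): Promise<unknown | null> {
  if (!anchors) return null; // feature was never enabled (e.g. VR session)
  try {
    return await anchors.addAnchorAtPositionAndRotationAsync(position, rotation);
  } catch (e) {
    console.warn("Anchor creation failed, placing object unanchored:", e);
    return null;
  }
}
```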
I see, that’s unfortunate. Thanks for the quick response!
I just checked with other WebXR environments that are not Babylon, to make sure it’s not a Babylon thing, but with the official WebXR check I get this:
I keep on forgetting whether or not I ever managed to get AR to work on the HoloLens. Let me check a few things and get back to you.
@HedwigAR
Sorry to necro this thread
I can recognize QR codes and calculate where they are in space.
Could you outline how this works?
I’m working on WebXR + Meta Quest 3 (but with ThreeJS), overlaying real-life machines with live-streamed data. I’m looking into doing the initial positional calibration with markers or fiducials, but am surprised to find that, although the WebXR Raw Camera Access Module exists, it won’t be implemented on the Meta Quest 3. So from a WebXR environment there seems to be no way to get at the camera data, and thus no way to recognize QR codes.
How does it work in your case? Is that a HoloLens feature? Does Babylon.js render AR, but without WebXR somehow, and thus have webcam-esque camera access? Or is it WebXR, and the HoloLens has the WebXR Raw Camera Access Module implemented?
Hi!
Sure, happy to help. I don’t have a silver-bullet solution; I’m basically roughly guessing where the QR code is based on the webcam feed. I get an x/y back of where it is, and then I transform that roughly into XR space.
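The "transform that roughly into XR space" step can be sketched as pure math: map the detected pixel to a view-space ray, then intersect or project that ray into the scene. This is an illustrative sketch, not the poster's code; it assumes a simple pinhole camera whose optical axis matches the headset's forward axis, square pixels, and a known vertical field of view:

```typescript
type Vec3 = { x: number; y: number; z: number };

// Map a QR detection at pixel (px, py) in a width×height frame to a
// normalized view-space ray direction (-z forward, +y up, right-handed).
// fovYRad is the camera's vertical field of view — an assumed calibration
// value here, not something the webcam API reports directly.
function pixelToViewRay(
  px: number, py: number,
  width: number, height: number,
  fovYRad: number
): Vec3 {
  // Normalized device coordinates in [-1, 1]; pixel rows grow downward,
  // so flip the y axis.
  const ndcX = (px / width) * 2 - 1;
  const ndcY = 1 - (py / height) * 2;
  const tanY = Math.tan(fovYRad / 2);
  const tanX = tanY * (width / height); // assumes square pixels
  const dir = { x: ndcX * tanX, y: ndcY * tanY, z: -1 };
  const len = Math.hypot(dir.x, dir.y, dir.z);
  return { x: dir.x / len, y: dir.y / len, z: dir.z / len };
}
```

In Babylon specifically, `scene.createPickingRay` with the XR camera could serve a similar purpose; the function above just makes the geometry explicit.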
I get the webcam feed by doing a ‘normal’ web thing:
navigator.mediaDevices
  .getUserMedia(mediaStreamConstraints)
  .then(stream => {
    // The canvas is used later in loop() to grab frames for QR detection.
    const canvas = document.createElement("canvas");
    const video = document.createElement("video");
    const { width, height } = stream.getTracks()[0].getSettings();
    if (width) video.width = width;
    if (height) video.height = height;
    video.srcObject = stream;
    video.play().then(() => {
      loop(); // detect QR codes here
    });
  })
  .catch(reason => {
    console.error("Error while creating stream: " + reason);
    stopped.current = true;
  });
And then detect the QR codes by running that video feed through this library: GitHub - zxing-js/library: Multi-format 1D/2D barcode image processing library, usable in JavaScript ecosystem.
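For illustration, the `loop()` placeholder above might have a shape like the sketch below, with the frame grab and zxing decode injected as a function so the control flow is visible on its own. All names here are illustrative, not the poster's code, and zxing-js's real API (e.g. `BrowserQRCodeReader`) works differently:

```typescript
type Detection = { text: string; x: number; y: number };

// Poll decodeFrame once per "frame"; in the real code, decodeFrame would
// draw the video element onto the canvas and run the zxing reader on it.
// Returns the number of frames in which a QR code was detected.
function scanFrames(
  decodeFrame: () => Detection | null,
  onDetect: (d: Detection) => void,
  maxFrames: number
): number {
  let hits = 0;
  for (let i = 0; i < maxFrames; i++) {
    const d = decodeFrame();
    if (d !== null) {
      hits++;
      onDetect(d); // e.g. hand the (x, y) off to the XR-space transform
    }
  }
  return hits;
}
```

In a browser the loop would be driven by `requestAnimationFrame` rather than a `for` loop, but the detect-then-hand-off structure is the same.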
Here’s some more info on getting the feed from a webcam: MediaDevices: getUserMedia() method - Web APIs | MDN
I’m not sure if that would work in Meta stuff, I haven’t really worked with that ecosystem. Hope that helps!