Get OpenXR working on Oculus Quest

I have been successful in getting BabylonNative to run on a Quest 2, though it is just a 2D window. I realize that right now HoloLens 2 / Windows is the only thing supported for XR. The whole point of OpenXR is to be multi-platform, though, so I started looking into it, as well as rummaging through the files in the repo.

Oculus does have an SDK for OpenXR:
https://developer.oculus.com/downloads/package/oculus-openxr-mobile-sdk/

It is essentially a bunch of .so files to link in, while Babylon Native brings in source directly from the KhronosGroup OpenXR SDK repo. I am not sure whether the current build of my app is even including OpenXR. Is it?

I did do one experiment, which definitely had an effect. I added the line to AndroidManifest.xml that their SDK requires to tell the device it is a VR app, and put it on a branch on my fork. I now get the “Guardian” boundary that you see when an app goes into VR mode. The whole system is unresponsive other than that, though; I had to restart.

Does anyone think BJS-Native has a chance of working on Oculus, or have suggestions on how I might proceed next?

Pinging @syntheticmagus

Hi JCPalmer,

We definitely will get Babylon Native to run on Oculus devices — that is absolutely an objective — but I’d be very surprised if it worked out-of-the-box right now, especially on the Oculus Quest.

Right now, Babylon Native is making a number of assumptions based on platform; these aren’t the correct assumptions to be making long-term, they’re just stopgaps we’ve been using as we bootstrap the capabilities. One of these assumptions is that there’s only one XR path per platform, which means that since (as I understand it) the Oculus Quest uses a heavily customized Android distro, Babylon Native’s build system will probably just perceive this as “Android,” then go looking for ARCore instead of OpenXR.

Hopefully it won’t be too long before we start working to de-Windows-ify our OpenXR integration (and community interest will definitely help accelerate that), but there’s quite a bit of work that will have to be done before I would expect Babylon Native to run correctly on an Oculus Quest. Oculus Rift might require a little less work, so that might come first, but I still wouldn’t really expect it to work today, unfortunately. It’s on the radar, though, so it’s definitely coming!
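To make the “one XR path per platform” point concrete, here is a hypothetical sketch (not Babylon Native’s actual code; its real selection happens in the build system, and `SelectXrBackend` is an invented name) of why a Quest build falls into the ARCore path: compiled for the Quest, only `__ANDROID__` is defined, so this kind of rule cannot tell a Quest apart from a phone.

```cpp
#include <string>

// Hypothetical illustration: per-platform XR backend selection.
// A Quest build defines __ANDROID__ and nothing Oculus-specific,
// so a one-path-per-platform rule picks ARCore, not OpenXR.
std::string SelectXrBackend()
{
#if defined(_WIN32)
    return "OpenXR";   // Windows / HoloLens 2 path
#elif defined(__ANDROID__)
    return "ARCore";   // a Quest lands here: it looks like plain Android
#else
    return "None";     // no XR assumption made for this platform
#endif
}
```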


That is good enough for me at present. The # of users of both HoloLens 2 & BJS, outside of Microsoft itself, has to be microscopic. That is good for bootstrapping, until it’s not; your team’s call. I was slightly playing dumb, in that I knew the answer before I asked the question: you post the source code.

One comment about Rift: I would not waste a second on it that you do not have to. Tethered VR is on the way out, and as the “Great One” said, “Do not skate to where the puck is. Skate to where it is going to be.”


Just having it run on a Quest as a standard 2D Android app is both confidence building & useful for my own bootstrapping. Though I have been messing around for years, in Sept '19 I started working on a specific VR product. I did not consider any device on the market at the time to be worthy, so I concentrated on the tools I would need (voice / font tech & IK animation / editor built directly into JS), as well as the mesh objects needed.

With a Quest 2 now in my actual possession, I am making sure that anything I made can be deployed. It is a big mistake to wait till the end to do that. The options are either BJS Native or Chromium. I prefer the former, but there were / are 2 major gaps:

1- User interface without Canvas 2D font support.
2- No WebAudio support.


UI

It has taken me about 3.5 months, but I have now eliminated this gap with a UI that is completely mesh based. It has a 3-sided main “portal” for controls, which you can literally summon / position with a snap of your fingers (hand tracking), and dismiss when not needed. There are also small arm surfaces that hold 3 buttons / controls each, for frequent needs.

BTW, I saw a recent PR on hand tracking for BJS Native. Is hand tracking currently operational?


WebAudio

I “talked” about doing a BJS Native plugin for WebAudio in the past. Yesterday, I actually started development, taking into account that:

  • WebAudio has a well defined API.
  • The system I was going to wrap, LabSound, was originally a fork of WebKit’s implementation.

As a first step, I am trying to write the entire WebAudio plugin, with all planned objects, methods, arguments & returns, but not actually doing anything inside the calls: sort of getting all the pipes laid, including installation into a native repo script, building, and running / test code. Once that is out of the way, I will add the actual functionality.
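The “lay the pipes first” approach can be sketched in plain C++ (leaving N-API and LabSound out so the sketch is self-contained; `WebAudioStub`, `GainNode`, and the call log are all invented for illustration). Every planned method exists with its final signature, but bodies are stubs; once the plugin builds and registers end to end, each stub gets swapped for a real LabSound call.

```cpp
#include <cstdint>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical stub-first sketch: final signatures, no real behavior yet.
namespace WebAudioStub
{
    struct GainNode
    {
        float gain = 1.0f; // placeholder state, no audio processing yet
    };

    class AudioContext
    {
    public:
        // Stub body: records the call and returns a default node.
        std::shared_ptr<GainNode> CreateGain()
        {
            m_callLog.push_back("createGain");
            return std::make_shared<GainNode>();
        }

        // Not implemented yet; failing loudly beats silently doing nothing.
        void DecodeAudioData(const std::vector<uint8_t>&)
        {
            throw std::runtime_error("decodeAudioData: not implemented yet");
        }

        const std::vector<std::string>& CallLog() const { return m_callLog; }

    private:
        std::vector<std::string> m_callLog; // lets early tests verify the plumbing
    };
}
```

The call log is just one cheap way to confirm, from test code, that the pipes connect before any audio actually flows.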

I am not sure about asking C++ questions on a JS forum, but except for the one here, I will break them up into separate topics on specific things.

The question is: is this the best template I should be using to wrap the context object & all of the audio node objects?
node-addon-examples/6_object_wrap/node-addon-api at main · nodejs/node-addon-examples · GitHub
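The core idea behind that object_wrap example is that the JS-facing object owns exactly one native object and forwards calls to it. Here is a self-contained illustration of just the ownership part, without pulling in napi.h (in the real plugin the wrapper would derive from `Napi::ObjectWrap<T>` and `Native` would be a LabSound context; both names below are placeholders):

```cpp
#include <memory>
#include <utility>

// Illustration only: the ownership shape behind the object_wrap pattern.
// The JS-visible wrapper holds a unique_ptr to the native object, so the
// native object's lifetime follows the JS object's lifetime.
template <typename Native>
class Wrapper
{
public:
    template <typename... Args>
    explicit Wrapper(Args&&... args)
        : m_native(std::make_unique<Native>(std::forward<Args>(args)...))
    {
    }

    // Forwarding accessor: the JS method bodies call through this.
    Native& native() { return *m_native; }

private:
    std::unique_ptr<Native> m_native; // destroyed when the wrapper is destroyed
};
```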

Here is my mock-up of the header file for a context:

#ifndef AUDIO_CONTEXT_H
#define AUDIO_CONTEXT_H

#include <napi.h>
#include <LabSound.h>

#include <AnalyserNode.h>
#include <BiquadFilterNode.h>
#include <Buffer.h>
#include <BufferSource.h>
#include <ChannelMergerNode.h>
#include <ChannelSplitterNode.h>
#include <ConvolverNode.h>
#include <DelayNode.h>
#include <DynamicsCompressorNode.h>
#include <GainNode.h>
#include <OscillatorNode.h>
#include <PannerNode.h>
#include <ScriptProcessorNode.h>
#include <StereoPannerNode.h>
#include <WaveShaperNode.h>
#include <decodeAudioData.h>

#include <MediaElement.h>
#include <MediaStream.h>
#include <MediaTrack.h>

/**
 *  Wrapper for a realtime, as opposed to offline, audio context
 */
class AudioContext : public Napi::ObjectWrap<AudioContext> {
public:
	static Napi::Object Init(Napi::Env env, Napi::Object exports);
	AudioContext(const Napi::CallbackInfo& info);
	static Napi::Object getDefaultAudioContext(const Napi::CallbackInfo& info);

private:
	// methods of BaseAudioContext class
	Napi::Object CreateAnalyser          (const Napi::CallbackInfo& info);
	Napi::Object CreateBiquadFilter      (const Napi::CallbackInfo& info);
	Napi::Object CreateBuffer            (const Napi::CallbackInfo& info);
	Napi::Object CreateBufferSource      (const Napi::CallbackInfo& info);
	Napi::Object CreateChannelMerger     (const Napi::CallbackInfo& info);
	Napi::Object CreateChannelSplitter   (const Napi::CallbackInfo& info);
	Napi::Object CreateConvolver         (const Napi::CallbackInfo& info);
	Napi::Object CreateDelay             (const Napi::CallbackInfo& info);
	Napi::Object CreateDynamicsCompressor(const Napi::CallbackInfo& info);
	Napi::Object CreateGain              (const Napi::CallbackInfo& info);
	Napi::Object CreateOscillator        (const Napi::CallbackInfo& info);
	Napi::Object CreatePanner            (const Napi::CallbackInfo& info);
	Napi::Object CreateScriptProcessor   (const Napi::CallbackInfo& info);
	Napi::Object CreateStereoPanner      (const Napi::CallbackInfo& info);
	Napi::Object CreateWaveShaper        (const Napi::CallbackInfo& info);
	Napi::Object decodeAudioData         (const Napi::CallbackInfo& info);

	// methods specific to AudioContext, but not an offline context
	Napi::Object createMediaElementSource(const Napi::CallbackInfo& info);
	Napi::Object createMediaStreamSource (const Napi::CallbackInfo& info);
	Napi::Object createMediaTrackSource  (const Napi::CallbackInfo& info);
	void resume (const Napi::CallbackInfo& info);
	void suspend(const Napi::CallbackInfo& info);
};

#endif // AUDIO_CONTEXT_H
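One shape the implementation behind a header like this tends to take is that every create* method hands out a node bound to the same underlying engine object, mirroring how a LabSound context hands out nodes. A self-contained sketch of just that sharing (with `Engine`, `AudioNode`, and `AudioContextSketch` as invented stand-ins, not real LabSound or N-API types):

```cpp
#include <memory>

// Invented stand-in for the real audio engine state (e.g. LabSound's device).
struct Engine
{
    int nodeCount = 0;
};

// Every node keeps the engine alive via shared ownership.
struct AudioNode
{
    explicit AudioNode(std::shared_ptr<Engine> engine)
        : m_engine(std::move(engine))
    {
        ++m_engine->nodeCount;
    }
    std::shared_ptr<Engine> m_engine;
};

class AudioContextSketch
{
public:
    AudioContextSketch() : m_engine(std::make_shared<Engine>()) {}

    // Factory methods all bind new nodes to the one shared engine.
    std::shared_ptr<AudioNode> CreateGain()     { return std::make_shared<AudioNode>(m_engine); }
    std::shared_ptr<AudioNode> CreateAnalyser() { return std::make_shared<AudioNode>(m_engine); }

    int NodeCount() const { return m_engine->nodeCount; }

private:
    std::shared_ptr<Engine> m_engine;
};
```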

Sounds like this’ll be a pretty awesome project!

I haven’t tested it myself—the contributors who are writing that code right now are spearheading that—but my understanding is that hand-tracking is operational, though the OpenXR integration is only in for Windows devices so far. In theory, this means that most of the work to get hand tracking for other OpenXR devices like the Quest should already be done, too: all that’s left should be the work mentioned above to make the OpenXR integration not Windows-specific.

I think of this more as a Babylon forum than a JS-specific forum—WebGL and art questions are common, too—so a C++ question about Babylon Native makes perfect sense to me. :smiley:

Actually, partly in response to your own feedback in the prior thread you linked to, we have since codified and documented a quick way to create and incorporate new components to extend Babylon Native. That is the way I’d recommend starting your new WebAudio plugin, as it will allow you to have it in its own repo, developed independently from, but built together with, Babylon Native itself; that will hopefully make your short-term workflow quite a bit easier, and we can always bring the code into the main repository later if we decide that’s the right thing to do. (We’ve also added some pretty substantial documentation about components as well as other parts of the Babylon Native architecture, in case that’s useful to you.)

Regarding the header itself, it looks like a fine start to me, but I’m not sure who would be consuming that header. As a model for your new plugin, you might want to take a look at NativeCapture, which is a pretty recent and modern addition exhibiting our current best practices for component design. A few features worth noting:

  • The public contract header exposes a very minimalistic API, requiring very little from C++ consumers.
  • The N-API type actually has no header exposure whatsoever; it doesn’t need any because this isn’t intended to be consumed through C++, but through JavaScript. The Console polyfill provides an example of how you can do this and still have a header if you want to; the header just isn’t exposed through the public contract.
  • The CMakeLists.txt is mostly boilerplate, but this is where you bring in the external dependencies you want to wrap. This is the least C++-ish part of Babylon Native, so the compiler unfortunately won’t help with it, but most of the CMakeLists.txt files in Babylon Native should be modeling good patterns to follow, and we do have at least some documentation about this.
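The “no header exposure whatsoever” point above can be illustrated with the opaque-handle idea, compressed into one self-contained file (all names below are placeholders for illustration, not Babylon Native’s actual API): consumers see only a tiny surface, while the implementation class stays invisible outside its own translation unit.

```cpp
#include <memory>
#include <string>

// Hidden implementation: in a real component this class would live in a
// .cpp file the consumer never includes; here an anonymous namespace
// plays that role, so nothing outside this file can even name the type.
namespace
{
    class WebAudioImpl
    {
    public:
        std::string Describe() const { return "webaudio-component"; }
    };
}

namespace Plugins { namespace WebAudio {

    // The entire public contract: an opaque handle plus one entry point.
    struct Handle
    {
        std::shared_ptr<void> impl; // type-erased; consumers never see WebAudioImpl
    };

    Handle Initialize()
    {
        return Handle{ std::make_shared<WebAudioImpl>() };
    }

}}
```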

Ok, over the weekend I started looking at the extension options you’ve provided. I think the template method is the best / lowest-drama one; I do not wish to become a CMake expert.

I already have a repo, just a README.md really, but to my surprise it already has a watcher, so I will just move everything over rather than fork, so as not to lose whoever that is. Your repo is MIT & I am Apache 2; I will just add your license file, changing the name of the file.

Unless some of this can be done in CMakeLists.txt, I think I just need to add this to the front of my README.md:

  • Clone this repo adjacent to the Babylon Native repo.
  • Change to the bjs native repo directory & run this once:
cd Dependencies
git clone https://github.com/LabSound/LabSound.git

cd ..
echo [submodule "Dependencies/LabSound"]>>.gitmodules
echo path = Dependencies/LabSound>>.gitmodules
echo url = https://github.com/LabSound/LabSound.git>>.gitmodules

git submodule update --init --recursive
  • Every time you need to call cmake, do so with the following syntax:
cmake -D EXTENSIONS_DIRS="../BabylonNativeWebAudio"

BTW, when do you need to call cmake? Or is it never with Android / Android Studio, which does it for you?


I will now start looking at your references to capture as a template. The context.h I posted above was primarily to hold the includes of each of the different headers of all the components, and to document my own understanding. I want each node type to have its own file, so each should have its own header. I was planning for context.cpp to be the consumer of this header.


Android is my must-have for WebAudio, just as OpenXR is your must-have for HoloLens. Combine that with the statement in the LabSound readme that they support Windows 10, and:

In the past, LabSound has been demonstrated to work on iOS, Android, and Linux via JACK.

This dictates that I am going to be doing all my dev & testing on a Quest. If there are problems, I want to know ASAP. This is what I meant last post about bootstrapping from Oculus.

As I also have a BJS-Native-capable UI, I am just going to have a scene that turns into a console for logging. I have not really seen where in the directory labyrinth to actually put all the JS files needed for my scene.

I have found that Babylon.max.js somehow gets there, but what if I do not want max, or want preview rather than production, or a specific version? Will it keep coming back if I delete it?

Also, and I even noticed this when I did a Windows build last year: how do you make the window bigger? I want to be able to read the console / scene. It could be bigger & wider on a Quest.

There seems to be all this focus on building, but nothing on inserting what it is you actually want to run, namely a scene.

I realized, after posting this time, that if Android Studio does the cmake, then how can plugins using the “extension template” be built on Android?