That is good enough for me at present. The # of users of both HoloLens2 & BJS, outside of Microsoft itself, has to be microscopic. That is good for bootstrapping, until it’s not. Your team's call. I was slightly playing dumb, in that I knew the answer before I asked the question. You post the source code.
One comment about Rift: I would not waste a second on it that you do not have to. Tethered VR is on the way out, and as the “Great One” said, “Do not skate to where the puck is. Skate to where it is going to be.”
Just having it run on a Quest as a standard 2D Android app is both confidence-building & useful for my own bootstrapping. Though I have been messing around for years, in Sept '19 I started working on a specific VR product. I did not consider any device on the market at the time to be worthy, so I concentrated on the tools I would need, voice / font tech & IK animation / editor built directly into JS, as well as the mesh objects needed.
With the Quest 2 out & one in my actual possession, I am now making sure that anything I made can be deployed. It is a big mistake to wait till the end to do that. The options are either BJS Native or Chromium. I prefer the former, but there are / were 2 major gaps:
1- User interface without Canvas 2D font support.
2- No WebAudio support.
UI
It has taken me about 3.5 months, but I have now eliminated this gap with a UI that is completely mesh based. It has a 3-sided main “portal” for controls, which you can literally summon / position with the snap of your fingers (hand tracking), and dismiss when not needed. There are also small arm surfaces that hold 3 buttons / controls each, for frequent needs.
BTW, I saw a recent PR on hand tracking for BJS Native. Is hand tracking currently operational?
WebAudio
I “talked” about doing a BJS Native plugin for WebAudio in the past. Yesterday, I actually started development. Taking into account:
- WebAudio has a well defined API.
- The system I was going to wrapper, LabSound, was originally a fork of WebKit’s implementation.
I am trying, as a first step, to write the entire WebAudio plugin, with all planned objects, methods, arguments & returns, but not actually do anything inside the calls. Sort of get all the pipes laid, including installation into a native repo script, building, and running / test code. Once that is out of the way, I will add the actual functionality.
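To make that "pipes laid first" idea concrete, here is a tiny sketch of the stub-first pattern in plain C++ — no N-API or LabSound involved, and all class / method names here are made up for illustration. Every method already has its final signature; unimplemented bodies just throw, so any test run immediately shows which pipe is still empty.

```cpp
#include <stdexcept>

// Illustrative stub-first skeleton: the full surface exists up front,
// each call throws until its real body is written.
class GainNodeStub {
public:
    void setGain(float /*value*/) {
        throw std::logic_error("GainNode.gain: not implemented yet");
    }
};

class AudioContextStub {
public:
    // Final signature is fixed now; the body comes later.
    GainNodeStub createGain() {
        throw std::logic_error("createGain: not implemented yet");
    }
    void resume()  {}   // trivial calls can be no-ops instead of throwing
    void suspend() {}
};
```

The payoff is that build scripts, install steps, and test harnesses can all be exercised against the complete API shape before any audio code exists.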
I am not sure about asking C++ questions on a JS forum, but, except for this one here, I will break it up into separate topics on specific things.
The question is: Is this the best template I should be using to wrapper the context object & all of the audio node objects?
https://github.com/nodejs/node-addon-examples/tree/main/6_object_wrap/node-addon-api
Here is my mock-up of the .h file for a context:
#ifndef AUDIO_CONTEXT_H
#define AUDIO_CONTEXT_H
#include <napi.h>
#include <LabSound.h>
#include <AnalyserNode.h>
#include <BiquadFilterNode.h>
#include <Buffer.h>
#include <BufferSource.h>
#include <ChannelMergerNode.h>
#include <ChannelSplitterNode.h>
#include <ConvolverNode.h>
#include <DelayNode.h>
#include <DynamicsCompressorNode.h>
#include <GainNode.h>
#include <OscillatorNode.h>
#include <PannerNode.h>
#include <ScriptProcessorNode.h>
#include <StereoPannerNode.h>
#include <WaveShaperNode.h>
#include <decodeAudioData.h>
#include <MediaElement.h>
#include <MediaStream.h>
#include <MediaTrack.h>
/**
* Wrapper for a realtime, as opposed to offline, audio context
*/
class AudioContext : public Napi::ObjectWrap<AudioContext> {
public:
static Napi::Object Init(Napi::Env env, Napi::Object exports);
AudioContext(const Napi::CallbackInfo& info);
static Napi::Value getDefaultAudioContext(const Napi::CallbackInfo& info);
private:
// methods of BaseAudioContext class
// (returning Napi::Value, since node-addon-api's InstanceMethod
//  only accepts member functions returning Napi::Value or void)
Napi::Value CreateAnalyser (const Napi::CallbackInfo& info);
Napi::Value CreateBiquadFilter (const Napi::CallbackInfo& info);
Napi::Value CreateBuffer (const Napi::CallbackInfo& info);
Napi::Value CreateBufferSource (const Napi::CallbackInfo& info);
Napi::Value CreateChannelMerger (const Napi::CallbackInfo& info);
Napi::Value CreateChannelSplitter (const Napi::CallbackInfo& info);
Napi::Value CreateConvolver (const Napi::CallbackInfo& info);
Napi::Value CreateDelay (const Napi::CallbackInfo& info);
Napi::Value CreateDynamicsCompressor(const Napi::CallbackInfo& info);
Napi::Value CreateGain (const Napi::CallbackInfo& info);
Napi::Value CreateOscillator (const Napi::CallbackInfo& info);
Napi::Value CreatePanner (const Napi::CallbackInfo& info);
Napi::Value CreateScriptProcessor (const Napi::CallbackInfo& info);
Napi::Value CreateStereoPanner (const Napi::CallbackInfo& info);
Napi::Value CreateWaveShaper (const Napi::CallbackInfo& info);
Napi::Value decodeAudioData (const Napi::CallbackInfo& info);
// methods specific to AudioContext, but not an offline context
Napi::Value createMediaElementSource(const Napi::CallbackInfo& info);
Napi::Value createMediaStreamSource (const Napi::CallbackInfo& info);
Napi::Value createMediaTrackSource (const Napi::CallbackInfo& info);
void resume (const Napi::CallbackInfo& info);
void suspend(const Napi::CallbackInfo& info);
};
#endif // AUDIO_CONTEXT_H
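To answer my own question partway: yes, the object_wrap example is the usual template for this. For what it's worth, here is a hedged sketch of what the matching .cpp's registration might look like under that pattern. The JS-visible method name strings are my guesses, and the real bodies would construct LabSound nodes; these just lay the pipes. One gotcha: node-addon-api's InstanceMethod only accepts member functions returning Napi::Value (or void), not Napi::Object, so the wrapped methods need to be declared accordingly in the header.

```cpp
#include "AudioContext.h"

// Register the class and its JS-visible methods on the exports object.
Napi::Object AudioContext::Init(Napi::Env env, Napi::Object exports) {
    Napi::Function func = DefineClass(env, "AudioContext", {
        InstanceMethod("createGain", &AudioContext::CreateGain),
        InstanceMethod("createOscillator", &AudioContext::CreateOscillator),
        // ... remaining create* / decode / media-source methods here
    });
    exports.Set("AudioContext", func);
    return exports;
}

AudioContext::AudioContext(const Napi::CallbackInfo& info)
    : Napi::ObjectWrap<AudioContext>(info) {
    // First pass: no LabSound context wired up yet.
}

// Stub: returns an empty object until GainNode wrapping exists.
Napi::Value AudioContext::CreateGain(const Napi::CallbackInfo& info) {
    return Napi::Object::New(info.Env());
}
```

DefineClass handles attaching the constructor and prototype methods, which is exactly the per-instance state management an audio context needs.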