Web development for WebVR and WebAR

Hi, I'm working on my bachelor thesis, "Three-Dimensional Visualisation (AR and VR) in a Web Browser".
I have looked through a lot of frameworks and APIs, but I'm not sure which one to use.
I need a framework that works with both augmented and virtual reality, preferably markerless.
It needs collision detection, the possibility of a walking simulator, and easy interaction (the last point is not so important).
The best frameworks I have found so far are Babylon.js and A-Frame, in combination with AR.js.
I myself prefer Babylon.js, but it has no real AR implementation.

I also found 8th Wall, but I don't want to pay or rely on their website to host everything.

I'm also looking into OpenCV.js, and it's really useful and deep, but I'm not sure I should start my own ground-plane tracking when there are already working solutions.
I would love to hear other opinions, and about other frameworks that work well.

Thanks, and sorry for my English; I'm from Germany :smiley:


Definitely adding @syntheticmagus here, who did a great post about a brand-new (still in alpha) project: BabylonAR : Babylon AR - Babylon.js - Medium


Hi, thanks. I have seen the Babylon.js AR section, but those are marker-based (ArUco or Hiro markers).
I also did some tests with Babylon.js and AR.js by jerome_etienne; that works fine with markers, but it hasn't been updated in two years, so I'm not sure whether there are newer solutions. The other tests I made used WebXR, but I can't find any augmented reality support there, only VR.


I am pretty sure @syntheticmagus could give us a SLAM update, as it is something he was looking at.


[Thanks for the ping, sebavan!]

Hi player11,

Babylon AR (or Babylon CV; we’re working on it :smiley:) is brand new this year. To your point about Babylon.js having “no real ar implementation,” Babylon AR is directly intended to fill that gap. You are correct that the current capabilities are built around marker tracking, mainly because we could stand up that capability very quickly. We plan to keep adding new capabilities to it, though; and as sebavan has suggested, SLAM (marker-less world tracking) is on the list of technologies we’re looking into.

Scanning up the conversation a bit, it sounds like you’ve picked a very interesting, if challenging, bachelor’s thesis. How quickly do you need to move forward on this? If you have sufficient time, as I said, we are hoping to add new features to Babylon AR in the near future. However, if you need to move forward very quickly and the capabilities you need aren’t available yet, I would encourage you either to look for ways to use markers as a stand-in solution or to go with something like 8th Wall. Marker-less tracking is a difficult problem at the best of times; and, given that and the other work streams currently in flight, I unfortunately can’t make any guesses about when Babylon AR might start adding such advanced features.

I can think of one other option if you really want to go deep on this (and I mean deep; the following work could very easily make for an impressive bachelor’s thesis all on its own). As tricky as marker-less tracking is, there are open-source implementations out there. They’re complicated, and so far as I know there aren’t any built for Web – but that is something that, if you wanted to, you could attempt. Using Emscripten and the workflow we’ve developed for Babylon AR, you could try to adapt something like ORB-SLAM 2 to build to WebAssembly and run on the Web; that would give you a real, open-source SLAM that you could use from a JavaScript engine like Babylon. That would probably take an astonishing amount of work, though, including working with hot-off-the-presses browser technologies like WASM threading. I can’t really recommend that you tackle something like that, particularly since you might be able to make do with other existing technologies. I only bring it up because it is a possibility. (And, honestly, because I think it’s really cool. :smile:)

Hope some of this was helpful/informative, or at least interesting. Good luck with your thesis!


Hi player11, I worked with jerome_etienne two years ago on the first attempt at AR.js integration with BABYLON. After completing the first BJS AR proof of concept, Jerome and I met online, and he improved that first solution shortly after. It was a great experiment.

There was an important (subtle) finding that should be useful to your thesis.

And that is a principle about AR purpose, not just novelty.

First, take a look at my avatar: it is a forest fire in AR.

We put the “marker” on the table and were about to visualize 3D adventure games there.
But then realized something simple but important…

what is the purpose of the Camera Background?

There was none.

We realized it was COOL to see a battle in your living room. But equally cool… to see it in space!
And that reduced the problem complexity drastically.

So we see that AR is a great novelty, but that the camera background needs a clear purpose.

Good AR is more than novelty when the camera has a clear purpose to the user.

Ask yourself: why does the user need to see the “Real World View”?

What purpose does the “Real World View” give to the “3D World View” (and vice versa)?

If there is no purpose, consider the option to just make the background stars.

On the flipside, the more PURPOSE you can infuse between “Real World View” and “3D World”, the more that seems to be a good AR use-case to us.

Just a thought… : )


Thanks a lot, that's very helpful. I think Babylon is doing a great job, and I'd love to see new features.
I have about three months left for my thesis, but I think once I'm done I'm going to look further into it; it's so interesting. I'm going to have a look at ORB-SLAM 2; it sounds interesting and useful for the future. I also looked into 8th Wall, which runs on SLAM, but it didn't feel like real ground-plane detection: it's not so stable, and I don't know why they don't need to scan or see the ground. It feels more like geotracking without asking for GPS permission :smile: . On the other hand, it works too well for me to believe it's only geotracking.

I also found some examples of plane detection by point-cloud segmentation, and it looked very good. I don't think I can work with it in my thesis, but it's very interesting for the future.

I'm going to continue my bachelor quest, and if nothing else helps, I'll stick with markers or use 8th Wall. Thanks again for the detailed reply :smiley:

Hi, thanks for your reply. Funnily enough, some days ago I came across your forest fire.

It's an interesting idea; if it's okay with you, I would love to use it in my bachelor thesis.
There is also the question of immersion: if a battle takes place in space, without any inclusion of the real world, is it still AR, or is it VR? On the other hand, it brings the virtual world into the real world, so it's hard to really define whether it's more VR or AR.

Thanks a lot for your reply :smiley:


Sounds like a good plan! As a side-note about 8th Wall, the instability may diminish after a few seconds of using the app and moving the camera. Without going into too many specifics, the problem of initializing a monocular SLAM without a depth camera is under-constrained (i.e., there isn’t enough information available to actually solve for all the variables) until the camera moves around a bit, so many commercial SLAMs begin by relying on some variation of inertial odometry until they can gain enough information about the world (from a moving camera) to do real visual tracking. Consequently, things can appear noticeably unstable until the algorithm “gets tracking.” Ground plane tracking is a slightly different technology altogether – you can have a SLAM without having ground plane tracking, or any kind of plane detection at all – but I don’t know enough about 8th Wall’s technology to speculate intelligently about what their solution does under the hood.
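To make the under-constrained point concrete, here is a tiny numeric sketch of the monocular scale ambiguity. The focal length and point coordinates are arbitrary made-up values; the point is that a scene scaled up uniformly projects to exactly the same pixels, so a single image cannot recover absolute scale:

```javascript
// Pinhole projection: a 3D point (x, y, z) in camera space lands at
// pixel coordinates (f * x / z, f * y / z) for focal length f.
function project(f, [x, y, z]) {
  return [f * x / z, f * y / z];
}

const f = 100;            // focal length in pixels (arbitrary)
const point = [1, -2, 4]; // a 3D point 4 units in front of the camera

// Scale the entire scene by 3: same direction, three times as far away...
const scaled = point.map(c => c * 3);

// ...yet both scenes produce exactly the same pixel measurement, so the
// image alone cannot tell us which scale is the real one.
console.log(project(f, point));  // [25, -50]
console.log(project(f, scaled)); // [25, -50]
```

This is exactly the gap that inertial data or camera motion has to fill before a monocular SLAM can stabilize.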


Yes, please use the concept. Value question for AR:

What value does the “Real World View” give the user?


Why must the background be the RWV when it could just as easily be a skybox?

Perhaps when we find that compelling reason it might indicate a compelling AR use-case.


Ahh, this is really cool. I've been looking into SLAM, and it looks very promising for the future. The point-cloud-based recognition of the surrounding world, building a virtual, feature-based map of it, looks really good. It's just a pity that there is no JavaScript implementation. While looking into WebAssembly, I came across Emscripten, which seems to be able to compile C++ (and Java?) to JavaScript. Does that mean I can write a C++ or Python script and have it compiled to JavaScript? It seems a little too easy :sweat_smile:, but that's only a theoretical view. I will check how the compiling works, how big the compiled file is compared to the original, and some other things. I'm drifting a little away from my original thesis, but I hope I can use this information in it as well.

Sorry for all the questions, but it's such an interesting subject :blush:


That's true; normally the real world seems to be just a fun add-on. In many cases it's only for show ("look, it's in the real world"), without any interaction between the real world and the augmented content. That's some good writing material, thanks again :smiley:


I agree, SLAM is a very cool problem, and it is unfortunate that there isn’t an open-source one readily usable from JavaScript. (Yet. :wink:) You are correct about being able to use Emscripten to bring C++ libraries to the Web. (Not sure about Java, I don’t think LLVM has a Java front-end.) Emscripten can compile to ASM.js (a fast subset of JavaScript) or WebAssembly (an even faster alternative binary format). If you’re interested in learning more about that stuff, we have two blog posts specifically about using Emscripten to bring native computer vision libraries to the Web; one was linked to above by sebavan, and the other can be found here. Neither are anything close to a thorough treatment of the topic, but between those posts and the BabylonAR code, there should be enough information to provide at least a decent overview.
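To make the WebAssembly side of that concrete, here is about the smallest possible example, with no Emscripten involved at all: a complete WebAssembly module written out by hand as bytes, exporting a single `add` function, instantiated from JavaScript. An Emscripten build of a real C++ library works the same way in principle, just with a generated glue script in place of a hand-written byte array:

```javascript
// A complete WebAssembly module, byte by byte: it exports one function,
// add(a, b), which returns a + b as a 32-bit integer.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                                 // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,                                 // version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,   // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                 // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,   // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                           // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                     // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate synchronously (fine for a tiny module like this;
// real code should prefer the async WebAssembly.instantiate).
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 40)); // 42
```

From JavaScript's point of view, an Emscripten-compiled computer vision library is just a much bigger version of `instance.exports`.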


Sounds promising; I'll have a look at both links. Thanks a lot, Babylon.js has a really supportive community :smiley:


Hi, it's me again :smiley:

I wanted to know whether it's possible to integrate Google's Scene Viewer / model-viewer or Apple's Quick Look into Babylon for augmented reality.
Would it be possible to add some simple interaction with the Babylon GUI or JavaScript buttons?
I found this example of an AR board game, which seems to run on Apple's Quick Look.

Tsuro AR:

But I can't find anything about how it could be done.

I would like to hear some opinions.

I didn't know whether I should open a new question, so I kept it in my old one.


Dear all,

any news about SLAM integration and image/object tracking in Babylon AR?


Pinging our friend @syntheticmagus :slight_smile:

And @RaananW for the amazing WebXR offering.

Hi Papagiotis_Papadakos,

Regarding indirect SLAM integration (through WebXR), @RaananW is definitely the expert, but Babylon.js always stays on the leading edge of that. :smiley: The latest news about direct SLAM integration isn’t particularly recent, but earlier this year the Babylon team helped Microsoft open-source the MAGE-SLAM monocular SLAM algorithm. We have some ideas for how we’d like to progress from here, but those ideas aren’t attached to a timeline at this point.

Regarding image/object tracking, we released a demo showing that capability from Babylon AR earlier this year; that’s probably the most recent news we’ve shared specifically on that topic. Hope that helps!


Babylon has a few WebXR features implemented that might provide you with (part of?) what you are looking for - hit tests, anchors, and plane detection. The underlying system (Android’s AR layer) is the one responsible for the actual SLAM implementation, and we are taking whatever the system can provide us with. You can read about the features here - WebXR Augmented Reality Features - Babylon.js Documentation . We will slowly add new features as they are added to the specs. If you need more advanced features (that are currently not available in WebXR), @syntheticmagus’s answer is the one you should be reading :slight_smile:
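For orientation, a rough sketch (not a drop-in implementation) of how the hit-test feature mentioned above can be wired up in Babylon.js, based on the documented WebXR API: the function name and the `scene`/`markerMesh` parameters are placeholders, `BABYLON` is assumed to be loaded globally, and this only actually runs in a browser on an AR-capable device, so it is defined here without being called.

```javascript
// Sketch: start an immersive-ar session in Babylon.js and enable WebXR
// hit testing so a marker mesh follows real-world surfaces. The actual
// SLAM/surface detection is done by the underlying system (e.g. ARCore).
async function setupAR(scene, markerMesh) {
  const xr = await scene.createDefaultXRExperienceAsync({
    uiOptions: { sessionMode: "immersive-ar" },
  });

  // Enable the hit-test feature through the features manager.
  const hitTest = xr.baseExperience.featuresManager.enableFeature(
    BABYLON.WebXRHitTest, "latest"
  );

  // On every hit-test result, move the marker mesh to where the ray
  // from the center of the screen meets a detected surface.
  hitTest.onHitTestResultObservable.add((results) => {
    if (results.length) {
      results[0].transformationMatrix.decompose(
        markerMesh.scaling,
        markerMesh.rotationQuaternion,
        markerMesh.position
      );
    }
  });

  return xr;
}
```

Anchors and plane detection are enabled the same way, via `featuresManager.enableFeature`, as described in the linked documentation.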