I want to create a custom web app with a 3D scene, and within that scene I want to build a meeting experience. Basically, let’s say I have a table in the scene; whoever sits at the table can join a Teams meeting.
Most of the resources I’m finding are about embedding a web app or website into the Microsoft Teams app, but I want it the other way around (Teams into my website, so the meeting feature lives inside my app).
You can’t directly embed a website inside a canvas experience. There are a few ways to approximate it. The first is to create an HTML overlay on top of your canvas and transform it based on the camera. The main catch is that this HTML layer knows nothing about the WebGL depth buffer - it will always render on top (or, more precisely, it will always be positioned according to its CSS definition), so scene geometry can never occlude it.
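A minimal sketch of that overlay approach, assuming a row-major view-projection matrix and no particular engine: each frame, project the table’s world position to CSS pixel coordinates and move an absolutely positioned element (say, a “Join meeting” button) there. As noted, this does nothing about occlusion.

```javascript
// Project a world-space point through a 4x4 view-projection matrix
// (row-major, an assumption - adapt to your engine's convention)
// into CSS pixel coordinates. Returns null if the point is behind the camera.
function projectToScreen(point, viewProj, width, height) {
  const [x, y, z] = point;
  const clip = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    clip[row] = viewProj[row * 4 + 0] * x +
                viewProj[row * 4 + 1] * y +
                viewProj[row * 4 + 2] * z +
                viewProj[row * 4 + 3] * 1;
  }
  const w = clip[3];
  if (w <= 0) return null; // behind the camera - hide the overlay instead
  const ndcX = clip[0] / w;
  const ndcY = clip[1] / w;
  return {
    left: (ndcX * 0.5 + 0.5) * width,
    top: (1 - (ndcY * 0.5 + 0.5)) * height, // CSS y grows downward
  };
}

// Per frame, in the browser, something like (names are placeholders):
//   const pos = projectToScreen(tablePos, camera.viewProjection,
//                               canvas.clientWidth, canvas.clientHeight);
//   if (pos) {
//     joinButton.style.left = pos.left + 'px';
//     joinButton.style.top  = pos.top + 'px';
//   }
```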
Another way (and the only one that works in WebXR, for example) is to stream a browser as a texture into your experience and present it on a plane. The FrameVR devs use this method to stream websites into their metaverse rooms. They use a paid service (I sadly don’t remember its name).
What does “embed” mean to you? And what does the action “…who sits at the table…” involve? Do you have controllable characters in the scene? Do the other users control them? Do they choose whether to sit? Or are the characters just prebuilt animations showing that someone has joined the meeting?
How do your app and scene work in conjunction with the video feeds of MS Teams? What exactly are you expecting to integrate?
How can you approach this?
The main thing you need to research is: does MS Teams have an API? If it does, start thinking about which aspects of the service you want to leverage in your app, then check whether those things are even exposed through the API. If they are, build prototypes testing them with the most minimal 3D content possible.
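To partly answer my own question: Microsoft’s supported route for joining a Teams meeting from a custom web app is the Azure Communication Services Calling SDK (Teams interoperability). A hedged sketch, assuming you already mint an ACS access token server-side and have the Teams meeting join link (both are placeholders here, not something the SDK provides for you):

```javascript
// Sketch: join an existing Teams meeting from a custom web app via the
// Azure Communication Services Calling SDK. The package names are the real
// npm packages; token and meetingLink must come from your own backend.
async function joinTeamsMeeting(meetingLink, token) {
  // Dynamic imports so this module can load even before the SDK is installed.
  const { CallClient } = await import('@azure/communication-calling');
  const { AzureCommunicationTokenCredential } = await import('@azure/communication-common');

  const callClient = new CallClient();
  const credential = new AzureCommunicationTokenCredential(token);
  const callAgent = await callClient.createCallAgent(credential);

  // Join by Teams meeting link; the returned call object exposes mute,
  // hangUp, and remote participant/video-stream events you can feed into
  // the 3D scene (e.g. as video textures on planes near the table).
  return callAgent.join({ meetingLink });
}
```

From there, the “sitting at the table” interaction in the scene just becomes the trigger that calls this function.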