How can we best bring an interactive HTML5 Canvas into 3D/VR? Check out my first steps

Context: At my company, Global Liquidity, we have been building a 3D trading platform. Lately, we have been prototyping a WebXR-enabled trading experience.

Along the way, I realized how great it would be to render existing HTML charts and other visualizations within the VR environment.

For example, there is this great charting package which renders custom charts to an HTML5 Canvas.

See the examples at the bottom of that page: they are interactive, supporting mouse drag, etc.

After one day of hacking on this, I've got a TradingView chart rendering in VR on my Oculus Quest!

I tried to make a playground for this but got hung up referencing the third-party library, so I'll explain how I'm doing it so far.

I'm using my GLSG ("Scene Graph") framework, so the screen is a SceneElement. We use a sort of MVP pattern: the SceneElement is the visual object in the 3D world. It looks to its SceneElementPresenter for the updated data it is supposed to render. The data comes from an ActiveModel (the thing that pushes data updates, either on a timed interval or asynchronously when data arrives from a websocket).
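For anyone who hasn't seen GLSG, here is a hypothetical sketch of how those three pieces relate. The class and method names below are illustrative only, not the actual GLSG API:

```ts
// Illustrative sketch of the model / presenter / element relationship described above.
abstract class ActiveModel<T> {
    private presenter?: SceneElementPresenter<T>;

    attach(presenter: SceneElementPresenter<T>): void {
        this.presenter = presenter;
    }

    // Called on a timer or from a websocket callback to publish fresh data.
    protected push(data: T): void {
        this.presenter?.setData(data);
    }
}

class SceneElementPresenter<T> {
    private data?: T;
    setData(data: T): void { this.data = data; }
    getData(): T | undefined { return this.data; }
}

abstract class SceneElement<T> {
    constructor(protected presenter: SceneElementPresenter<T>) {}

    // Called every frame; reads the latest data from the presenter and renders it.
    abstract onPreRender(): void;
}
```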

Here is the Chart2DDataSource, which extends ActiveModel.

First, we create an HTML document element which the chart library will use as the attachment point for the chart it renders. Note that we do not attach it to the document, so it is never rendered visibly.

We then use the TradingView JavaScript library to create, configure and render the chart. Here we add sample data, but we could also call a web service to get the data.

Once configured, we use the chart.takeScreenshot() feature, which returns an HTMLCanvasElement.
We set this as the data object and update the Chart2DPresenter with this data.
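As a rough sketch of that flow, assuming TradingView's lightweight-charts package (the dimensions and sample data are placeholders):

```ts
import { createChart } from 'lightweight-charts';

// Detached container: the chart attaches here but is never added to the document.
const container = document.createElement('div');

const chart = createChart(container, { width: 1024, height: 512 });
const series = chart.addLineSeries();

// Sample data; in practice this could come from a web service or websocket.
series.setData([
    { time: '2019-04-11', value: 80.01 },
    { time: '2019-04-12', value: 96.63 },
    { time: '2019-04-13', value: 76.64 },
]);

// takeScreenshot() returns an HTMLCanvasElement holding the chart's current state.
const chartCanvas: HTMLCanvasElement = chart.takeScreenshot();
// This canvas becomes the data object pushed to the Chart2DPresenter.
```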

Now let's look at the Chart2D element itself, which renders the chart using the most up-to-date data in the presenter.

First, take note of line 53, because this is key…

chartImage.src = chartCanvas.toDataURL();

This 'toDataURL()' call returns a base64-encoded PNG image representing the current state of the canvas.

Take a look at onPreRender(). Every time the presenter gets new data (an HTMLCanvasElement), we do the following:

  • Create an HTMLImageElement
  • Set this image's .src property to the output of calling chartCanvas.toDataURL();
  • Then, in the image's onload(), get the rendering context of our own DynamicTexture and draw the contents of the chart image onto our texture (sketched below).
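A minimal sketch of that copy step, with chartCanvas being the latest canvas from the presenter and dynamicTexture being this element's Babylon.js DynamicTexture (both names illustrative):

```ts
// Copy the chart's offscreen canvas into the DynamicTexture via an Image round-trip.
const chartImage = new Image();
chartImage.onload = () => {
    const size = dynamicTexture.getSize();
    const ctx = dynamicTexture.getContext();
    ctx.drawImage(chartImage, 0, 0, size.width, size.height);
    // Push the updated 2D context up to the GPU texture.
    dynamicTexture.update();
};
chartImage.src = chartCanvas.toDataURL();
```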

Note that currently the Chart2DDataSource runs on a timer, making a new chart and pushing it to the presenter once a second. I imagine this update loop could run at 60 FPS, but I have not tested that.

Ok, so far, pretty cool, I think.

First question for the community: am I missing something obvious here that would allow a more direct way to do this? For example, since the DynamicTexture rendering context appears to be an HTML5 Canvas, can I set up the chart package to render directly to that canvas, skipping the image copy?

Next question: how should I go about getting full interactivity on these canvases? Take a look at those example charts again. See how mouse movement over them causes all sorts of interactivity. How can we bring that interactivity into the 3D/VR world? We can intercept the click coordinate on the render surface and then transform it to a coordinate on the chart's native HTMLCanvasElement. Can we use JavaScript to transfer that mouse/touch info to the canvas, so that the canvas updates and the updated images are streamed back into the 3D world using the current technique? I suspect we can.

Does anyone feel inclined to help with this? I intend to open source all of it as part of my open-sourced GLSG framework.

I think that this ability to bring arbitrary web-based content into 3D/VR is a key building block of many "killer apps" that are waiting to be built.

Hello and welcome!

I don't know, but if it's possible, this is an excellent idea!

For interactivity, I think you could capture the events with scene.onPointerObservable and then send them back to your chart canvas. It is actually possible to create "fake" mouse events. This is how I did it for my pointer events polyfill: https://github.com/deltakosh/handjs/blob/master/src/hand.base.js#L82
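A rough sketch of what that forwarding might look like, assuming the mesh showing the chart is called chartPlane and the chart draws into chartCanvas (both names are placeholders); whether the chart actually reacts depends on which element the library listens on:

```ts
scene.onPointerObservable.add((pointerInfo) => {
    if (pointerInfo.type !== BABYLON.PointerEventTypes.POINTERMOVE) {
        return;
    }
    const pick = pointerInfo.pickInfo;
    if (!pick || !pick.hit || pick.pickedMesh !== chartPlane) {
        return;
    }
    // UV coordinates (0..1) of the hit point on the plane.
    const uv = pick.getTextureCoordinates();
    if (!uv) {
        return;
    }
    // Map UV to pixel coordinates on the hidden chart canvas (V axis is flipped).
    const x = uv.x * chartCanvas.width;
    const y = (1 - uv.y) * chartCanvas.height;
    // Dispatch a synthetic mouse event to the canvas the chart library renders into.
    chartCanvas.dispatchEvent(new MouseEvent('mousemove', {
        clientX: x,
        clientY: y,
        bubbles: true,
    }));
});
```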

Hey thanks for the pointer on mouse events. :wink:

I'll dig in to DynamicTexture and see what I can find.

Imagine all the possibilities here for VR UI design. I think one of the best uses for a VR space is as a place to composite 2D content while in a 3D environment. For example, Bigscreen (for Oculus/Vive) lets you bring your own desktop computer screen into VR, where you project it onto a wall and where others can log in and see it with you.

I want to make a generalized solution for compositing 2D HTML content, starting with the HTML5 Canvas, into 3D/VR scenes. I'd love to have access to all of d3.js within VR. Imagine a procedural skydome using d3. d3 renders to SVG, so it can probably render at very high detail.


Love the idea!


I would drop SVG and stick with a chart library that renders to canvas (like D3) - then you just pop that canvas onto a DynamicTexture on a mesh/plane. Warning: it's hard on the eyes :)
https://playground.babylonjs.com/#BTZDET

You were close using a DynamicTexture yourself, but the second parameter can be a DOM canvas - I realize that is not obvious from the any-typed parameter. It would be better as (number | HTMLCanvasElement), maybe…
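Based on that suggestion, the more direct route might look something like this (a sketch, not tested; chartCanvas is assumed to be the canvas the chart library draws into):

```ts
// Wrap the chart's canvas directly in a DynamicTexture instead of copying through an Image.
const chartTexture = new BABYLON.DynamicTexture("chartTexture", chartCanvas, scene, false);

const material = new BABYLON.StandardMaterial("chartMaterial", scene);
material.diffuseTexture = chartTexture;

const plane = BABYLON.MeshBuilder.CreatePlane("chartPlane", { width: 4, height: 2 }, scene);
plane.material = material;

// Whenever the library redraws the canvas, push the pixels up to the GPU.
scene.onBeforeRenderObservable.add(() => chartTexture.update());
```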

I wrote a real-time trading platform (some 3D) for my last job - it was for a sports betting syndicate, though, not stocks. We did a lot of graphing across markets and real-time sockets. I was really impressed by your demo. I've seen one from a few years back using Babylon.js as a 3D stock ticker, but what you have, showing calls on each side and walls of resistance, was highly cool. Thanks for sharing. Great work!

Hey thanks for replying.

The reason I thought about SVG is that a quick look at a few d3.js examples shows code where they are generating an SVG. Then I realized it's possible to draw an SVG onto a canvas. Maybe d3 can render directly to an image, and that would be more direct, but there could still be some cool cases in VR where we might want to render something at 8K or even 16K, and it would be interesting to see if we can do that in real time; maybe SVG has advantages there, I'm not sure.
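For reference, rasterizing an SVG onto a canvas (which could then feed a DynamicTexture) can be done with standard web APIs; a minimal sketch:

```ts
// Rasterize an SVG element onto a canvas at whatever resolution the canvas was created with.
function drawSvgToCanvas(svg: SVGSVGElement, canvas: HTMLCanvasElement): void {
    const xml = new XMLSerializer().serializeToString(svg);
    const blob = new Blob([xml], { type: 'image/svg+xml;charset=utf-8' });
    const url = URL.createObjectURL(blob);

    const img = new Image();
    img.onload = () => {
        const ctx = canvas.getContext('2d');
        if (ctx) {
            ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
        }
        URL.revokeObjectURL(url);
    };
    img.src = url;
}
```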

Oh, that is cool that you've done something relating to markets using 3D of any sort. I am amazed that 3D tech has not yet been adopted, en masse, in the world of trading and markets. I know for sure that I am the only one (as CEO of my company Global Liquidity) working on a fundamentally new electronic trading interface paradigm. Everyone else out there, due to market forces, has been driven into this sort of cold war of UX, where everyone copies the standard and no one innovates, because they want to attract customers who are already familiar with these old-style 2D interfaces.

The underlying factor here is that non-gaming industries, especially finance, have never taken 3D graphics seriously as a tool. "Everyone knows that 3D graphics are for games…"

Right.

The reality is that these GPUs are massively parallel supercomputers specifically suited to delivering a massive amount of real-time data directly into one's brain, via the visual cortex.

If you ever have a need to work with real-time data and 3D together, take a look at the open source framework I have developed on top of Babylon.js, called GLSG. [It's on GitHub, but I don't publicly announce that yet - contact me if you, or anyone else reading this, want to play with it.]

Have you seen the most recent iteration of our 3D order book? I've been optimizing it a lot, and now we get 60 FPS, or close to it, on most modern devices. Well over a 5x increase from before.

See that big field of boxes? That's a single SolidParticleSystem. Actually, it's a GLSG construct called a VectorField: a two-dimensional SolidParticleSystem suited to binding to many different types of data. We've even made a custom PBRMaterial that uses lookups into a bitmap color palette, so we can have many colored objects in a single VectorField.
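For anyone unfamiliar with the underlying Babylon.js feature, a bare-bones SolidParticleSystem grid looks roughly like this (the data binding and palette material described above are GLSG-specific and not shown; the sine-wave heights are placeholders):

```ts
// Build a 2D grid of boxes as a single SolidParticleSystem (one mesh, one draw call).
const box = BABYLON.MeshBuilder.CreateBox("box", { size: 0.5 }, scene);
const sps = new BABYLON.SolidParticleSystem("vectorField", scene);
const rows = 50, cols = 50;
sps.addShape(box, rows * cols);
sps.buildMesh();
box.dispose(); // The template mesh is no longer needed.

sps.updateParticle = (particle) => {
    const row = Math.floor(particle.idx / cols);
    const col = particle.idx % cols;
    particle.position.set(col, 0, row);
    // Scale each box by its bound data value (placeholder values here).
    particle.scaling.y = 1 + Math.sin(col * 0.3) * Math.cos(row * 0.3);
    return particle;
};
sps.setParticles(); // Apply positions/scaling and refresh the mesh.
```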

This is the idea of GLSG: we build all these optimizations in so the end user won't need to think about them.

Anyway, if you can't tell, I'm very excited about all of this and I just wanted to share some of it… :slight_smile:
