Context: At my company, Global Liquidity, we have been building a 3D trading platform. Lately, we have been prototyping a WebXR-enabled trading experience.
Along the way, I realized how useful it would be to render existing HTML charts and other visualizations inside the VR environment.
For example, there is this great charting package, which renders custom charts to an HTML5 canvas.
See the examples at the bottom of that page: they are interactive, supporting mouse drag, etc.
After one day of hacking on this, I've got a TradingView chart rendering in VR on my Oculus Quest!
I tried to make a playground for this but got hung up referencing the third-party library, so I'll explain how I'm doing it so far.
I'm using my GLSG "Scene Graph" framework, so the screen is a SceneElement. We use a sort of MVP pattern: the SceneElement is the visual object in the 3D world, and it looks to its SceneElementPresenter for the updated data it is supposed to render. That data comes from an ActiveModel, the thing which pushes data updates either on a timed interval or asynchronously when data arrives over a websocket.
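For anyone unfamiliar with GLSG, here is a minimal sketch of how those three pieces relate. The interface and method names below are hypothetical, inferred from the description above rather than copied from the actual framework:

```ts
// Hypothetical shapes -- GLSG's real interfaces may differ.
interface SceneElementPresenter<T> {
    updateCurrentData(data: T): void;   // called by the ActiveModel
    getCurrentData(): T | undefined;    // polled by the SceneElement
}

abstract class ActiveModel<T> {
    constructor(protected presenter: SceneElementPresenter<T>) {}

    // Push fresh data to the presenter, on a timer or from a websocket.
    protected publish(data: T): void {
        this.presenter.updateCurrentData(data);
    }
}

abstract class SceneElement<T> {
    constructor(protected presenter: SceneElementPresenter<T>) {}

    // Called once per frame; reads the latest data and re-renders.
    abstract onPreRender(): void;
}
```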
Here is the Chart2DDataSource, which extends ActiveModel.
First we create an HTML element which the chart library will use as the attachment point for the chart it renders. Note that we never attach this element to the document, so the chart renders invisibly.
We then use the TradingView JavaScript library to create, configure, and render the chart. Here we add sample data, but we could also call a web service to fetch it.
Once configured, we call chart.takeScreenshot(), which returns an HTMLCanvasElement.
We set this canvas as the data object and update the Chart2DPresenter with it.
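Put together, the data source looks roughly like this. This is a condensed sketch, assuming TradingView's lightweight-charts package (createChart, addLineSeries, takeScreenshot) and the hypothetical ActiveModel shape from above; the dimensions and sample data are placeholders:

```ts
import { createChart } from 'lightweight-charts';

class Chart2DDataSource extends ActiveModel<HTMLCanvasElement> {
    start(): void {
        // Attachment point for the chart. Never added to the document,
        // so everything renders off-screen.
        const container = document.createElement('div');

        const chart = createChart(container, { width: 1024, height: 512 });
        const series = chart.addLineSeries();
        series.setData([
            { time: '2019-04-11', value: 80.01 },
            { time: '2019-04-12', value: 96.63 },
            { time: '2019-04-13', value: 76.64 },
        ]);

        // Once a second, snapshot the chart and push the resulting
        // HTMLCanvasElement to the presenter.
        setInterval(() => {
            this.publish(chart.takeScreenshot());
        }, 1000);
    }
}
```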
Now let's look at the Chart2D element itself, which renders the chart using the most up-to-date data in the presenter.
First, take note of line 53, because this is key:
chartImage.src = chartCanvas.toDataURL();
This toDataURL() call returns a base64-encoded PNG image (as a data URL) representing the current state of the canvas.
Take a look at onPreRender(). Every time the presenter gets new data (an HTMLCanvasElement), we do the following (condensed in the sketch after this list):
- Create an HTMLImageElement
- Set this image's .src property to the output of calling chartCanvas.toDataURL().
- Then, in the image's onload handler, we get the rendering context of our own DynamicTexture and draw the contents of the chart image onto our texture.
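Condensed, that sequence looks something like this, assuming a Babylon.js DynamicTexture held in this.dynamicTexture (I attach onload before assigning src so the handler cannot miss the load event):

```ts
onPreRender(): void {
    const chartCanvas = this.presenter.getCurrentData();
    if (!chartCanvas) { return; }

    const chartImage = new Image();
    chartImage.onload = () => {
        // Blit the decoded PNG onto the DynamicTexture's 2D context.
        const ctx = this.dynamicTexture.getContext();
        ctx.drawImage(chartImage, 0, 0);
        this.dynamicTexture.update();
    };
    // The key line: serialize the chart canvas to a base64 PNG data URL.
    chartImage.src = chartCanvas.toDataURL();
}
```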
Note that currently the Chart2DDataSource runs on a timer, making a new chart screenshot and pushing it to the presenter once a second. I imagine this update loop could run at 60 FPS, but I have not tested that.
Ok, so far, pretty cool, I think.
First question for the community: am I missing something obvious here, meaning there is a more direct way to do this? For example, since the DynamicTexture's rendering context appears to be backed by an HTML5 canvas, can I set up the chart package to render directly to that canvas, skipping the image copy?
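One possible shortcut I have not tried yet: drawImage() accepts another canvas directly as its source, so even without rendering the chart straight into the DynamicTexture, it may be possible to drop the PNG/Image round-trip entirely:

```ts
// Untested sketch: copy canvas-to-canvas, skipping toDataURL() and the
// intermediate HTMLImageElement.
onPreRender(): void {
    const chartCanvas = this.presenter.getCurrentData();
    if (!chartCanvas) { return; }

    const ctx = this.dynamicTexture.getContext();
    ctx.drawImage(chartCanvas, 0, 0);
    this.dynamicTexture.update();
}
```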
Next question: how should I go about getting full interactivity on these canvases? Take a look at those example charts again; mouse movement over them drives all sorts of interactivity. How can we bring that into the 3D/VR world? We can intercept the click coordinate on the render surface and transform it into a coordinate on the chart's native HTMLCanvasElement. Can we then use JavaScript to forward that mouse/touch info to the canvas, so that the canvas updates and the updated images stream back into the 3D world using the current technique? I suspect we can.
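A rough sketch of what I have in mind, assuming Babylon.js picking (PickingInfo.getTextureCoordinates()) and the detached chart container from earlier; chartScreenMesh, chartContainer, and the dimensions are placeholders:

```ts
const CHART_WIDTH = 1024;
const CHART_HEIGHT = 512;

scene.onPointerMove = (evt, pickInfo) => {
    if (!pickInfo?.hit || pickInfo.pickedMesh !== chartScreenMesh) { return; }

    // UV coordinates of the hit point on the screen mesh (origin bottom-left).
    const uv = pickInfo.getTextureCoordinates();
    if (!uv) { return; }

    // Flip V to get the chart's pixel space (origin top-left). Since the
    // container is detached, its bounding rect sits at (0, 0), so these
    // values double as clientX/clientY.
    const x = uv.x * CHART_WIDTH;
    const y = (1 - uv.y) * CHART_HEIGHT;

    // Replay the pointer movement on the hidden chart element; the next
    // screenshot should then include crosshair/tooltip updates.
    chartContainer.dispatchEvent(new MouseEvent('mousemove', {
        clientX: x,
        clientY: y,
        bubbles: true,
    }));
};
```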
Does anyone feel inclined to help with this? I intend to release all of it as part of my open-source GLSG framework.
I think this ability to bring arbitrary web-based content into 3D/VR is a key building block of many "killer apps" waiting to be built.