Ahh, you have Firestorm and Singularity experience. Cool. I don’t. SO, yeah, at least SOME of the area/region around your current avatar position… needs to get sent in SOME format… to those clients.
I would think… that approximately the same data-transfer speeds would be available in both Singularity and with JS WebSockets. But after that data arrives (possibly in separate chunks), it needs processing and then display. That processing would likely go very fast in Singularity (because it is native OS code). Likely slower in JS.
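Just to make that "arrives in chunks, then gets processed" step concrete, here is a minimal sketch of what the JS side might do. Everything here is invented for illustration — the chunks stand in for WebSocket message payloads, and the JSON shape (regionName, prims) is hypothetical, not any real OpenSim wire format:

```javascript
// Hypothetical reassembly of region data that arrived in separate chunks.
// The "chunks" stand in for WebSocket message payloads; the JSON shape
// (regionName, prims) is invented for illustration.
function assembleRegion(chunks) {
  // Concatenate the raw text of every chunk in arrival order...
  const raw = chunks.join("");
  // ...then pay the JS-side parsing cost in one go.
  return JSON.parse(raw);
}

// Example: one region description split across three "messages".
const chunks = [
  '{"regionName":"blabblah',
  'foo","prims":[{"id":1},',
  '{"id":2}]}'
];
const region = assembleRegion(chunks);
console.log(region.regionName);   // "blabblahfoo"
console.log(region.prims.length); // 2
```

The reassembly itself is cheap; it's the parsing and the scene-building after it that a native client would chew through much faster.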
Then, display. Singularity… likely renders via OpenGL or Direct3D, or a similar high-capacity, high-performance canvas. JS, installing the same processed data into a webGL canvas… maybe much, much slower. And then run-time operations/collision-processing/physics? erf.
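And "installing data into a webGL canvas" isn't free either: before anything reaches the GPU, JS has to flatten its parsed objects into typed arrays for calls like gl.bufferData. A toy sketch — the vertex layout here is invented; a real viewer would also pack normals, UVs, and so on:

```javascript
// Flatten an array of parsed vertex objects ({x, y, z}) into the
// Float32Array that a WebGL call like gl.bufferData() expects.
// The vertex format is invented for illustration.
function toVertexBuffer(vertices) {
  const buf = new Float32Array(vertices.length * 3);
  vertices.forEach((v, i) => {
    buf[i * 3]     = v.x;
    buf[i * 3 + 1] = v.y;
    buf[i * 3 + 2] = v.z;
  });
  return buf;
}

// In a browser, this buffer would then be uploaded once per mesh:
//   gl.bufferData(gl.ARRAY_BUFFER, toVertexBuffer(verts), gl.STATIC_DRAW);
const verts = [{x: 0, y: 0, z: 0}, {x: 1, y: 0, z: 0}, {x: 0, y: 1, z: 0}];
console.log(toVertexBuffer(verts).length); // 9
```

Doing that flattening in JS, per mesh, for a whole region… is exactly the kind of pre-process load where a native client has the edge.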
Can I ask… what makes you feel this way? Hope?
There is a reason (or many) why the U.S. Army bailed on the idea of a webGL OpenSim viewer, and why none exist yet, after many years of OpenSim dev. Perhaps you can find out that reason… on some OpenSim forums.
One interesting test… might be to ask the Firestorm and Singularity dev communities… about using a webGL canvas for THEIR display system. I think both communities would reply with “Are you out of your mind?” They might question WHY they should consider changing from full-power OpenGL/DirectX… to limited-power webGL.
And even if they could/would try a “hybrid” system like that, YOU are only part-way to your goal with that action. Firestorm/Singularity would still be OS platform-specific (not cross-platform or lightweight). The same amount of display pre-processing would be needed whether the client used an OpenGL canvas or a webGL canvas.
And after the load… chances are that the webGL canvas operates slower than the OpenGL canvas. Perhaps MUCH slower.
I dunno. I’m speculating, of course. It seems to me… that there is little chance of perf improvement… using a webGL renderer in current clients, or using an ALL-JS client with webGL (webpage). Ease-of-use improvement with OpenSim-in-a-web-browser… definitely true. Would it be plausible/feasible/practical? I have doubts. But, what do I know?
The Unity-to-webGL system… is a worthy case-study, I would suspect. It converts SOMETHING into a webGL/JS file/scene. But it is not interactive with Unity after the export. This is what I thought about… for 500blog. You would not be allowed to use the grid as-if you were using Singularity or Firestorm, but you could “see” a “webGL representation” of an OpenSim grid.
You might tell a server like 500blog… “please fetch grid blabblahfoo, and render-it in webGL, thanks”.
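If 500blog ever grew such a thing, the client side of that “please fetch grid blabblahfoo” request might be as small as this — the endpoint path, parameter name, and server URL are all hypothetical, invented for illustration:

```javascript
// Hypothetical client request asking a 500blog-style server to fetch
// a grid and return a webGL-renderable scene description.
// The "/render" endpoint and "grid" parameter are invented.
function buildGridRequest(baseUrl, gridName) {
  const url = new URL("/render", baseUrl);
  url.searchParams.set("grid", gridName);
  return url.toString();
}

console.log(buildGridRequest("https://500blog.example", "blabblahfoo"));
// "https://500blog.example/render?grid=blabblahfoo"
```

The request is the trivial part, of course — all the hard work sits behind it, on the server doing the fetching and the conversion.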
But what good/fun is that? I dunno. Maybe we could click on avatars and start chats with far-users, but they can’t see us… unless 500blog starts “simulating” an OpenSim client. Likely severely-limited features, and slow JS event/collision/physics processing… in the local webGL rendering (your view/scene).
Then, think about 500blog… listening for “requests” arriving, such as “fetch grid” requests from “out there”. 500blog would need to “package” its current BJS scene… making it look like a HyperGrid package, and then send it off to the requestor… as if 500blog IS a HyperGrid server. ERF! I suppose 500blog would ACTUALLY BE a HyperGrid server, then, wouldn’t it?
Lots of work. High potential for disappointment at the end of the rainbow.
Know what I “feel”? I feel I need to shut up for a while, and listen for comments from smarter people than I.