Ahh, you have Firestorm and Singularity experience. Cool. I don't. SO, yeah, at least SOME of the area/region around your current avatar position… needs to get sent in SOME format… to those clients.
I would think… that approximately the same data transfer speeds would be available in both Singularity and with JS sockets. But after that data arrives (possibly in separate chunks), it needs processing and then display. That processing would likely go very fast in Singularity (because it is native OS code/app). Likely slower in JS.
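To make the chunk-reassembly concern concrete, here is a minimal sketch. The framing is an assumption for illustration (4-byte big-endian length header, then payload); OpenSim's real wire protocols (LLUDP/CAPS) are different. This is the kind of per-byte buffer shuffling that native code does cheaply and JS does with more overhead:

```javascript
// Hypothetical reassembler for binary messages arriving in chunks over a
// WebSocket. Assumed framing: 4-byte big-endian length, then payload.
// Illustrative only -- not OpenSim's actual protocol.
class ChunkAssembler {
  constructor(onMessage) {
    this.onMessage = onMessage; // called with a Uint8Array per complete message
    this.buffer = new Uint8Array(0);
  }

  // Append a newly arrived chunk, then emit any complete messages.
  push(chunk) {
    const merged = new Uint8Array(this.buffer.length + chunk.length);
    merged.set(this.buffer);
    merged.set(chunk, this.buffer.length);
    this.buffer = merged;

    while (this.buffer.length >= 4) {
      const view = new DataView(this.buffer.buffer, this.buffer.byteOffset);
      const len = view.getUint32(0); // big-endian payload length
      if (this.buffer.length < 4 + len) break; // wait for more data
      this.onMessage(this.buffer.slice(4, 4 + len));
      this.buffer = this.buffer.slice(4 + len);
    }
  }
}

// In a browser page you would wire it to a socket, roughly:
//   const ws = new WebSocket("wss://example.invalid/region"); // hypothetical URL
//   ws.binaryType = "arraybuffer";
//   const asm = new ChunkAssembler(msg => console.log("message bytes:", msg.length));
//   ws.onmessage = ev => asm.push(new Uint8Array(ev.data));
```

Messages can straddle chunk boundaries, which is why the assembler keeps a running buffer instead of assuming one message per `onmessage` event.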
Then, display. Singularity… likely renders via OpenGL or Direct3D, or a similar high-capacity, high-performance canvas. JS, installing the same processed data into a WebGL canvas… maybe much, much slower. And then run-time operations/collision-processing/physics? Erf.
Can I ask… what makes you feel this way? Hope?
There is a reason (or many) why the U.S. Army bailed on the idea of a WebGL OpenSim viewer, and why none exist yet, after many years of OpenSim dev. Perhaps you can find out that reason… on some OpenSim forums.
One interesting test… might be to ask the Firestorm and Singularity dev communities… about using a WebGL canvas for THEIR display system. I think both communities would reply with "Are you out of your mind?" They might question WHY they should consider changing from full-power OpenGL/DirectX… to limited-power WebGL.
And even if they could/would try a "hybrid" system like that, YOU are only part-way to your goal with that action. Firestorm/Singularity would still be OS-platform-specific (not cross-platform or lightweight). The same amount of display pre-processing would be needed whether the client used an OpenGL canvas or a WebGL canvas.
And after the load… chances are that the WebGL canvas operates slower than the OpenGL canvas. Perhaps MUCH slower.
I dunno. I'm speculating, of course. It seems to me… that there is little chance of a perf improvement… using a WebGL renderer on current clients, or using an all-JS client with WebGL (a webpage). Ease-of-use improvement with OpenSim-in-a-web-browser… definitely true. Would it be plausible/feasible/practical? I have doubts. But what do I know?
The Unity-to-WebGL system… is a worthy case study, I would suspect. It converts SOMETHING into a WebGL/JS file/scene, but it is not interactive with Unity after the export. This is what I thought about… for 500blog. You would not be allowed to use the grid as if you were using Singularity or Firestorm, but you could "see" a "WebGL representation" of an OpenSim grid.
You might tell a server like 500blog… "please fetch grid blabblahfoo, and render it in WebGL, thanks".
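Just to picture what that request might look like on the wire: here is a purely hypothetical message shape. None of these field names are a real 500blog, OpenSim, or HyperGrid API; it's only a sketch of the idea.

```javascript
// Hypothetical "fetch and render" request a browser page might send to a
// server like 500blog. All field names are made up for illustration.
function buildFetchGridRequest(gridName) {
  return JSON.stringify({
    action: "fetchGrid",
    grid: gridName,   // e.g. "blabblahfoo"
    render: "webgl",  // ask the server to hand back a WebGL/BJS scene
  });
}

// A page could then POST it somewhere, roughly:
//   fetch("https://example.invalid/api", {
//     method: "POST",
//     body: buildFetchGridRequest("blabblahfoo"),
//   });
```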
But what good/fun is that? I dunno. Maybe we could click on avatars and start chats with far-users, but they can't see us… unless 500blog starts "simulating" an OpenSim client. Likely severely limited features, and slow JS event/collision/physics processing… in the local WebGL rendering (your view/scene).
Then, think about 500blog… listening for "requests" arriving, such as "fetch grid" requests from "out there". 500blog would need to "package" its current BJS scene… making it look like a HyperGrid package, and then send it off to the requestor… as if 500blog IS a HyperGrid server. ERF! I suppose 500blog would ACTUALLY BE a HyperGrid server then, wouldn't it?
Lots of work. High potential for disappointment at the end of the rainbow.
Know what I "feel"? I feel I need to shut up for a while, and listen for comments from smarter people than I.