We’re trying to improve the speed of generating screenshots and noticed that there’s a method to read the pixels directly from the WebGL context instead of drawing to a canvas.
Is there any advantage to using this instead? Maybe it saves copying buffers when we’re trying to generate lots of screenshots per second?
The advantage is that you can use it after a render target texture has been generated, so it works not only for the final canvas buffer but also for intermediate renderings (e.g., in a RenderTargetTexture.onAfterRenderObservable observer). Also, it works with WebGPU, whereas for the time being the canvas can’t be read back when using the WebGPU engine (though it should be possible at some point).
It should also have better performance/behaviour (it won’t stall the rendering) because it is asynchronous, whereas reading the canvas is synchronous.
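For anyone landing here later, here’s a minimal sketch of that approach (assuming an existing RenderTargetTexture named `rtt` in your scene; the function name and logging are illustrative, not from a real app):

```ts
import { RenderTargetTexture } from "@babylonjs/core";

// Sketch: read pixels back from a render target right after it has been
// rendered, using its onAfterRenderObservable.
function captureEachFrame(rtt: RenderTargetTexture): void {
    rtt.onAfterRenderObservable.add(() => {
        // readPixels() returns a Promise in recent Babylon.js versions,
        // so the GPU read-back does not stall the render loop.
        rtt.readPixels()?.then((pixels) => {
            // `pixels` is an ArrayBufferView (RGBA bytes by default);
            // hand it off to whatever encodes/stores the screenshot.
            console.log(`captured ${pixels.byteLength} bytes`);
        });
    });
}
```

The same readPixels call works on both the WebGL and WebGPU engines, which is what makes it usable where the canvas read-back path isn’t.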