Using AMD FSR with Babylon.js


  1. While building a page for some high-resolution screens, we found that GPU memory runs out easily. `setHardwareScalingLevel` can reduce GPU memory use, but it makes the scene really blurry.
  2. AMD FidelityFX Super Resolution (FSR) is a spatial upscaling technology that allows games to be rendered at a lower resolution and then upscaled to a higher resolution without a significant loss in image quality.

Existing work:

  1. The FSR homepage
  2. The FidelityFX-FSR GitHub repo
  3. A blog article about FSR
  4. Shader code on Shadertoy:
     - based on three.js
     - based on

Possible ways:

  1. Render the scene to a RenderTargetTexture, with a lower resolution than the canvas, maybe 1/2 ~ 1/4
  2. Use FSR shader to upscale the RenderTargetTexture and render the upscaled result to canvas, instead of rendering the scene to canvas.
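A rough sketch of these two steps in Babylon.js (untested; assumes an existing `engine`, `scene`, and `camera`, and uses a plain pass-through fragment shader as a stand-in for the actual FSR kernel):

```javascript
// Step 1: render the scene into a half-resolution RenderTargetTexture.
const rtt = new BABYLON.RenderTargetTexture(
    "lowRes",
    { width: engine.getRenderWidth() / 2, height: engine.getRenderHeight() / 2 },
    scene
);
rtt.renderList = scene.meshes; // everything in the scene goes into the RTT
scene.customRenderTargets.push(rtt);

// Step 2: a full-screen post process that samples the RTT and writes to the
// canvas. The fragment shader below is a plain pass-through; the real FSR
// EASU/RCAS passes from the FidelityFX-FSR repo would go here instead.
BABYLON.Effect.ShadersStore["upscaleFragmentShader"] = `
    varying vec2 vUV;
    uniform sampler2D lowResSampler;
    void main(void) {
        gl_FragColor = texture2D(lowResSampler, vUV);
    }
`;
const upscale = new BABYLON.PostProcess(
    "upscale", "upscale", [], ["lowResSampler"], 1.0, camera
);
upscale.onApply = (effect) => effect.setTexture("lowResSampler", rtt);
```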


Questions:

  1. Is what's described above possible in Babylon.js?
  2. Are events affected? For example mouse or touch inputs, ray picking, and GUI.
  3. Can Post Processes work with this? Or can this be a part of Post Processes?
  4. Is this possible with both WebGL and WebGPU engine?
  5. Can the GPU memory footprint be reduced with this?
  6. Would FPS decrease or increase if GPU memory is the bottleneck?
  7. How many draw calls would this use in addition to rendering the original scene?
  8. Is anything else affected by this?

Possible alternatives:

As it seems to be a simple shader pass, it should be totally doable in Babylon.js by creating a post process.


Thanks for your reply. Can a post process render the scene at a lower resolution while outputting at a higher resolution? Can the GPU memory footprint be reduced with this?

You can add a pass-through post-process to render the scene at a reduced resolution, then add the upscaler post-process to generate the final output at the desired resolution.

For example, to render to a texture at half the screen resolution:
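A minimal sketch of that two-post-process chain (untested; assumes an existing `camera`, and `"fsrUpscale"` is a hypothetical shader name whose fragment code would need to be registered first):

```javascript
// Pass-through post process with a ratio of 0.5: everything upstream of it
// (the scene itself) now renders into half-resolution buffers.
const halfRes = new BABYLON.PassPostProcess(
    "halfRes", 0.5, camera, BABYLON.Texture.BILINEAR_SAMPLINGMODE
);

// Upscaler: runs at full resolution (ratio 1.0) and receives the half-res
// result as its input. Its fragment code (e.g. FSR's EASU pass) would be
// registered in BABYLON.Effect.ShadersStore["fsrUpscaleFragmentShader"]
// before this line.
const upscale = new BABYLON.PostProcess(
    "fsrUpscale", "fsrUpscale", [], null, 1.0, camera
);
```

Because the upscaler's ratio is 1.0, Babylon allocates its output at the full canvas size, so only the final pass pays the full-resolution memory cost.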


There's a bunch of shaders that may be useful here: Anime4K/glsl at master · bloc97/Anime4K · GitHub

I linked those specifically because they're meant for real-time upscaling. Also, I think the NVIDIA and AMD upscalers have some trained models in the hardware drivers, which encode scene information as hints into the output image to run through the models. I think the ray-tracer denoisers work like this too. So to implement that in Babylon, I'd imagine adding some information in vertex attributes that gets encoded into the render output, which you then use to train an ML upscaler. Probably not a reasonable goal, imo.

I think the shader upscaling is not too different from anti-aliasing passes. Also, Babylon even has a couple of "sharpen" effects that could be a good foundation for this, see Search Page | Babylon.js Documentation. But the real question is: would this provide a benefit? It may be that you are just downsampling the render while the high-quality assets are still sitting in GPU memory. It seems like that could cause more overhead, not less; could totally be wrong on this though.
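For reference, the built-in sharpen effect mentioned above attaches in a couple of lines (untested sketch; assumes an existing `camera`). It could serve as a starting point for an RCAS-style sharpening pass:

```javascript
// Built-in Babylon.js sharpen post process at full resolution (ratio 1.0).
const sharpen = new BABYLON.SharpenPostProcess("sharpen", 1.0, camera);
sharpen.edgeAmount = 0.3;  // how strongly edges are accentuated
sharpen.colorAmount = 1.0; // how much of the original color is kept
```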