This is an unusual question. If we had a scene with two cube meshes and wanted to screenshot each one separately as a PNG with alpha, we could simply hide them in sequence and take a screenshot of each. But what if we wanted to render only the pixels of one cube that are not occluded by the other, for a given camera position? The idea is that these screenshots could later be layered in code back into a single image. I think this could be done with a postprocessing shader, but I haven't really tried yet.
Expanding on the postprocessing idea a bit: take the main render output, then render the scene a second time through a simple color shader that outputs pure black or white (white for the target mesh, black for everything else). That black-and-white pass becomes the opacity input for the final composite, which combines the main render with the opacity render and hides every pixel that isn't white in the mask. I've done something similar in ThreeJS, but I'm not sure of the Babylon way.
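To illustrate just the compositing step, here's a minimal sketch in plain JavaScript. It assumes you already have the main render and the black/white mask render as flat RGBA pixel buffers (e.g. read back from a canvas or a render target); `applyMask` is a hypothetical helper for this example, not a Babylon or ThreeJS API:

```javascript
// Composite a main RGBA render with a black/white occlusion mask:
// pixels where the mask is white keep their color and alpha;
// everything else becomes fully transparent.
// Both inputs are flat RGBA arrays, 4 bytes per pixel.
function applyMask(mainPixels, maskPixels) {
  const out = new Uint8ClampedArray(mainPixels.length);
  for (let i = 0; i < mainPixels.length; i += 4) {
    // Treat any sufficiently bright mask pixel as "white".
    const visible = maskPixels[i] > 127;
    out[i]     = mainPixels[i];
    out[i + 1] = mainPixels[i + 1];
    out[i + 2] = mainPixels[i + 2];
    out[i + 3] = visible ? mainPixels[i + 3] : 0; // hide masked areas
  }
  return out;
}

// Tiny 2-pixel example: only the first pixel is white in the mask.
const main = new Uint8ClampedArray([255, 0, 0, 255,  0, 255, 0, 255]);
const mask = new Uint8ClampedArray([255, 255, 255, 255,  0, 0, 0, 255]);
const result = applyMask(main, mask);
// result: first pixel keeps its alpha (255), second pixel's alpha is 0
```

In a real setup the same logic would run on the GPU as a postprocess that samples both textures, but doing it CPU-side on the screenshot pixels works too if it's a one-off capture.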
Looks like someone posted a solution elsewhere: