Hello @Kevin
Yes, I think it would be possible, but there might be some complications. I’m assuming you mean that all the scene’s triangles that are inside the camera’s viewport would be projected and rasterized in the traditional manner. Then, instead of calling the usual shadow map routines or screen-space ambient occlusion tricks, you could cast a ray based on each triangle’s surface normal, and if the ray hits something, anything, you would keep that triangle slightly unlit or darker. If the ray instead doesn’t encounter any obstacles and hits the sky (or background), the triangle could be colored according to where it hits the sky (a texture lookup). One could go even further and send the ray in a slightly random direction (within a cone of directions centered around the surface normal), then attenuate the returned sky light by the dot product between the outgoing skyward-bound ray and its triangle’s surface normal.
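Just to make the idea a little more concrete, here is a rough CPU-side sketch using Babylon’s ray-picking API (for realtime you would of course do this per-pixel in a shader instead). The function names occlusionAtPoint and randomDirectionInCone are my own, purely for illustration, and the cone sampling is deliberately crude:

```ts
import { Scene, Ray, Vector3, PickingInfo } from "@babylonjs/core";

// Returns a shade factor in [0, 1] for a single surface point:
// 0 = occluded (keep the triangle darker), 1 = wide open toward the sky.
function occlusionAtPoint(scene: Scene, position: Vector3, normal: Vector3, maxDistance = 100): number {
    // Jitter the ray direction inside a cone around the surface normal
    const dir = randomDirectionInCone(normal, 0.3);

    // Offset the origin slightly so the ray doesn't immediately hit its own triangle
    const origin = position.add(normal.scale(0.001));
    const ray = new Ray(origin, dir, maxDistance);

    const hit: PickingInfo | null = scene.pickWithRay(ray);
    if (hit && hit.hit) {
        return 0; // something is in the way: darken this point
    }

    // Nothing blocked the ray, so it "hits the sky": weight the returned sky
    // light by how closely the outgoing ray followed the surface normal
    return Math.max(Vector3.Dot(dir, normal), 0);
}

// Very rough cone sampling: perturb the normal and renormalize.
// (A proper version would sample the cone uniformly or cosine-weighted.)
function randomDirectionInCone(normal: Vector3, spread: number): Vector3 {
    const jitter = new Vector3(
        (Math.random() - 0.5) * spread,
        (Math.random() - 0.5) * spread,
        (Math.random() - 0.5) * spread
    );
    return normal.add(jitter).normalize();
}
```

Instead of just returning the dot product you could also look up the sky texture in the direction the ray exits, but the shape of the idea is the same.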
I have moved my project discussion here: Path Tracing with Babylon, Background and Implementation. If you have seen the hybrid rendering section there, this is roughly the path that NVidia has taken with the rollout of RTX: they traditionally rasterize everything first, and then call ray tracing shaders on each pixel of the rasterized triangles for pixel-perfect shadows, reflections, refractions, etc.
As I mentioned in the discussion linked above, the one issue, however, is that once you have the triangles rasterized to the screen, the shadow (or ambient occlusion) rays still need access to the entire scene and all of its triangles. So unless you have a good BVH in place for that shadow-query stage, everything will become non-realtime.
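For anyone wondering what that shadow-query stage looks like, here is a rough sketch of the kind of "any hit" BVH traversal those occlusion rays need. All the type and function names here are made up for illustration, and a production version would live in a shader with a flattened node array rather than a pointer-based tree:

```ts
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const cross = (a: Vec3, b: Vec3): Vec3 =>
    [a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]];
const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

interface Triangle { v0: Vec3; v1: Vec3; v2: Vec3; }
interface AABB { min: Vec3; max: Vec3; }
interface BVHNode {
    bounds: AABB;
    left?: BVHNode;          // inner nodes point to two children...
    right?: BVHNode;
    triangles?: Triangle[];  // ...leaves hold a handful of triangles
}

// Standard ray/AABB slab test: does the ray enter the box before maxDist?
function rayHitsBox(origin: Vec3, invDir: Vec3, box: AABB, maxDist: number): boolean {
    let tMin = 0, tMax = maxDist;
    for (let i = 0; i < 3; i++) {
        const t0 = (box.min[i] - origin[i]) * invDir[i];
        const t1 = (box.max[i] - origin[i]) * invDir[i];
        tMin = Math.max(tMin, Math.min(t0, t1));
        tMax = Math.min(tMax, Math.max(t0, t1));
    }
    return tMin <= tMax;
}

// Möller–Trumbore ray/triangle intersection, returning only the hit distance.
function rayTriangleDist(origin: Vec3, dir: Vec3, tri: Triangle): number | null {
    const e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    const p = cross(dir, e2);
    const det = dot(e1, p);
    if (Math.abs(det) < 1e-8) return null; // ray is parallel to the triangle
    const invDet = 1 / det;
    const s = sub(origin, tri.v0);
    const u = dot(s, p) * invDet;
    if (u < 0 || u > 1) return null;
    const q = cross(s, e1);
    const v = dot(dir, q) * invDet;
    if (v < 0 || u + v > 1) return null;
    const t = dot(e2, q) * invDet;
    return t > 1e-6 ? t : null;
}

// "Any hit" shadow / AO query: we don't care which triangle is hit or where,
// only whether *anything* blocks the ray, so we can stop at the first hit and
// prune whole subtrees whose bounding boxes the ray misses entirely.
function isOccluded(root: BVHNode, origin: Vec3, dir: Vec3, maxDist: number): boolean {
    const invDir: Vec3 = [1 / dir[0], 1 / dir[1], 1 / dir[2]];
    const stack: BVHNode[] = [root];
    while (stack.length > 0) {
        const node = stack.pop()!;
        if (!rayHitsBox(origin, invDir, node.bounds, maxDist)) continue;
        if (node.triangles) {
            for (const tri of node.triangles) {
                const t = rayTriangleDist(origin, dir, tri);
                if (t !== null && t < maxDist) return true;
            }
        } else {
            if (node.left) stack.push(node.left);
            if (node.right) stack.push(node.right);
        }
    }
    return false;
}
```

Without that box-pruning step, every occlusion ray has to be tested against every triangle in the scene, which is exactly what pushes this approach out of realtime territory.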