I need a sanity check. I'm making a starfield shader as a post-process effect for a loading screen, so it can't involve much asset gathering. The effect has to fade out in a particular way when loading finishes, so it needs to run in the current game scene.
Since there's no geometry to work with, I generate a random set of star positions. I then loop over the stars in the fragment shader, project each one into screen space, and draw the star if the current pixel falls within a certain range of its projected position. I currently have the PG set up to debug the screen-projection math.
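To show what I mean concretely, here's the projection step I'm attempting, written out in plain JavaScript rather than GLSL (just a sketch with made-up helper names, assuming a column-major viewProjection matrix the way Babylon stores them):

```javascript
// Multiply a column-major 4x4 matrix (flat 16-element array) by [x, y, z, w].
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    out[row] =
      m[row] * v[0] + m[4 + row] * v[1] + m[8 + row] * v[2] + m[12 + row] * v[3];
  }
  return out;
}

// Project a world-space star position to screen UV in [0, 1].
// `viewProjection` is assumed to be projection * view.
function projectToUV(viewProjection, star) {
  const clip = transform(viewProjection, [star[0], star[1], star[2], 1.0]);
  if (clip[3] <= 0) return null;   // behind the camera, skip the star
  const ndcX = clip[0] / clip[3];  // perspective divide -> NDC in [-1, 1]
  const ndcY = clip[1] / clip[3];
  return [ndcX * 0.5 + 0.5, ndcY * 0.5 + 0.5];
}
```

In the shader I'd then compare that UV against the current pixel's UV and draw if the distance is under some threshold.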
Because it's a post-process shader, it seems like I have to pass in the camera projection matrix and then multiply it by the camera position and camera direction to get a useful value, but I know that the vec4 I get out of this still isn't useful on its own. Is there a way to create a mat4 from a vec4?
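For context on that last question, this is the sort of thing I had in mind, interpreting the vec4's xyz as a translation (just a sketch in plain JS, names made up, column-major layout like Babylon's matrices with translation in elements 12-14):

```javascript
// Build a 4x4 translation matrix (column-major flat array) from a vec4,
// using its xyz as the translation part. Is this the right idea, or am I
// misunderstanding what the vec4 from the multiply actually represents?
function mat4FromVec4(v) {
  return [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    v[0], v[1], v[2], v[3],
  ];
}
```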
My value CS should be showing something other than black, but it isn't. V definitely changes when the camera moves, and I've tried multiplying the value by various factors to rule out the numbers simply being too small.
I can't quite visualise all of this math in my head at the moment, so I'm mostly just assuming it works in this context.
Or is there just a better way to do this entirely? I could create a starfield material that generates its own geometry and render that, but the effect has to overlay the world, so I'd end up needing a render target, which complicates everything during the loading process.
How would you do this?