I’m doing an experiment where I take a reflection probe and move its position incrementally through a volume, then pass that cube texture to a shader that gets the average color of that location and reduces it to one pixel. I then read that value and add it to my buffer.
I have a prototype that is using a pseudo-3D texture, but I would like to use a real 3D buffer and sampler, as I think that would be easier to control.
The problems I am having with that are: one, it does not seem like the reflection probe sees fog, which I was hoping to use to make the emissive colors on meshes fall off during the baking. Two, there is not very much information on visualizing 3D textures for debugging them, and I was hoping someone had an example somewhere. Basically, something like two planes I can slide through the volume that sample the 3D texture at that plane?
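For the slice-plane idea, here is a minimal sketch of what a debug material could look like, assuming WebGL2 (so `sampler3D` is available in GLSL). The shader and uniform names are made up for illustration, and the Babylon wiring is left in comments so the sketch stays self-contained:

```javascript
// GLSL for a debug plane that shows one slice of a 3D texture (WebGL2).
// Names (volumeTex, slice) are illustrative, not from the original project.
const sliceVertexShader = `
    precision highp float;
    attribute vec3 position;
    attribute vec2 uv;
    uniform mat4 worldViewProjection;
    varying vec2 vUV;
    void main() {
        vUV = uv;
        gl_Position = worldViewProjection * vec4(position, 1.0);
    }`;

const sliceFragmentShader = `
    precision highp float;
    precision highp sampler3D;
    varying vec2 vUV;
    uniform sampler3D volumeTex;
    uniform float slice; // 0..1 w coordinate of the slice to show
    void main() {
        gl_FragColor = texture(volumeTex, vec3(vUV, slice));
    }`;

// Map the slide-plane's world-space position into the 0..1 slice coordinate.
// volumeMin/volumeMax are the assumed bounds of the baked volume on that axis.
function sliceCoord(planePos, volumeMin, volumeMax) {
    const t = (planePos - volumeMin) / (volumeMax - volumeMin);
    return Math.min(1, Math.max(0, t));
}

// Babylon wiring (sketched as comments; check names against the docs):
// BABYLON.Effect.ShadersStore["sliceVertexShader"] = sliceVertexShader;
// BABYLON.Effect.ShadersStore["sliceFragmentShader"] = sliceFragmentShader;
// const mat = new BABYLON.ShaderMaterial("slice", scene, "slice", {
//     attributes: ["position", "uv"],
//     uniforms: ["worldViewProjection", "slice"],
//     samplers: ["volumeTex"],
// });
// mat.setTexture("volumeTex", my3DTexture);
// mat.setFloat("slice", sliceCoord(plane.position.z, volMin, volMax));
```

Driving `slice` from the plane's position each frame gives the "slide a plane through the volume" behavior.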
Oh, and is it possible to force more levels of mipmaps? Ideally I would just be able to request something like mip32 and reduce a 32x32 image to one pixel that way?
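On the mip question: a full mip chain halves each level, so a 32x32 texture already bottoms out at 1x1 at mip level log2(32) = 5; there are no extra levels to force beyond that. If the chain isn't exposed where it's needed, the same reduction can be done on the CPU from a readPixels buffer. A sketch, assuming tightly packed RGBA8 data:

```javascript
// Average an entire RGBA8 image down to a single pixel on the CPU.
// Equivalent to reading the 1x1 mip level of an N x N texture.
function averageToOnePixel(rgba, width, height) {
    const sums = [0, 0, 0, 0];
    const count = width * height;
    for (let i = 0; i < count * 4; i += 4) {
        sums[0] += rgba[i];       // red
        sums[1] += rgba[i + 1];   // green
        sums[2] += rgba[i + 2];   // blue
        sums[3] += rgba[i + 3];   // alpha
    }
    return sums.map(s => Math.round(s / count)); // [r, g, b, a]
}
```

This is the fallback path; sampling the smallest mip on the GPU avoids the readback entirely when it's available.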
It seems that I’m not building or sampling the 3d texture correctly.
Essentially, after the bake process is done, the slice planes should at the very least display red in their centers, not what they are currently showing. What they display also does not make sense to me, so I assume that either, one, I’m making the 3D texture incorrectly, or, two, I’m sampling it incorrectly afterwards.
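One thing worth double-checking when slices come out scrambled: WebGL's `texImage3D` (and therefore a 3D texture built from a flat buffer) expects x to vary fastest, then y, then z. A sketch of the expected indexing, with the Babylon call left as a comment since the exact signature should be checked against the docs:

```javascript
// WebGL layout for a flat 3D buffer: index = x + y*width + z*width*height,
// times 4 components for RGBA. A mismatch here shows up as shuffled slices.
function writeVoxel(buffer, x, y, z, width, height, rgba) {
    const i = 4 * (x + y * width + z * width * height);
    buffer[i] = rgba[0];
    buffer[i + 1] = rgba[1];
    buffer[i + 2] = rgba[2];
    buffer[i + 3] = rgba[3];
}

// Building the texture from the filled buffer (Babylon, hedged sketch):
// const tex = new BABYLON.RawTexture3D(buffer, size, size, size,
//     BABYLON.Engine.TEXTUREFORMAT_RGBA, scene, false, false,
//     BABYLON.Texture.TRILINEAR_SAMPLINGMODE);
```

If the bake loop walks the probe in a different axis order than the writes, swapping which loop variable maps to x/y/z here is usually the fix.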
The scene to bake is just a red emissive sphere and a black plane that blocks the red sphere when the probe is under it while collecting the data.
So it’s not the best method, but it’s not the worst either!
But this is going to be fun to use with my Doom-ish editor that I’ll be posting soon. It will be really cool to come up with some SDF methods to tailor and animate sections of the 3D texture, for things like glitching lights and effects on walls.
@Evgeni_Popov can you think of a way to make this process faster other than multiple probes going at once?
Right now, if I set the 3D texture to something like 32x32x32, this takes around 10 minutes to complete, which is not ideal. Lower resolutions are quite fast, but the time factor goes up so quickly it’s kind of nuts.
I’m also wondering if it would be better to blast a ton of raycasts from my sample positions instead of using a probe, and have those raycasts bounce once and record the colors that the rays collide with.
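For what the raycast variant could look like: each sample position fires rays in many directions, and a hit bounces once using the standard mirror-reflection formula. Only the pure math is shown runnable; the picking call in the comment uses Babylon's real `scene.pickWithRay`, but the accumulation logic around it is made up for illustration:

```javascript
// Mirror reflection of direction d about unit normal n:
// r = d - 2 * dot(d, n) * n. Vectors here are plain [x, y, z] arrays.
function reflect(d, n) {
    const k = 2 * (d[0] * n[0] + d[1] * n[1] + d[2] * n[2]);
    return [d[0] - k * n[0], d[1] - k * n[1], d[2] - k * n[2]];
}

// Per ray (Babylon, hedged sketch):
// const hit = scene.pickWithRay(new BABYLON.Ray(origin, dir, maxDist));
// if (hit && hit.pickedMesh) {
//     accumulate the hit material's emissive color, then fire one more
//     ray from hit.pickedPoint along reflect(dir, surfaceNormal);
// }
```

The trade-off versus the probe is CPU picking cost per ray against one GPU capture per sample, so it likely only wins at low ray counts.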
Honestly any input on this from another brain would be really nice.
If you are not limited by the GPU, you could do the processing several times in parallel, as the captures are all independent.
For example, have 4 probes and call CaptureEmissiveVolume for each probe.
You are currently limited by the browser, which updates the display at only 60fps. If you are able to fit 4 captures in one frame, you will cut the time by a factor of 4. Of course, that will depend on your GPU.
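One way to arrange that, sketched: split the volume's z-slices across the probes so each probe walks its own range during the same frame. CaptureEmissiveVolume and the probe setup come from the original post and are not shown; only a hypothetical partitioning helper is:

```javascript
// Split `depth` z-slices as evenly as possible among `numProbes` probes.
// Returns half-open [startZ, endZ) ranges, one per probe.
function partitionSlices(depth, numProbes) {
    const chunks = [];
    const base = Math.floor(depth / numProbes);
    let start = 0;
    for (let p = 0; p < numProbes; p++) {
        // Earlier probes absorb the remainder, one extra slice each.
        const size = base + (p < depth % numProbes ? 1 : 0);
        chunks.push([start, start + size]);
        start += size;
    }
    return chunks;
}

// Usage (hedged, assuming CaptureEmissiveVolume accepts a z-range):
// partitionSlices(32, 4) -> [[0,8],[8,16],[16,24],[24,32]]
// for (const [z0, z1] of partitionSlices(32, 4)) {
//     CaptureEmissiveVolume(nextProbe(), z0, z1);
// }
```

Each probe then writes into a disjoint region of the same flat buffer, so no synchronization between them is needed.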