Hmmm, okay… so let's say I have a sphere, and on this sphere I have a shader that I am using to generate a set of data.
Now can I take this information from the GPU and pass it back to a canvas or a buffer?
Since the sphere technically has UVs from 0 to 1, I could essentially map all of the colors at each UV point (divided by the resolution) back onto a flat image for later use, if I can get the information to bounce back.
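In case it helps, here is a minimal sketch of one way to do the read-back with a RenderTargetTexture in Babylon.js (the `bakeSize`, `sphere`, and observer wiring are assumptions, and `readPixels` may return a promise or a plain array depending on the engine version):

```javascript
// Sketch: render only the shaded sphere into an offscreen target, then read
// the pixels back to a 2D canvas. Assumes an existing `scene` and `sphere`.
const bakeSize = 1024; // hypothetical bake resolution

const rtt = new BABYLON.RenderTargetTexture("bakeRTT", bakeSize, scene, false);
rtt.renderList = [sphere];            // only this mesh gets drawn into the target
scene.customRenderTargets.push(rtt);

scene.onAfterRenderObservable.addOnce(async () => {
    // RGBA bytes, bakeSize * bakeSize * 4 (await in case readPixels is async).
    const pixels = await rtt.readPixels();

    const canvas = document.createElement("canvas");
    canvas.width = canvas.height = bakeSize;
    const ctx = canvas.getContext("2d");
    const imageData = ctx.createImageData(bakeSize, bakeSize);
    imageData.data.set(new Uint8ClampedArray(pixels.buffer));
    ctx.putImageData(imageData, 0, 0);
    // `canvas` now holds the shader output as a flat image (WebGL rows are
    // bottom-up, so a vertical flip may be needed before saving it).
});
```

If the vertex shader positions each vertex at its UV coordinate instead of its 3D position, what lands in that target is exactly the unwrapped 0-1 UV image described above.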
I need help supporting the camera's FOV in the vertex shader so that the positions are exact, because right now I am just using hacky values to make it "work" so I can move on.
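One way around the hacky values could be to pass the camera's real view/projection matrices (which already encode the FOV) into the shader every frame; a rough sketch, where `planetMat` and the `"view"`/`"projection"` uniform names are assumptions about the custom ShaderMaterial:

```javascript
// Sketch: keep the shader's projection in sync with the actual camera FOV
// instead of hard-coded constants.
scene.onBeforeRenderObservable.add(() => {
    planetMat.setMatrix("view", camera.getViewMatrix());
    planetMat.setMatrix("projection", camera.getProjectionMatrix());
});

// For reference, Babylon's (vertical) fov maps to the projection scales as:
//   yScale = 1 / tan(fov / 2)
//   xScale = yScale / aspect
// which is what getProjectionMatrix() already bakes in, so positions derived
// from these values should line up exactly with what the camera sees.
const yScale = 1 / Math.tan(camera.fov / 2);
const xScale = yScale / engine.getAspectRatio(camera);
```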
So, in order to control what the camera sees, I need to exclude all meshes except the "target".
I have tried layer masks, but this only seems to work when I do one pass.
Is there an easy way to have a camera render only a specific mesh, besides layerMask?
One approach I tried was to toggle all the active meshes off and store which were which in an array, then re-enable all of them after I was done, but that did not seem to work either.
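For what it's worth, a sketch of that enable/disable-and-restore idea done explicitly (assuming the extra render happens outside the normal render loop, and `target` is the mesh to keep):

```javascript
// Sketch: hide everything except the target, render one frame, then restore
// each mesh to exactly the enabled state it had before.
function renderOnly(scene, target) {
    const previous = scene.meshes.map(m => ({ mesh: m, enabled: m.isEnabled() }));

    scene.meshes.forEach(m => m.setEnabled(m === target));
    scene.render(); // single pass with only the target visible

    previous.forEach(({ mesh, enabled }) => mesh.setEnabled(enabled));
}
```

Alternatively, if the extra pass goes through a RenderTargetTexture, its renderList already restricts drawing to just the meshes you put in it, with no layerMask involved.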
Line 50 is where it is applied in the shader, and line 681 is where the rotation matrix is grabbed, but for some reason this is not working.
The dark shadow should always face away from the sun, and you should never be able to see the clouds on the dark side.
Yeah, unfortunately that does not work when you rotate the mesh. But on a good note, I changed my method and now just generate the tangent-space normal map from the height map.
I also shifted all the textures to cubemaps and changed the planet to a six-plane sphere to get rid of all the artifacts.
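For the height-map to tangent-space normal-map step mentioned above, a minimal central-differences sketch (the `height` array layout, edge wrapping, and `strength` factor are assumptions, not the project's actual code):

```javascript
// Sketch: build RGBA normal-map bytes from a grayscale height field.
function heightToNormal(height, w, h, strength = 1.0) {
    const out = new Uint8ClampedArray(w * h * 4);
    const sample = (x, y) => height[((y + h) % h) * w + ((x + w) % w)]; // wrap at edges

    for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
            // Central differences give the slope in x and y.
            const dx = (sample(x + 1, y) - sample(x - 1, y)) * strength;
            const dy = (sample(x, y + 1) - sample(x, y - 1)) * strength;

            // Tangent-space normal, z pointing out of the surface.
            const len = Math.sqrt(dx * dx + dy * dy + 1.0);
            const nx = -dx / len, ny = -dy / len, nz = 1.0 / len;

            const i = (y * w + x) * 4;
            out[i + 0] = (nx * 0.5 + 0.5) * 255; // pack [-1, 1] into [0, 255]
            out[i + 1] = (ny * 0.5 + 0.5) * 255;
            out[i + 2] = (nz * 0.5 + 0.5) * 255;
            out[i + 3] = 255;
        }
    }
    return out; // ready to upload as a RawTexture / DynamicTexture
}
```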
Last night I finished my gradient editor, so now I am trying to get that all into a WYSIWYG planet editor for my project.