I am trying to replicate the effect mentioned in this Twitter post, where parts of the mesh farther from the camera are treated as if they're in shadow:
My current method is to darken the color based on distance from the camera, clamped to a range roughly matching the depth of the model. The problem I'm running into is that the shading on the mesh changes when I zoom in or out, whereas I'd like it to stay consistent.
I think I could solve this by passing in the position of a rough mid-point of the model as a uniform and using it to determine the depth at which the shadow should start and end. But I'm really new to shaders, so I'm wondering: is there a better or more proper way?
What you want is to make the calculation relative to the origin of the mesh's coordinate system, so that it is independent of the camera position. You can get this origin from the translation part of the world matrix (the last row of the matrix):
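A minimal vertex-shader sketch of reading that origin, assuming the standard Babylon.js `world`/`view`/`projection` uniforms (note that with GLSL's column indexing, the translation row ends up in `world[3]`):

```glsl
precision highp float;

attribute vec3 position;

uniform mat4 world;
uniform mat4 view;
uniform mat4 projection;

varying vec3 vWorldPos;
varying vec3 vMeshOrigin;

void main(void) {
    vec4 worldPos = world * vec4(position, 1.0);
    vWorldPos = worldPos.xyz;

    // Translation part of the world matrix = origin of the mesh
    // in world space.
    vMeshOrigin = world[3].xyz;

    gl_Position = projection * view * worldPos;
}
```

Both varyings can then be used in the fragment shader to compute the distance-based darkening relative to the mesh, not the camera.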
`distInsideSphere` computes the distance between the intersection of a ray AB with a sphere and point B. Here, A is the position of the camera and B is the position of the vertex (in world space). The position of the sphere is the origin of the mesh coordinate system, and the radius of this sphere must be chosen so that it surrounds the mesh.
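A sketch of those two functions, assuming the usual Shadertoy-style `sphIntersect` signature (ray origin `ro`, normalized direction `rd`, sphere packed as `vec4(center, radius)`); `distInsideSphere` measures from the ray's entry point into the sphere up to B:

```glsl
// Ray-sphere intersection: returns the distance along the ray to the
// nearest intersection, or -1.0 if the ray misses the sphere.
float sphIntersect(vec3 ro, vec3 rd, vec4 sph) {
    vec3 oc = ro - sph.xyz;
    float b = dot(oc, rd);
    float c = dot(oc, oc) - sph.w * sph.w;
    float h = b * b - c;
    if (h < 0.0) return -1.0;
    return -b - sqrt(h);
}

// Distance travelled inside the sphere from the entry point to B.
// a = camera position, b = vertex position (both in world space),
// sph = vec4(mesh origin, radius chosen to surround the mesh).
float distInsideSphere(vec3 a, vec3 b, vec4 sph) {
    vec3 ab = b - a;
    float lenAB = length(ab);
    float t = sphIntersect(a, ab / lenAB, sph);
    if (t < 0.0) return 0.0; // ray misses the sphere entirely
    return lenAB - t;
}
```

Dividing the result by the sphere's diameter gives a 0..1 factor you can feed into a `mix` with your shadow color, and since everything is measured against the mesh's own bounding sphere, it no longer changes as the camera zooms.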
Note: I got the `sphIntersect` function from Shader - Shadertoy BETA
Thanks! Your response is really helpful, especially the sphere example.
It’s been tough sorting through the docs of the built-in variables and how to actually understand them. I didn’t even realize the world matrix already contained an origin.
For a shader/material applied to a mesh, is the origin in the world matrix the same as the scene's origin/center, or is it actually the origin of the mesh the material is applied to?
EDIT: on the above, it sounds like that’s what ModelView matrix is for.
The world matrix holds the translation of the mesh in world space. We don't have a modelView matrix in Babylon.js, only a view matrix, which transforms from world space to camera space.
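So in a Babylon.js shader you compose the two yourself; a minimal sketch, again assuming the standard `world`/`view`/`projection` uniforms:

```glsl
attribute vec3 position;

uniform mat4 world;      // model space -> world space
uniform mat4 view;       // world space -> camera space
uniform mat4 projection; // camera space -> clip space

void main(void) {
    // The equivalent of a modelView transform, built from the two matrices:
    vec4 cameraSpacePos = view * world * vec4(position, 1.0);
    gl_Position = projection * cameraSpacePos;
}
```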