I have a question regarding performance and approach. I want to provide a set of Vector3 values to a NodeMaterial, and I got it working, but I guess in a somewhat… specific way. What I've done:
Creating a raw texture with data about the points in space (sketch below)
Providing this data to the NodeMaterial through the texture
Getting the "live" vectors back from the texture using a loop inside the NodeMaterial
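For reference, step 1 looks roughly like this (a simplified sketch of my setup; `points` stands for my array of Vector3, one RGBA texel per point):

```js
// Pack the points into a count x 1 RGBA float texture (one texel per point)
const count = points.length;
const data = new Float32Array(count * 4);
for (let i = 0; i < count; i++) {
    data[i * 4 + 0] = points[i].x;
    data[i * 4 + 1] = points[i].y;
    data[i * 4 + 2] = points[i].z;
    data[i * 4 + 3] = 0; // alpha unused
}
const pointsTexture = BABYLON.RawTexture.CreateRGBATexture(
    data, count, 1, scene,
    false, false,                          // no mipmaps, no invertY
    BABYLON.Texture.NEAREST_SAMPLINGMODE,  // read exact texels, no filtering
    BABYLON.Constants.TEXTURETYPE_FLOAT);  // keep full float precision
// when the points move, I just refresh the same texture:
// pointsTexture.update(data);
```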
I get the visual I want, but I ran into a performance issue. With something like 1-10 spheres it is fine, but when I add around 1024 of them the shader takes about 5.6-6 ms on an RTX 3080. Sooo… I'm seeking help to improve the approach to providing this data to the NodeMaterial. I guess I lose a lot of performance because of the loop inside the NM. If anybody knows how I can handle it, please help :)
Unfortunately no :( It's almost the same. I made a mistake in my post: I get 5.6-6 ms for 2048 points; with 1024 it was more like 2-2.5 ms, which is very close to the performance in your PG. But thank you anyway.
I don't think there's a solution with the current algorithm, as there's a loop of 1024 iterations for each pixel, and no way around that… At 1080p that's roughly 2 million pixels × 1024 iterations ≈ 2 billion texture lookups per frame.
If you are restricted to a plane (as in your PG, where the Y coordinate is fixed), you may try to devise an algorithm that uses the X/Z position of each sphere as a coordinate inside a texture and draws the influence of each sphere into the texture additively. It would probably be easier to do in WebGPU with a compute shader… In the end, you would simply apply the texture to the plane.
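Something along these lines, as a minimal untested sketch (all names, the world/texture mapping and the linear falloff are assumptions; note this variant gathers from all spheres per texel, so the 1024-iteration loop still exists, but it runs once per bake over a fixed-size texture instead of every frame for every screen pixel):

```js
// Bake sphere influence into a texture with a compute shader (WebGPU engine),
// so the plane material only needs a single texture fetch per pixel.
const SIZE = 1024;       // resolution of the influence texture (assumption)
const WORLD_SIZE = 100;  // plane assumed to span [-50, 50] in X/Z

// Storage texture the compute shader writes into
const influenceTex = new BABYLON.RawTexture(
    null, SIZE, SIZE,
    BABYLON.Constants.TEXTUREFORMAT_RGBA, scene, false, false,
    BABYLON.Constants.TEXTURE_BILINEAR_SAMPLINGMODE,
    BABYLON.Constants.TEXTURETYPE_HALF_FLOAT,
    BABYLON.Constants.TEXTURE_CREATIONFLAG_STORAGE);

const source = `
    struct Params {
        numSpheres : i32,
        worldSize : f32,
        radius : f32,
    };
    @group(0) @binding(0) var dest : texture_storage_2d<rgba16float, write>;
    @group(0) @binding(1) var<storage, read> spheres : array<vec4<f32>>;
    @group(0) @binding(2) var<uniform> params : Params;

    @compute @workgroup_size(8, 8, 1)
    fn main(@builtin(global_invocation_id) id : vec3<u32>) {
        let dims = textureDimensions(dest);
        if (id.x >= dims.x || id.y >= dims.y) { return; }
        // Map this texel to an X/Z position on the plane
        let uv = vec2<f32>(id.xy) / vec2<f32>(dims);
        let pos = (uv - vec2<f32>(0.5)) * params.worldSize;
        var influence = 0.0;
        for (var i = 0; i < params.numSpheres; i++) {
            let d = distance(pos, spheres[i].xz);
            influence += max(0.0, 1.0 - d / params.radius); // linear falloff
        }
        textureStore(dest, vec2<i32>(id.xy), vec4<f32>(influence, 0.0, 0.0, 1.0));
    }`;

const bakeInfluence = new BABYLON.ComputeShader("bakeInfluence", engine,
    { computeSource: source },
    { bindingsMapping: {
        "dest":    { group: 0, binding: 0 },
        "spheres": { group: 0, binding: 1 },
        "params":  { group: 0, binding: 2 },
    }});

// One vec4 per sphere: xyz = position, w unused
const sphereBuffer = new BABYLON.StorageBuffer(engine, numSpheres * 4 * 4);
sphereBuffer.update(sphereData); // Float32Array with 4 floats per sphere

const params = new BABYLON.UniformBuffer(engine);
params.addUniform("numSpheres", 1);
params.addUniform("worldSize", 1);
params.addUniform("radius", 1);
params.updateInt("numSpheres", numSpheres);
params.updateFloat("worldSize", WORLD_SIZE);
params.updateFloat("radius", 5);
params.update();

bakeInfluence.setStorageTexture("dest", influenceTex);
bakeInfluence.setStorageBuffer("spheres", sphereBuffer);
bakeInfluence.setUniformBuffer("params", params);
bakeInfluence.dispatchWhenReady(SIZE / 8, SIZE / 8, 1);
// influenceTex can now be sampled by the plane's material
```

You would only re-dispatch the bake when the spheres actually move, not every frame.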
I thought about such an approach, but unfortunately I'm planning to have a very big plane. What I could do is update this texture depending on where I am in space (see the sketch below). It would work, but I wanted to implement it another way, because I may want to move along the Y coordinate in some cases. Anyway, for now XZ will be enough for me. I will close this topic for now and reopen it if I have more questions. Thanks for your replies, guys!
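P.S. in case it helps someone later, "update the texture depending on where I am" would just mean re-centering the bake window on the camera and re-dispatching the compute pass from the sketch above. Hypothetical, reusing those names; it assumes a `center : vec2<f32>` member added to `Params` (declared via `params.addUniform("center", 2)` during setup) that offsets the texel-to-world mapping in the shader:

```js
// Re-bake the influence texture when the camera drifts too far from
// the center of the currently baked window
let bakedCenter = new BABYLON.Vector2(0, 0);
scene.onBeforeRenderObservable.add(() => {
    const cam = scene.activeCamera.position;
    if (Math.abs(cam.x - bakedCenter.x) > WORLD_SIZE / 4 ||
        Math.abs(cam.z - bakedCenter.y) > WORLD_SIZE / 4) {
        bakedCenter.set(cam.x, cam.z);
        params.updateFloat2("center", bakedCenter.x, bakedCenter.y);
        params.update();
        bakeInfluence.dispatch(SIZE / 8, SIZE / 8, 1);
    }
});
```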