Decomposing Geometry output in NGE / Node geometry editor

Hi all,

Using the Node Geometry Editor (very exciting!), I’m trying to progressively refine geometry by applying multiple transformations in sequence.

Here’s a playground to illustrate, where I’m trying to perturb the vertex positions twice with two separate noise layers (ultimately I’d like to do some kind of layered domain warping):

https://nge.babylonjs.com/#H17N49#1
The frame on the right contains the ‘subsequent’ transformation I’d like to achieve – re-using computed normals from the first phase to directionally ‘warp’ / offset the second phase.

I’m seeing a few ways to construct and update ‘Geometry’ in the graph, but not a way to decompose it into its parts (positions, normals, etc.) for further refinement.

Is that by design? Or am I missing something perhaps?

Thanks!!
Chris

So here we are:

If you pipe the geometry after the Compute Normals block, the Normals contextual will pick up the new normals from that geometry and you can continue your build-up.

Does it make sense?

Hey @Deltakosh,

Thanks for the quick response! :slight_smile:

So compute normals mutates the input geometry buffer rather than passing forward a new recomputed buffer?

If you pipe the geometry after the Compute Normals block,

I don’t really follow what you mean by ‘after’ here. My intuition was that both Normals nodes (and the math (multiply) nodes that follow them) would execute independently of each other in directed-graph traversal order, because their descendant nodes don’t converge until later, at the Set Positions node.

Could you explain a bit further please?
Thanks!

Let me try differently :slight_smile:

Step 1: Create the root geometry.
Step 2: Consume the root geometry and create a new one (geometry2). In that case, the Positions and Normals contextuals refer to the root geometry.
Step 2.5: geometry2 gets its normals recomputed and becomes geometry2.5.
Step 3: Consume geometry2.5 and produce geometry3. In that context, the Normals contextual refers to the geometry2.5 normals (see the code sketch below for the same flow written out).
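Here is the same flow written out in code, in case that helps (a quick sketch against the NodeGeometry API rather than your exact playground; the 0.1 displacement just stands in for your noise layers):

```ts
const ng = new BABYLON.NodeGeometry("stages");

// Step 1: the root geometry
const root = new BABYLON.BoxBlock("root");

// Contextual inputs for step 2. They have no incoming wire: each one is
// evaluated per vertex while the geometry block consuming it walks its
// input geometry (here, the root geometry).
const positions2 = new BABYLON.GeometryInputBlock("positions2");
positions2.contextualValue = BABYLON.NodeGeometryContextualSources.Positions;
const normals2 = new BABYLON.GeometryInputBlock("normals2");
normals2.contextualValue = BABYLON.NodeGeometryContextualSources.Normals;

// Placeholder displacement amount (your noise layers would go here)
const amount = new BABYLON.GeometryInputBlock("amount", BABYLON.NodeGeometryBlockConnectionPointTypes.Vector3);
amount.value = new BABYLON.Vector3(0.1, 0.1, 0.1);

// offset = normal * amount; displaced = position + offset
const offset2 = new BABYLON.MathBlock("offset2");
offset2.operation = BABYLON.MathBlockOperations.Multiply;
normals2.output.connectTo(offset2.left);
amount.output.connectTo(offset2.right);
const displaced2 = new BABYLON.MathBlock("displaced2");
displaced2.operation = BABYLON.MathBlockOperations.Add;
positions2.output.connectTo(displaced2.left);
offset2.output.connectTo(displaced2.right);

// Step 2: consume the root geometry, produce geometry2
const setPositions2 = new BABYLON.SetPositionsBlock("setPositions2");
root.geometry.connectTo(setPositions2.geometry);
displaced2.output.connectTo(setPositions2.positions);

// Step 2.5: recompute normals, giving geometry2.5
const computeNormals = new BABYLON.ComputeNormalsBlock("computeNormals");
setPositions2.output.connectTo(computeNormals.geometry);

// Step 3: fresh contextuals for the second stage. These resolve against
// geometry2.5, because that is the geometry flowing into the Set Positions
// block that consumes them, so the normals read here are the recomputed ones.
const positions3 = new BABYLON.GeometryInputBlock("positions3");
positions3.contextualValue = BABYLON.NodeGeometryContextualSources.Positions;
const normals3 = new BABYLON.GeometryInputBlock("normals3");
normals3.contextualValue = BABYLON.NodeGeometryContextualSources.Normals;
const offset3 = new BABYLON.MathBlock("offset3");
offset3.operation = BABYLON.MathBlockOperations.Multiply;
normals3.output.connectTo(offset3.left);
amount.output.connectTo(offset3.right);
const displaced3 = new BABYLON.MathBlock("displaced3");
displaced3.operation = BABYLON.MathBlockOperations.Add;
positions3.output.connectTo(displaced3.left);
offset3.output.connectTo(displaced3.right);

const setPositions3 = new BABYLON.SetPositionsBlock("setPositions3");
computeNormals.output.connectTo(setPositions3.geometry);
displaced3.output.connectTo(setPositions3.positions);

// geometry3 out
const out = new BABYLON.GeometryOutputBlock("out");
setPositions3.output.connectTo(out.geometry);
ng.outputBlock = out;
ng.build();
const mesh = ng.createMesh("warped");
```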

Does it make more sense?

(Unless I’m not getting your reply, which is totally possible lol)

Afterthought: maybe you wanted to reuse the normals from step 2 in step 3? So not the ones attached to the current geometry?

cc @Propolisa

Hey @Deltakosh, apologies, I didn’t have enough time to fully study your response at first and then forgot about it. Thanks for the added detail!

Step 3: Consume geometry2.5 and produce geometry3. In that context, the Normals contextual refers to the geometry2.5 normals

I guess what’s throwing me off is that the Normals block in group 3 doesn’t have any incoming connections from geometry2.5 (which has different normals from the ‘root’ geometry). Is data flowing backwards from ‘Set positions’ (which is fed the geometry from 2.5)?

I might have an incorrect understanding of how the graph represents data at a high level; I should definitely read more docs when I have time. (The most similar thing I’ve used was Blender’s material node editor.)

My original question was more general, though: whether it’s possible to break a ‘geometry’-type output down into its fundamental vector types (normals, positions, colors, etc.) and apply custom vector-type transformations to them. I think that might be useful for workflows that aren’t covered by my noise-warp example.

What do you mean? The contextuals (like positions, normals, etc…) are all related to the root geometry being used.

Ugh, my brain melted while trying to write something coherent here, so please excuse it if this isn’t. Thanks for your patience. Too many new concepts recently in hobby game dev time :slight_smile:

I wasn’t understanding where the ‘contextual’ value comes from; it couldn’t be the ‘first’ geometry in the scene, because in the geometry editor we can have many top-level geometry-emitting nodes.
And because of my other experience with graphs (e.g. DAGs) I wasn’t expecting backpropagation / lookaheads to derive values. That’s what I meant by:

because their descendant nodes don’t converge until later, at the Set Positions node.

You answered this for my example playground:

the Normals contextual will pick up the new normals from that geometry and you can continue your build-up
and

[…] In that context, the Normals contextual refers to the geometry2.5 normals

But I just didn’t understand the logic for how the normals (and other) contextuals were being resolved.

Seeing this doc section helped:

A contextual value is derived from the closest geometry node associated with this node.
[…]

So I made this to understand it better:

It’s a playground using colors instead of normals: colors are initially set for two separate geometries and then updated on an aggregate geometry. The color in the final Set Colors does not reference the original geometries (each of which uses only one color) but the aggregate geometry (where colors are defined for all vertices). (This playground might not actually prove anything if the underlying data structure isn’t what I think it is.)
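Written out in code, my understanding of that playground is roughly this (a sketch only: I’m assuming vertex colors travel as Vector4 RGBA and that the merge block’s sockets are named geometry0 / geometry1):

```ts
const ng = new BABYLON.NodeGeometry("colors");

// Two source geometries, each painted with a single flat color
const blue = new BABYLON.GeometryInputBlock("blue", BABYLON.NodeGeometryBlockConnectionPointTypes.Vector4);
blue.value = new BABYLON.Vector4(0, 0, 1, 1);
const boxA = new BABYLON.BoxBlock("boxA");
const setBlue = new BABYLON.SetColorsBlock("setBlue");
boxA.geometry.connectTo(setBlue.geometry);
blue.output.connectTo(setBlue.colors);

const green = new BABYLON.GeometryInputBlock("green", BABYLON.NodeGeometryBlockConnectionPointTypes.Vector4);
green.value = new BABYLON.Vector4(0, 1, 0, 1);
const boxB = new BABYLON.BoxBlock("boxB");
const setGreen = new BABYLON.SetColorsBlock("setGreen");
boxB.geometry.connectTo(setGreen.geometry);
green.output.connectTo(setGreen.colors);

// Aggregate the two geometries into one
const merge = new BABYLON.MergeGeometryBlock("merge");
setBlue.output.connectTo(merge.geometry0);
setGreen.output.connectTo(merge.geometry1);

// Final Set Colors: the Colors contextual resolves against the merged
// geometry (blue on one half, green on the other), not against either
// source geometry on its own.
const colors = new BABYLON.GeometryInputBlock("colors");
colors.contextualValue = BABYLON.NodeGeometryContextualSources.Colors;
const red = new BABYLON.GeometryInputBlock("red", BABYLON.NodeGeometryBlockConnectionPointTypes.Vector4);
red.value = new BABYLON.Vector4(1, 0, 0, 0);
const add = new BABYLON.MathBlock("add");
add.operation = BABYLON.MathBlockOperations.Add;
colors.output.connectTo(add.left);
red.output.connectTo(add.right);

const setFinal = new BABYLON.SetColorsBlock("setFinal");
merge.output.connectTo(setFinal.geometry);
add.output.connectTo(setFinal.colors);

const out = new BABYLON.GeometryOutputBlock("out");
setFinal.output.connectTo(out.geometry);
ng.outputBlock = out;
ng.build();
```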


My initial assumption using the editor was that we could explicitly decompose and compose geometry on the fly, like this mockup (recompose not shown but would perform the inverse):

(subsequent transform nodes would apply per vertex)

I don’t know if this would be more capable than the current pattern; I just wanted to explain my mental model. It’s not a feature request yet, because I don’t know if it would even work with the underlying design and I’m still a noob at using NGE anyway.
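Spelled out, the interface I was picturing is something like this (purely hypothetical: neither block exists in NGE as far as I know, it just mirrors the combine / separate nodes the editor already has for vectors and matrices):

```ts
// Purely hypothetical blocks, just to illustrate the mental model:
// explicit per-vertex streams instead of implicit contextual lookups.
interface DecomposeGeometryBlock {
    geometry: BABYLON.NodeGeometryConnectionPoint;  // in:  Geometry
    positions: BABYLON.NodeGeometryConnectionPoint; // out: per-vertex Vector3
    normals: BABYLON.NodeGeometryConnectionPoint;   // out: per-vertex Vector3
    colors: BABYLON.NodeGeometryConnectionPoint;    // out: per-vertex Vector4
    uvs: BABYLON.NodeGeometryConnectionPoint;       // out: per-vertex Vector2
}

// ...with a matching ComposeGeometryBlock performing the inverse, so any
// custom vector math could be wired between the two.
```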

We can do it already.

Do you mind telling me what the expected outcome of your example here would be: Babylon.js Node Geometry Editor

This example shows that you are combining two geometries (blue and green), so you end up with a new geometry where the first half is blue and the second half is green.

Then you apply a color change to the new geometry (on all vertices), so you end up with a new geometry where the first half is violet (blue + red) and the second half is yellow (green + red).
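Per component that’s just additive RGB: blue (0, 0, 1) + red (1, 0, 0) = (1, 0, 1) violet, and green (0, 1, 0) + red (1, 0, 0) = (1, 1, 0) yellow.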

What would be your expected outcome?

We can do it already.

Ah, I didn’t mean to say otherwise, apologies if it came off that way. Your examples and the docs have made it clear to me how to achieve the result I wanted. I only wanted to show the interface I was imagining for thoroughness (a single-node combiner / splitter approach, like the editor has for vectors / matrices).

Thanks for your work on the explanations, the editor, and Babylon.js! I’m enjoying it a lot in my current project.

This thread can be closed :slight_smile:

No worries! I know the concept is not that easy to grasp :slight_smile: