Converting the Oblivion radar display to NME

I’m doing research for a project I’m working on, and among many other useful resources about shaders out there, I found this one on Shadertoy. I need to be more familiar with the NME so I took this as a perfect opportunity to port the shaders to BJS using the NME.

Having some familiarity with these types of tools in the past, I was able to quickly - but not painlessly - adapt my thinking to how the NME works, and I’ve still been pleasantly surprised at how quickly I was able to put together this procedural texture (note - WIP).

Something I can’t figure out (yet) is why the texture looks great in the NME, but terrible when I apply it to a plane mesh (see here). I’ve tried fiddling with the resolution and the mesh size but there’s probably something obvious I’m missing.


It seems your shader depends on the screen size: if you detach the preview window from the NME and increase the size of the window, you will see you get the same rendering as in the Playground:

It seems to be a centering problem:

  • when changing the height of the window, the center of the cross does not move (whereas it should)
  • when changing the width of the window, the center drifts further off to the right as the width increases

I am using the screen size to find the middle of the texture - basically dividing the width/height by 2.

What is a better way to find the center of the texture?


That seems fine to me, the problem must be somewhere else but the material is a bit complicated so it is not easy to tell where…

Using engine.getRenderWidth / getRenderHeight for the texture size helps:

https://playground.babylonjs.com/#8S19ZC#5
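The idea is to size the procedural texture to the render dimensions so the "divide width/height by 2" math lands on the actual pixel center. A minimal sketch, using a tiny stand-in for the engine object (the real one would come from Babylon.js, whose `getRenderWidth`/`getRenderHeight` names are mirrored here):

```javascript
// Stand-in for a Babylon.js engine; only the two query methods are mimicked.
const engine = { getRenderWidth: () => 1280, getRenderHeight: () => 720 };

// Size the texture from the render target and derive its center,
// matching the width/2, height/2 math used in the shader.
function textureCenter(engine) {
  const w = engine.getRenderWidth();
  const h = engine.getRenderHeight();
  return { width: w, height: h, center: [w / 2, h / 2] };
}

console.log(textureCenter(engine).center); // [640, 360]
```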


Since it’s a proc texture, I would expect it to be stretched or shrunk to fit the mesh that its parent material is applied to.

If the ScreenSize node corresponds to the size of the texture as I’ve been assuming then the problem may be outside the shader; otherwise I think I would need to map the texture into UV space somehow? (I thought I was already operating in that space…).

Something I noticed is that although the Sqrt node is available, when you drag it onto the canvas, the Trigonometry node that is created doesn’t have Sqrt in its drop-down list of math functions. Could that be the culprit?

The only uniform that is in world space is the radius scalar, which IIRC is currently at 256 - I’ve tried setting different values but it doesn’t really change much. The original example has this value at 240, and I just realized the render is 1280x720, not square… :man_shrugging:

No, ScreenSize is the size of the screen, not the size of the texture. There’s no block for the size of a texture, you should use two inputs for that if you need them.

No, even if “Sqrt” is missing from the list (we are going to add it), it is still a “sqrt” you get in the shader code when you use the block.

[EDIT] PR:


My bad – I was looking for the NME equivalent of what I had understood to be the standard iResolution constant. The original shader (see the OP link to Shadertoy) uses it to get the resolution as a vector2, which I couldn’t find directly.

Does anyone know whether an equivalent of the iResolution constant is available? It seems a bit silly that a proc texture doesn’t know the size it’s being rendered at without an extra input added manually; that seems like something useful.

iResolution in ShaderToy is the resolution of the screen, so it is the same thing as the ScreenSize block. But in your case your texture is not the size of the screen, which is why you need the texture size instead.

We don’t have a block for that. In fact, it probably wouldn’t be a separate block but two additional outputs instead; we can’t add them, though, because textureSize (which retrieves the width/height of a texture) is not available in WebGL1.


I get the desire/need to protect backward compatibility, but let’s not toss the baby out with the bath water - this is a solvable problem that, incidentally, is unlikely to be a one-off if support for two generations-removed shader APIs is to be maintained!

Perhaps some feature-detection nodes to accompany those sexy new LogicalType nodes would allow a balance? Or maybe a feature-flag drop-down to select a targeted API version (e.g., WebGL, WebGL2, WebGPU), which would then light up or darken incompatible node types (I like this option the best)?

Being able to use textureSize would open up more potential for creators to create IMO

Well, it’s not as if there were no simple workaround: just use a vec2 input to pass the width/height of the texture.
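The workaround might look something like the following. `getBlockByName` mirrors the real NodeMaterial method name, but the material and block objects here are minimal stand-ins (and the `"textureSize"` input name is just an assumed name for the vec2 input you would add in the NME):

```javascript
// Stand-ins for a NodeMaterial and its vec2 InputBlock, so the lookup
// pattern can be shown self-contained without a Babylon.js engine.
const textureSizeBlock = { name: "textureSize", value: null };
const nodeMaterial = {
  getBlockByName: (n) => (n === "textureSize" ? textureSizeBlock : null),
};

// Push the procedural texture's dimensions into the vec2 input block.
function updateTextureSize(material, width, height) {
  const block = material.getBlockByName("textureSize");
  if (block) block.value = { x: width, y: height };
  return block;
}

updateTextureSize(nodeMaterial, 512, 512);
console.log(textureSizeBlock.value); // { x: 512, y: 512 }
```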

I’m not sure @Deltakosh would be in favor of a target API, but we are definitely open to a PR that would add these outputs and would work both in WebGL1 and WebGL2/WebGPU.


I’m trying to think ahead past this particular example when I bring up a target-API feature. Yes, there are other ways to achieve the intended effect without it, but if that kind of functionality is inevitable, I think it makes the most sense to tackle it with a relatively simple case like this one (simple in that the texture-resolution part is unlikely to get in the way of discovering how the target-API feature should work).

Otherwise, every solution to this category of problem will end up being a unique one-off, which isn’t great for maintainability, I believe.

HTH

I’ve added a vec2 uniform to pass in the texture size, and I found myself with two problems:

  1. calling proceduralTexture.setVector2("iResolution", ...) doesn’t update the uniform value. I tried u_iResolution as well, to no avail. I ended up just retrieving the input block and setting the value directly (PG), and now it looks great… as long as I keep it at 512 resolution… which is the second issue.

  2. No matter what resolution I specify in the input or in the call that creates the proc texture, it never seems to get the right location for the center of the texture unless it’s at 512.

I’ve attempted to clean up and resolve what turned out to be minor bugs in the NME, so hopefully it will be somewhat readable to others - I know it’s barely so for me!

Yes, that’s how you are supposed to update the inputs of a node material.

You can use (screen.position + (1,1)) / (2,2) * textureSize (screen.position coordinates are between -1 and 1) instead of FragCoord so that the display in the preview is ok even when you change the texture size: I think the preview is using a 512x512 texture. If your texture is not 512x512, the display will be off-centered in the preview window when using FragCoord.
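The remapping described above, written out as plain math: screen.position is in normalized device coordinates (-1..1 on each axis), and the goal is FragCoord-style pixel coordinates in 0..textureSize. A small sketch of that formula:

```javascript
// Map an NDC position (-1..1) to pixel coordinates (0..texSize),
// i.e. (pos + 1) / 2 * textureSize on each axis.
function ndcToPixel(pos, texSize) {
  return [
    ((pos[0] + 1) / 2) * texSize[0],
    ((pos[1] + 1) / 2) * texSize[1],
  ];
}

console.log(ndcToPixel([0, 0], [2048, 2048])); // NDC origin -> [1024, 1024]
```

With this mapping the center of the texture stays at textureSize / 2 regardless of the preview window's 512x512 default.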

It seems the float input with a value of 480 is used to scale the black circle. E.g., if I use a 2048x2048 texture I get (I also used Radius=1 to get a perfect circle):

If I now use 1024 instead of 480:

So, you simply need to use a bigger value so that the circle fills the whole texture. Using 1500 (with a 2048x2048 texture):

=> the smallest value to use to fill the entire texture is (textureDim / 2) * sqrt(2). Here: 1024*Math.sqrt(2) ~ 1450.
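That minimum comes from the distance between the texture center and a corner, which for a square texture is (dim / 2) * sqrt(2). As a one-liner:

```javascript
// Smallest scale value that makes the circle cover the whole square
// texture: the center-to-corner distance, (dim / 2) * sqrt(2).
function minFillRadius(textureDim) {
  return (textureDim / 2) * Math.SQRT2;
}

console.log(Math.round(minFillRadius(2048))); // 1448 (~1450, as above)
```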

PG: https://playground.babylonjs.com/#8S19ZC#14


Thank you so much for helping me understand where I was misapplying my maths! Your explanation is incredibly helpful.

I’m going to play with this and post an update later today

Thanks again!

I’m curious – if that’s how uniforms are updated in a node material, what are the various setVector2-style functions doing, and what might they be used for?

I don’t see any setXXX function on the node material(?)

I’m sorry, the functions are on the procedural texture you get by calling nodeMaterial.createProceduralTexture.

Ok, those setXXX functions only work when creating a procedural texture the usual way (by providing your own shader), not through a node material.


Thinking out loud, I wonder if it might be a useful improvement to make those functions work:

Start by subclassing the procedural texture type, leaving the signatures the same, but return the more-derived type from the Node Material.

Then, override the various setXXX functions to delegate to the input-block lookup functions, either with or without parameter-name prefixing. Consumers of the procedural texture then don’t need to know anything about the node material, and therefore aren’t also responsible for loading it and creating the proc texture. Useful?
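A rough sketch of that idea. Everything here is a stand-in (there is no such subclass in Babylon.js today); only the method names `setVector2` and `getBlockByName` mirror the real API, and the base-class stub stands in for the real ProceduralTexture:

```javascript
// Stub for the real ProceduralTexture base class.
class StubProceduralTexture {
  setVector2(name, value) { /* no-op in the stub */ return this; }
}

// Hypothetical subclass: setVector2 delegates to the node material's
// input block instead of writing a raw shader uniform.
class NodeMaterialProceduralTexture extends StubProceduralTexture {
  constructor(nodeMaterial) {
    super();
    this._nodeMaterial = nodeMaterial;
  }
  setVector2(name, value) {
    const block = this._nodeMaterial.getBlockByName(name);
    if (block) block.value = value;
    return this;
  }
}

// Usage with a fake material holding one input block:
const block = { name: "iResolution", value: null };
const material = { getBlockByName: (n) => (n === block.name ? block : null) };
const tex = new NodeMaterialProceduralTexture(material);
tex.setVector2("iResolution", { x: 512, y: 512 });
console.log(block.value); // { x: 512, y: 512 }
```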