Basically I am working on a sphere tracing renderer. However, it obviously slows down when a lot of objects are added. I am considering different methods, selectable by the user, to speed things up. One is to render to a smaller view and then use that as a texture on a larger screen quad. However, I can't seem to find any examples of what should be fairly straightforward. I am obviously missing something.
The engine is associated with a particular context.
So I have been looking at textureCanvas and RenderTargets.
Basically I want to render my scene to a texture, then use said texture to render a different scene which is larger.
I have two scenes, one with all the objects, one with a single screen quad.
I just need an example of where to begin with this, as all the renderTarget examples also render the original scene rather than just the target. Is there any way to turn this off?
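For reference, this is roughly the setup I am trying, assuming camera.outputRenderTarget does what I think it does (redirecting a camera's output into a texture instead of the canvas, in Babylon 4.x+). No luck so far:

```javascript
const engine = new BABYLON.Engine(canvas);

// Scene A: the raymarched scene, drawn only into a quarter-size target.
const rmScene = new BABYLON.Scene(engine);
const rmCamera = new BABYLON.FreeCamera("rmCam", BABYLON.Vector3.Zero(), rmScene);
const rtt = new BABYLON.RenderTargetTexture("low", {
    width: Math.floor(engine.getRenderWidth() / 4),
    height: Math.floor(engine.getRenderHeight() / 4)
}, rmScene);
rmCamera.outputRenderTarget = rtt; // scene A should no longer draw to the canvas

// Scene B: nothing but a fullscreen layer sampling the small texture.
const quadScene = new BABYLON.Scene(engine);
new BABYLON.FreeCamera("quadCam", BABYLON.Vector3.Zero(), quadScene); // a scene needs a camera
const layer = new BABYLON.Layer("fullscreen", null, quadScene, true);
layer.texture = rtt;

engine.runRenderLoop(() => {
    rmScene.render();   // fills the small RTT only
    quadScene.render(); // stretches it over the full canvas
});
```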
Just look up Pryme8 and ray marching on the forum and playground search; I've got quite a few playgrounds and different marching setups.
There are procedural texture examples, render target examples, and post process examples. I've done multiples of each; you will just have to do some code diving. You might have to roll back the version of BJS you are on for some of them to work.
Thanks for this. Ironically, I am fairly happy with the raymarching itself, as it is a program to create raymarched models and is a lot of fun to play with.
I just can't figure out how to NOT render to a full-sized quad. Obviously it will be a lot faster rendering to a texture 0.25 the size, though you lose detail. But ultimately I want to take that texture and display it at full size. I figure a little fuzziness is an OK trade-off for speed if you can turn it on and off. The trouble is I can't seem to turn off the full-sized render even when my render targets are much smaller, which defeats the whole purpose. I can render to a separate canvas, but that needs a separate context (smaller, and therefore a new engine). There must be a better way.
Agreed. I can produce a smaller texture and it is nice. But it still draws the large one at the same time it is producing it, which defeats the purpose of drawing a smaller texture. However, I just discovered engine.setSize, so hopefully setting that to the size of the smaller texture before the first pass, then back to full size for the second, will solve the problem.
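Roughly what I have in mind (untested, and resizing the drawing buffer twice per frame may cost more than it saves; fullWidth and fullHeight are just placeholders):

```javascript
engine.runRenderLoop(() => {
    engine.setSize(Math.floor(fullWidth / 4), Math.floor(fullHeight / 4)); // small buffer for the raymarch pass
    rmScene.render();                      // fills the small render target
    engine.setSize(fullWidth, fullHeight); // back to full size
    quadScene.render();                    // draws the textured quad
});
```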
Basically the problem is I want it to render the raymarched scene not at full size but to a smaller texture. I can render to a smaller texture, and that works fine; what I cannot find is a way to stop it simultaneously drawing the full-size scene as well, which it does to have it ready for the next phase of processing. The point is that if it does this, the raymarched scene is still rendered at full size; it just produces one full-sized output and one smaller one simultaneously. I am trying to prevent the full-size generation, as this is what takes the time.
So I am basically trying to render my scene to a small texture ONLY, and then use that texture to project it onto a larger fullscreen quad. But all the examples I can find draw the initial scene at the full size of its context while producing the smaller/larger textures.
Just discovered the engine can now have multiple canvases assigned. That should solve the problem, as I can draw to a smaller canvas and then use the render target from that as the texture for the larger canvas.
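Something along these lines, going by the multi-canvas (engine views) docs in Babylon 4.1+ (untested, and simplified to a single scene with two cameras; smallCanvas and bigCanvas are canvases already in the page):

```javascript
const workingCanvas = document.createElement("canvas"); // hidden working buffer
const engine = new BABYLON.Engine(workingCanvas, true);
const scene = new BABYLON.Scene(engine);

const rmCamera = new BABYLON.FreeCamera("rm", BABYLON.Vector3.Zero(), scene);
const quadCamera = new BABYLON.FreeCamera("quad", BABYLON.Vector3.Zero(), scene);

engine.registerView(smallCanvas, rmCamera); // low-res raymarch view
engine.registerView(bigCanvas, quadCamera); // full-size quad view
engine.inputElement = bigCanvas;            // pointer events come from the big canvas

engine.runRenderLoop(() => scene.render()); // the loop runs once per registered view
```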
If you just do a procedural texture it's the same thing. I'm confused? If you use a PT you could set it to 1x1 and it would only ever worry about doing 1 pixel in the raymarch pass.
I'm not too sure what you are trying to accomplish now or why you would do it that way, but kudos if you figured it out.
Sounds like you might be doing something different from what I understand; you are composing the entire scene in a shader, right?
If so, then it sounds like you are taking extra steps to accomplish the same thing. All you need is a PT and a post pass and you can render whatever you like to fullscreen from a texture, ezpz, with minimal impact.
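Something like this, off the top of my head (shader names and uniforms are placeholders; the point is that the PT renders at its own fixed size, independent of the canvas):

```javascript
// The raymarch lives in a procedural texture rendered at 256x256
// no matter how big the canvas is.
BABYLON.Effect.ShadersStore["raymarchPixelShader"] = `
    precision highp float;
    varying vec2 vUV;
    void main(void) {
        // ...raymarch here, driving the ray from vUV instead of gl_FragCoord...
        gl_FragColor = vec4(vUV, 0.0, 1.0);
    }`;
const pt = new BABYLON.CustomProceduralTexture("rm", "raymarch", 256, scene);

// A one-line post process stretches the small texture over the screen.
BABYLON.Effect.ShadersStore["blitFragmentShader"] = `
    precision highp float;
    varying vec2 vUV;
    uniform sampler2D ptSampler;
    void main(void) { gl_FragColor = texture2D(ptSampler, vUV); }`;
const blit = new BABYLON.PostProcess("blit", "blit", [], ["ptSampler"], 1.0, camera);
blit.onApply = (effect) => effect.setTexture("ptSampler", pt);
```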
Maybe pm me your playground and I could give you a better rundown. <3
I am having a few probs with it.
a) The texture on the box does not show up until you get close. So obviously the preprocess is using some different camera settings. Not sure why.
b) The bigger you make the texture the more of the scene gets put onto it. Whereas I am trying to project the entire scene onto the texture regardless of the texture size.
c) I do not want the objects in the scene displayed anywhere other than the textured box. Unfortunately my understanding of Babylon is still insufficient.
The way I am currently getting around this is by rendering to a texture the same size as the main canvas, but using a viewport so the scene is only generated in a section of the texture the size I want, with the inverse of the scaling applied to the fragCoords to shrink the sphere-traced scene down to match. Then I use that section of the render texture to render full size onto the canvas. It is a bit of a trashy way of doing it, as most of the render texture is unused when doing a large reduction, but it does work.
Obviously, having the render texture a different size (and, probably more importantly, a different aspect ratio) throws the raymarcher out, which makes sense as it uses gl_FragCoord to work out the ray origin.
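Roughly what the workaround looks like (rmCamera and the uniform names are placeholders):

```javascript
// Render into a full-canvas-sized RTT, but confine the raymarch pass to
// the bottom-left quarter via the camera viewport.
const scale = 0.25;
rmCamera.viewport = new BABYLON.Viewport(0, 0, scale, scale);

// In the raymarch fragment shader, undo the shrink so the rays still
// span the whole view:
//   vec2 uv = gl_FragCoord.xy / (resolution.xy * scale);
//
// And when displaying, sample only that sub-rectangle of the RTT:
//   vec3 col = texture2D(textureSampler, vUV * scale).rgb;
```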
Another oddity I don't understand: as soon as I uncomment the viewport, which is currently the same size as the canvas, it stops displaying properly. The only reason I can think of for this would be it upsetting gl_FragCoord?
Best working version so far: https://playground.babylonjs.com/#4V2MG8#13