React-babylonjs: endless webRequest() calls on multiple onscreen canvases

Hi guys

I have the following setup:

  1. Fetching some items from my BE: an array of objects, each containing a model’s name, e.g. “MODEL_1”
  2. I have a .map() that outputs multiple canvases (9 per page), using <Scene /> from react-babylonjs by @brianzinn
  3. Inside those I specify my S3 bucket rootUrl
  4. I also specify the sceneFilename, passing the model’s name

Example of all the above:

      <Model
        receiveShadows
        alwaysSelectAsActiveMesh
        name={mockModel}
        reportProgress
        position={new Vector3(0, 0, 0)}
        rootUrl={rootUrl}
        sceneFilename={`${mockModel}.glb`}
        scaleToDimension={25}
        // onModelLoaded={handleOnModelLoaded}
        // onModelError={handleOnModelLoadedError}
      />

Everything works, but inspecting the network tab shows the specified .glb model being requested endlessly! The initiator of the many requests seems to be webRequest() from babylonjs itself.

Everything slows down because of the number of requests…

It sounds like the <Model> component might be requesting the glb on every render. Could this be an issue with the useSceneLoader hook not correctly caching files? react-babylonjs/useSceneLoader.tsx at master · brianzinn/react-babylonjs (github.com)

I’m sure Brian can tell us more 🙂

I was going to say exactly that - it seems it is requesting it on every frame/render.

After a little bit of detective work, it seems it somehow has to do with the retryLoop not recognising that the file has been properly served and downloaded (status 200, and I actually get the file and it displays).

Basically this highlighted bit of the retryLoop code (screenshot omitted).

Curiously enough, when I feed JUST one item into the array this behaviour does not happen… 🤔

Thanks @DarraghBurke, reading that file made me think… could the actual issue be that sceneFilename is used as a key?

What I mean is: I have multiple models that are the same (with different materials), i.e. several instances of MODEL_1, so they share the same sceneFilename. Could the fetching somehow invalidate the key and re-trigger in an endless loop, causing this?

PS: invalidating the key with something like the following does not solve the issue, though:

        <Model
          receiveShadows
          alwaysSelectAsActiveMesh
          name={`Box?invalidateCache=${Math.random()}`}
          reportProgress
          position={new Vector3(0, 0, 0)}
          rootUrl={mockRootUrl}
          sceneFilename={`Box.gltf?invalidateCache=${Math.random()}`}
          scaleToDimension={25}
          // onModelLoaded={handleOnModelLoaded}
          // onModelError={handleOnModelLoadedError}
        />

That is a horrible bug for sure. I was not thinking of that scenario when I built that out. I would say the cache is being cleared on subsequent scene loads, and then when the thrown promise resolves (inside the <Suspense />) it triggers a reload and the cache is gone. LOL. kaboom. 💣

edit: probably need to add a scene dispose() listener and then clear the cache for each scene. Looks like React.Cache won’t make it into React 18 (Built-in Suspense Cache · Discussion #25 · reactwg/react-18 (github.com)) - do you think that would work?
PS: it would be so cool if we could load models not attached to a scene, as this would all be solved. I started a thread on it, and I know I can write that preloader for the OBJ loader with clean caching, but I haven’t taken the time to look into glTF. Unfortunately there is no really clean solution, given how babylonjs doesn’t load in stages or separately allow caching without being attached to a scene.

If you just put in a SceneLoader on your own with a useEffect, that would work really well to get around your current problem. If you are using the fallback on Suspense, then you will need the cache invalidation that I mention above.
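A minimal sketch of that suggestion, assuming react-babylonjs’s useScene hook and a hypothetical <ManualModel> component (the component and its props are placeholders, not library API):

import { useEffect } from 'react'
import { SceneLoader } from '@babylonjs/core'
import '@babylonjs/loaders/glTF' // registers the .gltf/.glb loader plugin
import { useScene } from 'react-babylonjs'

// Hypothetical component: loads via SceneLoader in an effect, so the request
// fires once per rootUrl/sceneFilename change instead of on every render.
const ManualModel = ({ rootUrl, sceneFilename }: { rootUrl: string; sceneFilename: string }) => {
  const scene = useScene()

  useEffect(() => {
    if (scene === null) {
      return
    }
    let disposed = false
    SceneLoader.ImportMeshAsync('', rootUrl, sceneFilename, scene).then((result) => {
      // if we unmounted while the request was in flight, drop the meshes
      if (disposed) {
        result.meshes.forEach((mesh) => mesh.dispose())
      }
    })
    return () => {
      disposed = true
    }
  }, [scene, rootUrl, sceneFilename])

  return null
}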

Hi!

Ah! OK, so at least I am at peace now and can start the creative process of finding a solution ahahah.

So, what I was trying to do before (but then got stuck going in circles) was building a sort of internal cache, in the sense of: since I will have repeating 3D glb models on different pages of my paginated user view, there’s no sense in making a network request at all if we already have the model in our cache.

I was thinking to build something like:


const models = {
  "MODEL_1": <base64EncodedModelData>,
  "MODEL_2": <base64EncodedModelData>,
  ...
}

Then checking for the presence of those and conditionally re-downloading them.

Do you think this in combo with a scene loader of my own might work?

The part where I got stuck was:

  1. How do I do a custom scene loader with the above? (high level, not in detail)
  2. Is it “fine” to just get the model, encode it in base64 and keep it like that? Won’t the RAM just die?
  3. How do I load the base64 string with react-babylonjs? (in practice, which component do I use?)

The way I was saving it, which didn’t seem to work lol, was (pseudo-codish):

// nb: without responseType: 'arraybuffer', axios parses the body as text,
// so the .glb's binary bytes get mangled before the base64 encode - likely
// why this didn't work
const response = await axios.get('https://whatever/model.glb')

if (!models[modelName])
  setModels({
    ...models,
    [modelName]: `data:;base64,${window.btoa(unescape(encodeURIComponent(response.data)))}`,
  });

Then I was left on how to actually load the model with its base64 data in react-babylonjs, trying combinations of things that didn’t work; I was mostly relying on the last part of this: Loading Any File Type | Babylon.js Documentation
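For reference, if I’m reading that docs section right, the direct-load pattern it describes is roughly this (base64Data is a placeholder for the encoded model):

// append from a data: string instead of a URL; the '.glb' pluginExtension
// tells SceneLoader which loader plugin to use
SceneLoader.Append('', `data:;base64,${base64Data}`, scene, undefined, undefined, undefined, '.glb')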

I like the direction you are going. I think the built-in IndexedDB support may be a worthwhile read - it’s what babylonjs uses if you configure it, and some polyfills have nice fallbacks depending on what you need to support:
IndexedDB API - Web APIs | MDN (mozilla.org)

I wish it was fine to just get the “model” file and keep it like that. Unfortunately, many loaders will separately load their textures or additional assets for materials, etc. It may work well in your scenario if your entire model is in one file. glb is the binary format of glTF - I believe you can get away with your proposed solution.

You are right about the memory. It may be worthwhile to use something like an LRU cache, but then it can start to get quite a bit more involved to code. I am happy to update my cache to be “per scene”, as that would likely work well - assuming that I do not have a memory leak and my scene is torn down cleanly.

The browser will also cache HTTP requests, but that is entirely up to the browser - it has the opportunity to speed up subsequent loads.

What the react-babylonjs library is doing is throwing a promise (like the axios get), and then when it resolves it loads the asset. You might get some good mileage from useAsset (pmndrs/use-asset: 📦 A promise caching strategy for React Suspense (github.com)). It would be a cool recipe to get that running without SceneLoader automatically attaching everything to the Scene - then you could do really cool things like preload and load to different Scenes.
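For illustration, the throw-a-promise pattern with use-asset looks roughly like this (fileAsset and Viewer are hypothetical names, not react-babylonjs code):

import { createAsset } from 'use-asset'

// the factory's promise is cached per argument list, so a given url is
// only fetched once no matter how many components read it
const fileAsset = createAsset(async (url: string) => {
  const response = await fetch(url)
  return response.arrayBuffer()
})

const Viewer = ({ url }: { url: string }) => {
  // read() throws the pending promise (caught by an ancestor <Suspense>)
  // and synchronously returns the cached result on later renders
  const buffer = fileAsset.read(url)
  return <span>{`loaded ${buffer.byteLength} bytes`}</span>
}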

There is an interesting conversation that drcmda joined here:
useLoader vs Model · Issue #87 · brianzinn/react-babylonjs (github.com)

let me know what you think.

Thanks for the exhaustive answer!

Yep, all my models will thankfully be in .glb, so I can use the binary response and base64 encode it to then load it… I mean, I wish I could do that, because when I tried I had all sorts of issues.

What I did was use useAsset to download the file as an arrayBuffer, base64 encode it, and then use the vanilla SceneLoader.Append(), passing it the base64-encoded glb. The problem is it never liked it; it only likes simple models like the one they have in the example, which has no materials and is basically super simple.

For reference: https://hirex3dmodels.s3.eu-west-2.amazonaws.com/game/models/weapons/W_RACE_2_SWORD_2H_A_1.glb that’s the file.

My options:

  1. Loading medium-complexity assets as base64 in that way apparently isn’t possible? (I might be wrong - I hope so, as I think in my use case that’s my only chance?)
  2. I can’t use the vanilla react-babylonjs loader either, as it has the caching issue.

Saving network calls at this rate would be a luxury for later ahah; I am aware the files are at least cached by the browser, as you said, so that’s a plus.

I guess what you said above:

> If you just put in a SceneLoader on your own with a useEffect that would work really well to get around your current problem. If you are using the fallback on Suspense then you will need the cache invalidation that I mention above.

Would that involve the base64 method too, which seems to be broken (please, someone correct me if I am wrong), right?

@MelMacaluso I created a Playground (PG) for you:
load GLB | Babylon.js Playground (babylonjs.com)

I can also fix the scene loader for your scenario. I haven’t done a release in a while - I am halfway through redoing the documentation site! I’ve been a bit swamped lately, but let me know where you land on this.

edit: I would also be pretty happy to add this as an opt-in mechanism. It fits the React cache/Suspense model properly, whereas babylonjs breaks it entirely, as you saw. This will also properly allow preloading and moving assets across Engines - unfortunately only for single-file assets, but I think that’s a good start.

That would be amazing. I will give it a try adapting your playground (I was so close lol) and see where I end up, and will post the “solution” afterward. (Feeling optimistic ahaha)

Ok, wow that was a little fun all-nighter challenge (it wasn’t fun ahah).

What I did to solve my “avoid extra network calls if the model has already been downloaded” + “cache them” problem was (roughly):

  1. Create a util function that downloads the .glb model as a blob, encodes it to base64 and saves it as a cached asset (using the use-asset library).
  2. Call that util function whenever the page requests certain models to be displayed (modelName being the prop).
  3. Get the asset with asset.get('modelName') and append it into the Babylon scene with SceneLoader.Append (see the sketch after this list).
  4. ???
  5. Profit! (cached 3D assets / no extra network calls)
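A rough sketch of steps 1-3 under stated assumptions: ROOT_URL, modelAsset and appendCachedModel are made-up names, and use-asset’s read() stands in for the asset.get() mentioned above.

import { createAsset } from 'use-asset'
import { Scene, SceneLoader } from '@babylonjs/core'
import '@babylonjs/loaders/glTF'

const ROOT_URL = 'https://example-bucket.s3.amazonaws.com/models' // placeholder

// 1. download the .glb as a blob and base64 encode it into a data URL;
//    use-asset caches the promise per modelName, so each model is fetched
//    from the network at most once
const modelAsset = createAsset(async (modelName: string) => {
  const response = await fetch(`${ROOT_URL}/${modelName}.glb`)
  const blob = await response.blob()
  return new Promise<string>((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = () => resolve(reader.result as string) // "data:...;base64,..."
    reader.onerror = reject
    reader.readAsDataURL(blob)
  })
})

// 3. read the cached data URL and append it to the scene; the trailing
//    '.glb' tells SceneLoader which plugin to use for a data: URL
const appendCachedModel = (modelName: string, scene: Scene) => {
  const dataUrl = modelAsset.read(modelName)
  SceneLoader.Append('', dataUrl, scene, undefined, undefined, undefined, '.glb')
}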

VERY IMPORTANT NOTES:

  1. Must use Babylon 5.0, otherwise the base64 import is not processed properly.
  2. I sadly had to strip out all react-babylonjs-related code, as on Chrome for Android it had problems like “cannot initialise index buffer” and the like. Thank god stripping it out sorted it, otherwise I would be helpless…

BUT:

I know this is nuts, over-engineering, and probably off-topic, but in an ideal world…

I’d like to avoid having 9 scenes and instead have the same canvas duplicated/cloned when the final model is the same (because I GUESS that will save some computing power? Maybe I am wrong on this one - no idea how Babylon works/optimizes underneath).

Yay - glad you got it working! Can you share your util code (is it just a useEffect with useAsset)? I’d like to add an opt-in mechanism for useSceneLoader and Model, as this will allow a lot of cool functionality like pre-loading, loading models across engines, etc. With Suspense it should be able to fall back for everything, and this will work without updating the model loaders! The main limitation I see is that it needs to be single-file assets to load smoothly.

I’m working on a new render loop that uses an observer to skip rendering a scene when it is not visible (an opt-in mechanism). I think it’s a good option for rendering multiple model viewers. If your scenes don’t all fit in the browser window, that could save a lot of computing power. It’s still a work in progress - it’s on the master branch now, but not on NPM. You can check react-babylonjs/src/Engine.tsx, but this is the snippet:

import { MutableRefObject, useEffect } from 'react'
import { Nullable } from '@babylonjs/core'

const useCanvasObserver = (
  canvasRef: MutableRefObject<Nullable<HTMLCanvasElement>>,
  shouldRenderRef: MutableRefObject<boolean>,
  threshold: number = 0
) => {
  const callbackFn: IntersectionObserverCallback = (entries) => {
    const [entry] = entries
    shouldRenderRef.current = entry.isIntersecting
    console.log('should render updating:', shouldRenderRef.current)
  }

  useEffect(() => {
    if (canvasRef.current === null) {
      return
    }
    const observer = new IntersectionObserver(callbackFn, { threshold })
    observer.observe(canvasRef.current)

    return () => {
      if (canvasRef.current) {
        observer.unobserve(canvasRef.current)
      }
    }
  }, [canvasRef, threshold])
}
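Presumably (this bit isn’t in the snippet, so it’s an assumption about the rest of Engine.tsx) the render loop then just checks the ref:

import { MutableRefObject } from 'react'
import { Engine, Scene } from '@babylonjs/core'

// assumption, not the actual Engine.tsx code: the ref gates scene.render()
const startRenderLoop = (engine: Engine, scene: Scene, shouldRenderRef: MutableRefObject<boolean>) => {
  engine.runRenderLoop(() => {
    if (shouldRenderRef.current) {
      scene.render()
    }
  })
}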

Feel free to bring any of that into your project.

It is working brilliantly and is doing all the savings/caching as intended. What I would like to do now is mix it with an AssetsManager (or similar) so that I can load/cache all the models in the background while the user is viewing the current page, so that when the next page comes everything is ready to be seen.

Do you think that’s achievable?

PS: when I finish my 9-5 shift I will post the rough full solution - indeed, sure!
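For what it’s worth, with the use-asset approach sketched earlier, background preloading might be as simple as calling preload for the next page’s models (nextPageModelNames and usePreloadNextPage are hypothetical; modelAsset is the helper from the earlier sketch):

import { useEffect } from 'react'

// warm the cache while the current page is on screen; preload() kicks off
// and caches the promise without suspending any component
const usePreloadNextPage = (nextPageModelNames: string[]) => {
  useEffect(() => {
    nextPageModelNames.forEach((name) => modelAsset.preload(name))
  }, [nextPageModelNames])
}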

Ah yeah, that’s actually something I didn’t think about: “lazy” loading the scenes, rendering them or not according to the currently visible viewport.

As long as the toggle to make them viewable when you scroll back is quick enough to not be noticeable, that’d be a very good optimisation indeed.

Maybe with a little offset, to make it even less noticeable.

Maybe it’s complete madness, but to achieve that we could potentially traverse scene.meshes and make them visible or not. (Not sure how babylonjs works in depth, but I assume it uses computing power according to the number of verts displayed?)

So something like:

const toggleSceneMeshes = (meshes, show) => meshes.forEach(mesh => mesh.isVisible = show)

That triggers according to the visible viewport, either with the intersection observer or whatever.

Hiding all the meshes certainly would help performance! What the IntersectionObserver is doing, though (in react-babylonjs, as an opt-in mechanism), is: if part of the canvas is visible (threshold defaults to 0, but you can set it higher), the render loop will render the scene - otherwise it will do nothing. For a model viewer that has 9 scenes + Engines on a single page, if only 3 are visible you should be able to notice a big difference, and you don’t need to write any extra code to hide meshes. I am doing some testing of the above on my new documentation site.

Amazing! I will deffo reintroduce react-babylonjs as soon as we have something stable regarding the reason this thread was created (I really, passionately dislike imperative coding, but I had no choice - a pressing deadline on our end forced me to go this way lol), and hopefully I will take advantage of that opt-in optimization too!