Hi, I’m trying to load a large .obj file (1.3GB) using https://sandbox.babylonjs.com and it fails without any error message.
The file is valid, as it loads correctly in ParaView. Below are the file details:
I reviewed the OBJFileLoader code, and it indeed receives a single string containing all of the file’s data. This will never work for the large files we want to load.
We’ll try to code an alternative loader that creates the different instances as the file is read, using a ReadableStream.
The most challenging part will be splitting SolidParser.parse into several functions, as it expects to receive the complete array at once, whereas we will pass it chunks instead.
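Here is a minimal sketch of the chunked reading I have in mind, using only standard `fetch`/`TextDecoder` APIs; `parseLines` is a hypothetical stand-in for the split-up SolidParser logic:

```typescript
// Minimal sketch: stream an OBJ file and hand complete lines to the parser.
// `parseLines` is a hypothetical placeholder for the split-up SolidParser logic.
async function streamObj(url: string, parseLines: (lines: string[]) => void): Promise<void> {
    const response = await fetch(url);
    if (!response.body) {
        throw new Error("ReadableStream not supported for " + url);
    }
    const reader = response.body.getReader();
    const decoder = new TextDecoder("utf-8");
    let remainder = ""; // partial line carried over between chunks

    for (;;) {
        const { done, value } = await reader.read();
        if (done) {
            break;
        }
        // stream: true keeps multi-byte sequences split across chunk boundaries intact
        const text = remainder + decoder.decode(value, { stream: true });
        const lines = text.split("\n");
        // The last element may be an incomplete line; keep it for the next chunk.
        remainder = lines.pop() ?? "";
        parseLines(lines);
    }
    // Flush whatever is left after the final chunk.
    const tail = remainder + decoder.decode();
    if (tail.length > 0) {
        parseLines([tail]);
    }
}
```

The parser would also need to keep its running state (positions, normals, the mesh currently being built) between `parseLines` calls instead of resetting on every invocation, which is exactly why SolidParser.parse has to be split up.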
No, that’s not an option unfortunately, as we want to be able to modify the original file.
I managed to load a 2.7GB .obj file by extracting the Babylon code into my own code and applying a buffered reader (only in Firefox, as Chrome throws an OutOfMemory exception; it works fine for smaller files, ~800MB). It also works for .obj.gz files. I’m now applying the changes in a branch; once it’s ready, a pull request will be created.
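For the .obj.gz case, the gzip decoding can happen on the fly with a DecompressionStream before the bytes reach the line splitter. A sketch, assuming the server delivers the raw gzip bytes (i.e. no transparent Content-Encoding handling by fetch):

```typescript
// Sketch: open a byte stream for either a plain .obj or a .obj.gz file,
// decompressing gzip on the fly so the same line-based reader can be reused.
// Assumes the server returns the raw gzip bytes for .gz files.
async function openObjStream(url: string): Promise<ReadableStream<Uint8Array>> {
    const response = await fetch(url);
    if (!response.body) {
        throw new Error("No response body for " + url);
    }
    return url.endsWith(".gz")
        ? response.body.pipeThrough(new DecompressionStream("gzip"))
        : response.body;
}
```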
Currently, I’m migrating the code to the Babylon.js ObjReader class. However, it won’t be used as the default for the plugin mechanism, because the file reading (and, as far as I understand, all file handling) is done higher up in the stack, not directly within the class. As a result, the class only receives one massive string, which makes creating the objects incrementally impossible.
This is why I’m implementing a separate method. Hopefully the Babylon.js architects can review it and decide on the best approach; making significant changes to the global fetch/read-file mechanism is too large a development effort for me to tackle alone.
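To illustrate what calling that separate method could look like, here is a sketch; every name below is hypothetical and does not exist in Babylon.js today, and `streamObj` is the chunked reader from my earlier post:

```typescript
import { Scene } from "@babylonjs/core";

// streamObj is the chunked line reader sketched in my earlier post.
declare function streamObj(url: string, parseLines: (lines: string[]) => void): Promise<void>;

// Hypothetical stand-in for a SolidParser split into feed/finish phases
// that keeps its state between chunks. Illustrative only.
declare class StreamingSolidParser {
    constructor(scene: Scene);
    feedLines(lines: string[]): void; // creates meshes as data becomes available
    finish(): void;                   // flushes the last in-progress mesh
}

// Hypothetical separate entry point, bypassing the plugin mechanism's
// read-whole-file-as-string path.
async function importLargeObj(scene: Scene, url: string): Promise<void> {
    const parser = new StreamingSolidParser(scene);
    await streamObj(url, (lines) => parser.feedLines(lines));
    parser.finish();
}
```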
Sure, but today I need to run the canvas on a large machine, calling headless Chromium via Puppeteer with an increased memory size. Removing the string limit from Babylon.js (not passing the data as one string) would let DevOps define the correct boundaries. In any case, at the moment it does not work with larger files, and the user gets no feedback when the file is too large.
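For reference, this is roughly what that workaround looks like; the flag and the 8192MB value are our own stopgap, not anything Babylon.js provides, and should be sized to the machine running the job:

```typescript
import puppeteer from "puppeteer";

// Stopgap: raise V8's old-space limit in headless Chromium so the
// whole-file-as-string path survives larger OBJ files.
// 8192MB is an example value, not a recommendation.
async function launchWithBigHeap() {
    return puppeteer.launch({
        headless: true,
        args: ["--js-flags=--max-old-space-size=8192"],
    });
}
```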