Failing to load a large OBJ

Hi, I’m trying to load a large OBJ file (1.3GB) using https://sandbox.babylonjs.com and it fails without any error message.
The file is valid, as it loads correctly in ParaView. Below are the file details:

[file details screenshot]
Am I doing something wrong?
File can be found here: Ferrari Roma Spider.obj - Google Drive
Regards

1.3GB will go above the 2GB of RAM we are constrained to by the browser when unfolding the data: the text has to be decoded and then expanded into vertex and index arrays, so peak memory usage is a multiple of the file size.

One approach could be to compress the GLB with KTX and Draco.

It seems too big for a string; any luck parsing it in binary, or with TextDecoderStream? (V8 caps a single string at roughly 2^29 characters, about half a gigabyte, so a 1.3GB file cannot even fit in one JavaScript string in Chrome.)

I reviewed the OBJFileLoader code, and it indeed receives a string containing all the file’s data. This will never work for the large files we want to load.

We’ll try to code an alternative loader that creates the different instances as the file is read using a ReadableStream.
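A minimal sketch of that streaming read, assuming the file comes from a fetch response; `handleChunk` is an illustrative placeholder, not existing loader code:

```ts
// Stream the OBJ as decoded text chunks instead of building one giant string.
// Assumes a browser with ReadableStream and TextDecoderStream support.
async function streamObj(url: string, handleChunk: (text: string) => void): Promise<void> {
  const response = await fetch(url);
  if (!response.body) throw new Error("No response body to stream");
  // Decode bytes to text on the fly; the full file is never held in memory.
  const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    handleChunk(value); // a string chunk, not the whole file
  }
}
```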

The most challenging part will be splitting SolidParser.parse into several functions, as it expects to receive a complete array, whereas we will pass it chunks instead.
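The chunk buffering itself is the easier half. A sketch, with `parseLine` standing in for a hypothetical per-line slice of SolidParser:

```ts
// Split incoming text chunks into complete OBJ lines, carrying the partial
// trailing line over to the next chunk.
class LineChunker {
  private tail = "";

  push(chunk: string, parseLine: (line: string) => void): void {
    const lines = (this.tail + chunk).split("\n");
    this.tail = lines.pop() ?? ""; // the last piece may be an incomplete line
    for (const line of lines) parseLine(line.trimEnd()); // trimEnd drops any "\r"
  }

  flush(parseLine: (line: string) => void): void {
    if (this.tail.length > 0) parseLine(this.tail.trimEnd());
    this.tail = "";
  }
}
```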

I did a few pull requests on the OBJFileLoader, so I got quite familiar with the code. I’m pretty sure this will be daaamn challenging!

Wouldn’t it be a better option to split the meshes from the one big OBJ file into multiple files and load them piece by piece?

If it’s an option to change the mesh, it should be much faster to use the gltfpack tool to convert the OBJ to a GLB.
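For reference, a typical invocation might look like the line below; note that gltfpack compresses geometry with meshoptimizer rather than Draco, and `-tc` produces KTX2 textures:

```sh
# -cc: aggressive mesh compression; -tc: compress textures to KTX2
gltfpack -i model.obj -o model.glb -cc -tc
```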


No, it’s not an option unfortunately, as we want to be able to modify the original file.
I managed to load a 2.7GB OBJ file by extracting the Babylon code into my own code and applying a buffered reader (only in Firefox, as Chrome throws an out-of-memory exception; it works fine there for smaller files, around 800MB). It also works for .obj.gz files. I’m now applying the changes in a branch; once it’s ready, a pull request will be created.
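For the .obj.gz case, the same streaming pipeline works with a gzip stage in front. A sketch assuming DecompressionStream support (available in current Chrome and Firefox):

```ts
// Same streaming read, with gzip decompression before text decoding.
async function streamGzippedObj(url: string, handleChunk: (text: string) => void): Promise<void> {
  const response = await fetch(url);
  if (!response.body) throw new Error("No response body to stream");
  const reader = response.body
    .pipeThrough(new DecompressionStream("gzip"))
    .pipeThrough(new TextDecoderStream())
    .getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    handleChunk(value);
  }
}
```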


Hi, any update on this? Does your solution work in Chrome now?

No, it’s a Chrome memory limitation. You can find plenty of discussions about this issue online.
It works fine in Firefox, though.


Currently, I’m migrating the code to the Babylon.js ObjReader class. However, it won’t be used as the default for the plugin mechanism because the file reading (and all file handling, as far as I understand) is done higher up in the stack, not directly within the class. As a result, we only retrieve a massive string, which makes object creation impossible.

This is why I’m implementing a separate method. Hopefully, Babylon.js expert architects can review this and decide on the best approach. However, making significant changes to the global fetch/read file mechanism is too large a development effort for me to tackle alone.
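To make the idea concrete, here is a sketch of what that separate method could look like. The name `importLargeObjAsync` and its signature are hypothetical, not existing Babylon.js API, and `LineChunker` is the buffering sketch from earlier in the thread:

```ts
import { Mesh, Scene } from "@babylonjs/core";

// Hypothetical streaming entry point: bypasses the plugin mechanism's
// read-everything-into-one-string step and parses lines as bytes arrive.
async function importLargeObjAsync(
  scene: Scene, // the scene the incrementally built meshes would be added to
  stream: ReadableStream<Uint8Array>
): Promise<Mesh[]> {
  const meshes: Mesh[] = [];
  const chunker = new LineChunker();
  const onLine = (line: string) => {
    // feed each complete line to an incremental parser that pushes
    // finished meshes into `meshes` whenever an object ends
  };
  const reader = stream.pipeThrough(new TextDecoderStream()).getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunker.push(value, onLine);
  }
  chunker.flush(onLine);
  return meshes;
}
```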


Sorry, I missed it. What should we review? Can you share more?

My pull request, once I’ve finished the code, so you can see whether you want to integrate this OBJ reading as the default.


Sure, but I need to run the canvas on a large machine, calling headless Chromium via Puppeteer and thus increasing the memory size. Removing the string limit from Babylon.js (not passing the data as a string) would let DevOps define the correct boundaries. In any case, at the moment it does not work with larger files, and the user gets no feedback when the file is too large.
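For the Puppeteer side, one way to raise the V8 heap in headless Chromium, assuming the build forwards `--js-flags` to V8:

```ts
import puppeteer from "puppeteer";

// Launch headless Chromium with a larger V8 heap (value is in MB, here 8GB).
const browser = await puppeteer.launch({
  args: ["--js-flags=--max-old-space-size=8192"],
});
const page = await browser.newPage();
await page.goto("https://sandbox.babylonjs.com");
```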

Here is the pull request.