Failing to load a large OBJ

Hi, I’m trying to load a large OBJ file (1.3GB) using https://sandbox.babylonjs.com and it fails without any error message.
The file is valid, as it loads correctly in ParaView. Below are the file details:


Am I doing something wrong?
File can be found here: Ferrari Roma Spider.obj - Google Drive
Regards

A 1.3GB file will go above the 2GB of RAM the browser constrains us to once the data is unfolded.

One approach could be to convert to glb and compress it with KTX and Draco.

It seems too big for a string. Any luck with parsing it in binary, or with TextDecoderStream?
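For what it’s worth, `TextDecoder` (which `TextDecoderStream` wraps) can decode the bytes chunk by chunk with `{ stream: true }`, so the full file never has to exist as one string. A minimal sketch — the chunks here are in-memory stand-ins for reads from the file, and in a real loader each decoded piece would be handed straight to the parser instead of concatenated:

```javascript
// Decode a large byte stream incrementally instead of building one
// multi-GB string. `{ stream: true }` buffers partial multi-byte UTF-8
// sequences between calls, so chunk boundaries are safe.
const decoder = new TextDecoder("utf-8");

// Stand-in chunks; note the OBJ line is split across the two chunks.
const chunks = [
  new TextEncoder().encode("v 1 "),
  new TextEncoder().encode("2 3\n"),
];

let decoded = "";
for (const chunk of chunks) {
  // In a streaming loader, pass each piece to the parser here
  // instead of accumulating it.
  decoded += decoder.decode(chunk, { stream: true });
}
decoded += decoder.decode(); // flush any buffered trailing bytes
```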

After reviewing the OBJFileLoader code, it indeed receives a string containing all the file’s data. This will never work for the large files we want to load.

We’ll try to code an alternative loader that creates the different instances as the file is read using a ReadableStream.

The most challenging part will be splitting SolidParser.parse into several functions, as it expects to receive a complete array, whereas we will pass it chunks instead.
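One way to sketch that split (our own illustration, not the actual SolidParser API): keep a rolling remainder between chunks so the parsing code only ever sees complete lines, whatever the chunk boundaries happen to be.

```javascript
// Feed decoded text chunks in; get complete OBJ lines out.
// The last, possibly incomplete line of each chunk is kept in
// `remainder` until the next chunk (or flush) completes it.
class LineSplitter {
  constructor(onLine) {
    this.onLine = onLine;
    this.remainder = "";
  }
  push(chunk) {
    const parts = (this.remainder + chunk).split("\n");
    this.remainder = parts.pop(); // incomplete tail, if any
    for (const line of parts) this.onLine(line);
  }
  flush() {
    if (this.remainder.length > 0) this.onLine(this.remainder);
    this.remainder = "";
  }
}

// Usage: collect vertex positions as lines arrive,
// even when a line is cut in half between chunks.
const vertices = [];
const splitter = new LineSplitter((line) => {
  if (line.startsWith("v ")) {
    vertices.push(line.slice(2).trim().split(/\s+/).map(Number));
  }
});
splitter.push("v 0 0 0\nv 1 ");
splitter.push("0 0\nv 0 1 0\n");
splitter.flush();
// vertices is now [[0,0,0],[1,0,0],[0,1,0]]
```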

I did a few pull requests on the OBJFileLoader, so I’m quite familiar with the code. I’m pretty sure this will be damn challenging!

Wouldn’t it be a better option to split the mesh/meshes from the one big OBJ file into multiple meshes/files and load them piece by piece?

If changing the asset is an option, it should be much faster to use the gltfpack tool to convert the OBJ to a glb.
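Something along these lines should work (paths are placeholders; check the gltfpack docs for the flags that fit your asset):

```shell
# Convert the OBJ to a compressed GLB with gltfpack (from meshoptimizer).
# -cc enables aggressive meshopt geometry compression,
# -tc re-encodes textures as KTX2/BasisU.
gltfpack -i model.obj -o model.glb -cc -tc
```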


No, it’s not an option unfortunately, as we want to be able to modify the original file.
I managed to load a 2.7GB OBJ file by extracting the Babylon code into my own code and applying a buffered reader (only in Firefox, as Chrome throws an out-of-memory exception; Chrome works fine for smaller files, ~800MB). It also works for .obj.gz files. I’m now applying the changes in a branch; once it’s ready, a pull request will be created.
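For the .obj.gz case, the browser’s `DecompressionStream` can sit in the same pipeline, so decompression is also incremental. A sketch under the same assumptions (`readGzippedText` and `onChunk` are our own names, not Babylon.js API):

```javascript
// Stream a gzipped OBJ: bytes -> gunzip -> UTF-8 text, chunk by chunk,
// never holding the whole decompressed file in memory at once.
// In a real loader, `compressedStream` would be `response.body` from fetch().
async function readGzippedText(compressedStream, onChunk) {
  const textStream = compressedStream
    .pipeThrough(new DecompressionStream("gzip"))
    .pipeThrough(new TextDecoderStream());
  // Web ReadableStreams are async-iterable in Firefox and Node;
  // elsewhere, fall back to textStream.getReader() and a read() loop.
  for await (const chunk of textStream) {
    onChunk(chunk);
  }
}
```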
