importMesh with HTTP Range Requests Question

I have a GLB file that is 520 MB. I loaded it with the importMesh method and enabled HTTP Range Requests. It worked fine, but I was confused by the total size of the downloaded data, which clearly exceeded the size of the GLB file itself. Can someone explain this?
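For reference, here is roughly how I am enabling range requests (a minimal sketch; the URL and file name are placeholders for my real asset, and `scene` is assumed to already exist as in the playground):

```js
// Enable HTTP Range Requests on the glTF loader before importing the GLB.
BABYLON.SceneLoader.OnPluginActivatedObservable.addOnce(function (loader) {
    if (loader.name === "gltf") {
        // Ask the loader to fetch the GLB in byte-range chunks instead of one download.
        loader.useRangeRequests = true;
    }
});

BABYLON.SceneLoader.ImportMesh("", "https://example.com/models/", "model.glb", scene,
    function (meshes) {
        console.log("loaded", meshes.length, "meshes");
    });
```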




oh. The numbers look quite odd TBH, but I might not be reading them correctly (because of the lack of separators :wink: )

Would you be able to reproduce exactly what you are doing (even with a smaller file) on the playground? That way I can be sure we are both checking the same behavior.

Ok, I’ll create a playground later

@RaananW I created a playground that reproduces the issue I ran into.

I think the ‘two’ model is broken and looks pulverized; is there another way you could attach it?

I merged the ‘two’ model; this is the latest playground. What mode are you using to view it, and why does it look a bit like a point cloud?

scene.forceWireframe = true;

or

scene.forcePointsCloud = true;

What I meant was that ‘two’ has too many polygons, as if it were scanned data. If that can’t be reduced, optimizations such as compressing the mesh with gltfpack (50 MB → 15 MB) or Draco will be needed. 50 MB is too much.
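If you go the Draco route, Babylon’s glTF loader decodes KHR_draco_mesh_compression for you; you would only need something like the sketch below if you want to host the decoder files yourself (the paths are placeholders, by default they come from the Babylon CDN):

```js
// Optional: point the Draco decoder at self-hosted files (placeholder paths).
BABYLON.DracoCompression.Configuration = {
    decoder: {
        wasmUrl: "/draco/draco_wasm_wrapper_gltf.js",
        wasmBinaryUrl: "/draco/draco_decoder_gltf.wasm",
        fallbackUrl: "/draco/draco_decoder_gltf.js"
    }
};
```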

Thanks. Are you saying it’s because there are too many faces? Since the model used in my actual project is 500 MB, I don’t want to optimize this test model. I think that when doing range loading, it should only download the resources it actually uses; that is, the sum of the range downloads should not exceed the size of the entire file.

I made a change to the ‘two’ model, and the assets for this example now look normal in size. Even if the number of faces is too high, why would the total size downloaded via range requests end up far larger than the complete file?


There’s probably a combination of reasons

The size of the model is increased by having many faces, materials, textures, high-resolution textures, UV mapping levels, etc.

First of all, a 500 MB model is not just 500 MB at the download level; it explodes in terms of instantaneous resource usage as it is parsed and decompressed.
So the rule of thumb is to spread this out by loading in chunks, compressing efficiently, etc. As an example, a 200 MB model would open fine on a high-end desktop, but on my mobile it would crash the browser.

Range requests are intended to split the download of a large model file into several smaller downloads. My problem is that when I don’t use range downloads, I only need to download a 50 MB file, but when I use range downloads, the sum of the multiple ranges I have to download is much more than 50 MB.


What I would like to discuss is why the total amount of data downloaded over the network is much larger after enabling range requests.

ImportMesh is a process that directly fetches a URL, directLoads it, reads it into a binary ArrayBuffer, and immediately runs _loadFiles on it.

If you set useRangeRequests, the loader calls webRequest.setRequestHeader with the range value; after the request it puts the returned bytes into a Uint8Array, runs _unpackBinaryAsync on it, and aborts the fileRequest. When requesting a range, though, it only gets the raw bytes packed inside that range, not the compressed model as it would be served for a full download, and this seems to cause a difference in size from the actual compressed model. The more faces or texture levels there are (the size factors mentioned above), the higher the compression rate, so the difference in size grows proportionally.
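As I understand it, each ranged fetch is just an ordinary HTTP request with a Range header, roughly like this sketch (not the loader’s actual code, just to illustrate the mechanism):

```js
// Rough sketch of a byte-range download; the server replies 206 Partial Content
// with only the requested window of the file.
function loadRange(url, start, end) {
    return fetch(url, { headers: { Range: "bytes=" + start + "-" + end } })
        .then(function (response) {
            // 206 = partial content; a plain 200 would mean the server ignored the Range header.
            if (response.status !== 206 && response.status !== 200) {
                throw new Error("Unexpected status " + response.status);
            }
            return response.arrayBuffer();
        })
        .then(function (buffer) {
            return new Uint8Array(buffer);
        });
}
```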

I only understand the process at a high level, so I’m sure @bghgary knows the overall logic behind it; I’ll wait for an answer from him as well.

Thank you very much for your reply. I will also be waiting for his answer.

The main issue is that overlapping range requests will result in the same data being downloaded multiple times. If you look at the network tab in the web inspector for the PG above, you will see that the following ranges are being requested:

0-19
0-19
0-19

This is the GLB header. I’m not sure why it’s being downloaded 3 times, but maybe it’s the glTF validator.

20-96571
20-96571
20-96571

This is the glTF JSON payload. Again, not sure why it’s done 3 times.

96572-10112083
10112084-52627207
14125732-52611749

These are binary buffers (geometry, textures, etc.) loaded by the glTF loader as range requests. The last two are overlapping by a huge amount and thus it will redownload a bunch of data that isn’t necessary. The GLB has to be laid out in a way such that overlap does not occur for range requests to be efficient. I didn’t dig into how this GLB is laid out in the file, but this is probably what is wrong.
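A quick back-of-the-envelope check of those three buffer ranges (just summing the byte counts listed above) shows the overlap:

```js
// Inclusive byte ranges of the three buffer requests listed above.
const ranges = [
    [96572, 10112083],
    [10112084, 52627207],
    [14125732, 52611749]
];

const totalRequested = ranges.reduce((sum, [start, end]) => sum + (end - start + 1), 0);
const fileSpan = Math.max(...ranges.map(r => r[1])) - Math.min(...ranges.map(r => r[0])) + 1;

console.log("bytes requested:", totalRequested); // ~91 MB
console.log("bytes actually covered:", fileSpan); // ~52.5 MB
// The third request re-downloads the ~38.5 MB region 14125732-52611749 that the
// second request already covered.
```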

See the caveats in the documentation.


Thank you for your answer. I will reply again when I find the problem.
