gltfpack: Substantially reduce memory footprint #147
Merged
Conversation
This will let us process larger scenes with gltfpack.js
We now reserve space for mesh and stream data and parse the data directly into the target vectors. This avoids the cost of the copy and reduces peak memory consumption for mesh-heavy scenes; on the Thai Buddha model this reduces peak RSS from 1390 MB to 1198 MB.
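A minimal, self-contained sketch of the idea, assuming a simple Attr value type (the real gltfpack parsing goes through cgltf accessors and is more involved): the destination vector is reserved once and filled in place, so no temporary vector is built and then copied over.

```cpp
#include <stddef.h>
#include <stdio.h>

#include <vector>

struct Attr { float f[4]; };

// Parse `count` 4-component attributes from src directly into dst.
static void parseStream(const float* src, size_t count, std::vector<Attr>& dst)
{
	// Reserving up front means the vector never reallocates mid-parse, so the
	// peak allocation is a single buffer instead of old+new during growth.
	dst.reserve(dst.size() + count);

	for (size_t i = 0; i < count; ++i)
	{
		Attr a = {{src[i * 4 + 0], src[i * 4 + 1], src[i * 4 + 2], src[i * 4 + 3]}};
		dst.push_back(a);
	}
}

int main()
{
	float positions[8] = {0, 0, 0, 1, 1, 1, 1, 1};
	std::vector<Attr> data;
	parseStream(positions, 2, data);
	printf("parsed %d attributes\n", int(data.size()));
}
```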
We now reserve space for animation data before parsing it, similarly to mesh data. Also clean up mesh data parsing a bit to follow a similar push_back + back structure for streams.
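A sketch of the push_back + back pattern this refers to, under the assumption that a stream owns a std::vector of attribute values (the actual gltfpack Stream type may differ): the element is appended first and then filled through back(), rather than built locally and copied into the container.

```cpp
#include <stddef.h>

#include <vector>

struct Attr { float f[4]; };

struct Stream
{
	int type; // placeholder for the attribute/animation path identifier
	std::vector<Attr> data;
};

static void addStream(std::vector<Stream>& streams, int type, size_t count)
{
	// Append an empty stream and work on it in place via back()...
	streams.push_back(Stream());

	Stream& s = streams.back();
	s.type = type;

	// ...and size its data before the per-element parse loop fills it.
	s.data.reserve(count);
}
```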
After we parse mesh and animation data, we can usually free the buffer data. The exceptions are buffers referenced by images or skins, since we don't copy these out. We also can't free the GLB buffers yet, because that requires freeing the JSON data, which in turn requires fixing extras[] access. This reduces max RSS when processing the Thai Buddha scene from 1198 MB to 893 MB.
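A rough illustration of the "are the buffers still needed?" check, using the cgltf data model in which images can be embedded as buffer views and skins reference inverse bind matrix accessors; the function name is hypothetical and the real gltfpack logic may check more than this.

```cpp
#include "cgltf.h"

// Returns true if buffer contents must stay alive after mesh/animation parsing.
static bool buffersStillNeeded(const cgltf_data* data)
{
	// Images embedded as buffer views are never copied out, so they pin the buffers...
	for (size_t i = 0; i < data->images_count; ++i)
		if (data->images[i].buffer_view)
			return true;

	// ...and so do skins, whose inverse bind matrices are read later.
	for (size_t i = 0; i < data->skins_count; ++i)
		if (data->skins[i].inverse_bind_matrices)
			return true;

	return false;
}
```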
Instead of always generating the fallback buffer, we only do this when the fallback is actually requested. This saves the time spent copying the uncompressed data, and means we don't need to create an uncompressed fallback in memory just to throw it away. This reduces max RSS when processing Thai Buddha from 893 MB to 822 MB.
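A sketch of the lazy fallback idea with hypothetical names (the Settings fields and the function are illustrative, not the gltfpack API): the uncompressed copy is only appended when a fallback buffer was actually requested.

```cpp
#include <string>

struct Settings
{
	bool compress;         // write compressed buffer views
	bool compressFallback; // also emit an uncompressed fallback buffer
};

static void appendBufferData(std::string& bin, std::string& fallback,
                             const std::string& compressed, const std::string& raw,
                             const Settings& settings)
{
	bin += compressed;

	// Previously the uncompressed copy was always produced and later discarded;
	// building it only on request avoids that transient peak allocation.
	if (settings.compress && settings.compressFallback)
		fallback += raw;
}
```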
This will make it slightly easier for us to implement extras evacuation.
Instead of reading the extras data directly out of data->json, we copy the data we need into a std::string during parsing. This allows us to reclaim the memory used by the JSON data early, something that will happen in a separate change. Ideally we wouldn't need to do this since cgltf could do all of this itself, but this is a conservative change that doesn't require upstream work.
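A sketch of that extraction, assuming the cgltf layout from around this time in which cgltf_extras stores start/end offsets into data->json (newer cgltf versions expose extras differently):

```cpp
#include <string>

#include "cgltf.h"

// Copy the raw extras JSON slice into an owned string so the parsed scene no
// longer depends on data->json, allowing the JSON buffer to be released early.
static std::string extractExtras(const cgltf_data* data, const cgltf_extras& extras)
{
	return std::string(data->json + extras.start_offset, data->json + extras.end_offset);
}
```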
Since we now extract extras into a separate string, we can free .gltf file contents immediately after parsing them. If we're parsing a .glb file, we need to hold onto the buffer data until we've parsed it fully, but once we have, and if there's no skin/image data inside, we can free it as well. This reduces the max RSS for stadium.glb from 100 MB to 75 MB.
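A sketch of the lifetime rule this describes, assuming cgltf semantics where data->bin (for .glb input) points into the original file contents; hasSkinOrImageData is a hypothetical flag standing in for the check above (the follow-up commit below deals with the dangling buffer pointer this can leave behind).

```cpp
#include <string>

#include "cgltf.h"

// Called after extras have been copied out and mesh/animation data parsed.
static void maybeReleaseFileContents(const cgltf_data* data, std::string& fileContents,
                                     bool hasSkinOrImageData)
{
	// For .glb the binary chunk lives inside fileContents, so any skin/image
	// data that still references buffers pins the whole file in memory.
	if (data->bin && hasSkinOrImageData)
		return;

	// Swap with an empty string to actually release the capacity, not just clear it.
	std::string().swap(fileContents);
}
```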
Checking data->bin is more direct and cleaner. Also, we can use `s` directly instead of streams.back().
zeux added a commit that referenced this pull request on May 27, 2020
We would free the buffer that's backing the binary data since we don't need it anymore, but the buffer object would still point to the deallocated memory, causing a crash when cgltf_free frees it later. This is a regression from #147; it was missed because the CI didn't test gltfpack on .glb files - well, now it does.
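One way to avoid the hazard described, sketched here with cgltf types (not necessarily the exact fix that landed): before releasing the memory behind the .glb binary chunk, detach any buffer that still points into it so cgltf_free never touches freed memory.

```cpp
#include <stddef.h>

#include "cgltf.h"

// Detach buffers backed by the .glb binary chunk from the cgltf object graph;
// call this before the chunk's backing memory is released.
static void detachBinBuffers(cgltf_data* data)
{
	for (size_t i = 0; i < data->buffers_count; ++i)
		if (data->buffers[i].data == data->bin)
			data->buffers[i].data = NULL;
}
```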
This change makes gltfpack more memory efficient when processing large scenes through a combination of being more efficient with memory allocations and freeing excess memory early, as detailed in the individual commits above.
As a result, a JSON-heavy scene goes from 121 MB to 75 MB of RSS for processing (-38%), and a mesh-heavy scene goes from 1390 MB to 810 MB (-42%); so it looks like we now need ~40% less memory on average.
Additionally, this change switches gltfpack.js to 4GB builds, so that with recent v8 it can use the full 32-bit address space.
Fixes #140.