test: added benchmarks for nim ngc catalog #336
Closed
Description
This PR adds benchmark functions for exploring how we parse the NGC catalog response.
We were initially advised to set the page size to 1000. Since that seemed high, given that only 39 runtimes were available at the time, we set it to 100 and benchmarked the parsing process to understand the tradeoff better.
Jira: NVPE-80
Included in this PR:
make benchmarks
make nim_benchmark_documents
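For reference, here's a minimal sketch of what a page-size benchmark along these lines could look like in Go. It is illustrative only, not the code in this PR: the `runtimeEntry` type, the `parse` helpers, and the benchmark names are hypothetical, and the real catalog response shape is certainly richer.

```go
// nim_catalog_bench_test.go — hypothetical sketch, not the PR's actual code.
// Parses the same 1000 runtimes split into pages of different sizes,
// mirroring the 1-page-of-1000 vs 10-pages-of-100 comparison.
package catalog

import (
	"encoding/json"
	"fmt"
	"testing"
)

// runtimeEntry stands in for a single NIM runtime record in a catalog page.
type runtimeEntry struct {
	Name    string `json:"name"`
	Version string `json:"version"`
}

// makePage builds a synthetic JSON page with n runtime entries.
func makePage(n int) []byte {
	entries := make([]runtimeEntry, n)
	for i := range entries {
		entries[i] = runtimeEntry{
			Name:    fmt.Sprintf("runtime-%d", i),
			Version: "1.0.0",
		}
	}
	b, err := json.Marshal(entries)
	if err != nil {
		panic(err)
	}
	return b
}

// benchmarkParse measures parsing 1000 total runtimes delivered in
// pages of the given size.
func benchmarkParse(b *testing.B, pageSize int) {
	const total = 1000
	pages := make([][]byte, 0, total/pageSize)
	for i := 0; i < total/pageSize; i++ {
		pages = append(pages, makePage(pageSize))
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for _, p := range pages {
			var entries []runtimeEntry
			if err := json.Unmarshal(p, &entries); err != nil {
				b.Fatal(err)
			}
		}
	}
}

func BenchmarkParsePageSize100(b *testing.B)  { benchmarkParse(b, 100) }
func BenchmarkParsePageSize1000(b *testing.B) { benchmarkParse(b, 1000) }
```

With this layout, go test -bench=ParsePageSize -benchmem reports ns/op and allocations for both page sizes side by side, which is the kind of comparison the snapshots below capture (assuming the make targets wrap a similar go test invocation).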
After running a couple of tests, our code appears to perform essentially the same for 1000 runtimes whether they arrive in 1 page or in 10 (snapshots of the various runs are attached to this PR).
I think the conclusion is that we can either keep the page size at 100 or bump it to 1000.
I'm not sure there's a reason to merge this PR, or whether we'll need the work included here in the future, so I'll open it as a draft and discuss it on Slack.
How Has This Been Tested?
Merge criteria: