This is naively implemented, and for large sets of traces it is inefficient.
We could implement a strategy that detects chunks of trace idxs that are contiguous in memory and reads them in batches.
We should also implement a simple memoization strategy for duplicate entries.
Finally, add a batch method that accepts multiple traces with potentially redundant data, so it can take advantage of the memoization and chunking. This will be fairly common for long lineage traces with common ancestors. A rough sketch of the combined approach is below.
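Here is a minimal sketch of what the chunking plus memoization could look like. It assumes a hypothetical `read_range(start, stop)` helper that performs one I/O call for a half-open range of trace indices; the actual reader in the codebase may look different, so treat this as illustrative only.

```python
from typing import Callable, Dict, List, Sequence


def read_traces_batched(
    idxs: Sequence[int],
    read_range: Callable[[int, int], List[object]],
) -> List[object]:
    """Read traces for `idxs`, batching contiguous runs and memoizing duplicates.

    `read_range(start, stop)` is a hypothetical reader returning the traces for
    the half-open index range [start, stop) in a single batched read.
    """
    cache: Dict[int, object] = {}

    # Unique indices in sorted order so contiguous runs can be detected.
    missing = sorted(set(idxs))

    # Group contiguous indices into [start, stop) ranges.
    ranges: List[List[int]] = []
    for idx in missing:
        if ranges and idx == ranges[-1][1]:
            ranges[-1][1] = idx + 1  # extend the current run
        else:
            ranges.append([idx, idx + 1])  # start a new run

    # One batched read per contiguous range; results populate the memo cache.
    for start, stop in ranges:
        for offset, trace in enumerate(read_range(start, stop)):
            cache[start + offset] = trace

    # Duplicate idxs (e.g. common ancestors across lineages) come from the cache.
    return [cache[i] for i in idxs]
```

A batch API could then flatten the idx lists from several traces, call this once, and slice the results back out per trace, so shared ancestors are only read from disk a single time.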