Bit 578 speed up metagraph storage query #961
Conversation
* add
* fix comment and verify fields in test
* verify neurons after load
* fix calls
* fix test
* remove verify call
* add tests for IO
* change os exception name
* modify func names and add support tests
* typo oops
* move function and mock correctly in test
* call dict first so jsonable
* add validate check during parse
* add exception and test fallback
* .
* change naming and fix test
* fix path for bin
* update bin
* fix name
* forgot active
* fix tests
* add real hot/cold keys
* oops, no comma
* fix patching
* update bins with fix
* f
* fix test mock of subtensor
* iterate over keys only
* don't pass params
* remember to cast properly before check
* add exception instead of general exception
* mock get n and add main call
* test using list equal
* make list, not tuple
* add is null
What is keeping this PR blocked? Can someone pull and merge master?
Hi @mrseeker, a couple of things here.
We will keep the community informed about this release. Thanks for your presence here.
Also keeping this blocked: the rest of our subtensor nodes/chain endpoints need to be updated with the new flags.
This should be good to go on our infra. We still need to work on the migration for users, but most should be able to just use the new compose file.
Closing as planned, in favor of the new implementation for finney.
Note: this feature requires opentensor/subtensor#26 in the subtensor node that you query, as the buffer size needs to be changed for the fast sync to work.
This PR adds the `subtensorapi` package to fetch live storage from the chain more quickly.
The current live sync takes around 8 minutes (see #933) using pure Python only.
The `subtensorapi` package wraps a Node.js binary in Python and uses the `@polkadot/api` npm library to sync from the chain. The sync writes its output as JSON to `~/.bittensor/metagraph.json` (by default), which is then read back into Python before being returned to the bittensor package. A minimal sketch of this flow is shown below.
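For illustration, here is a minimal Python sketch of the wrapper flow described above: shell out to a Node.js sync binary, then load its JSON output. The binary name (`subtensorapi-sync`), its `--endpoint` flag, and the JSON layout are assumptions made for this sketch; only the default output path comes from the PR description.

```python
import json
import subprocess
from pathlib import Path

# Default output location used by the sync (per the PR description).
METAGRAPH_JSON = Path.home() / ".bittensor" / "metagraph.json"


def fast_sync(node_binary: str = "subtensorapi-sync",
              endpoint: str = "ws://127.0.0.1:9944") -> dict:
    """Run the bundled Node.js sync binary, then load its JSON output.

    The binary name, CLI flag, and JSON structure here are illustrative
    assumptions; the real subtensorapi package defines its own.
    """
    # The Node.js side talks to the chain via @polkadot/api and writes
    # the metagraph to ~/.bittensor/metagraph.json as JSON.
    subprocess.run([node_binary, "--endpoint", endpoint], check=True)

    # Read the JSON back into Python before handing it to bittensor.
    with METAGRAPH_JSON.open() as f:
        return json.load(f)
```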
Below is a current graph of the performance of subtensorapi (sapi) versus the IPFS sync (the current cached sync). The results may be worse than average, as request times are very node-dependent; this node is hosted on a cheap Contabo VPS with heavy traffic. I expect request times to be similar to this, if not better.
Further, `subtensorapi` can be extended to support other storage values. Currently it also supports the `subtensorModule.blockAtRegistration` map via `Subtensor.blockAtRegistration_all_fast()`.
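A hedged usage sketch follows. Only the method name `blockAtRegistration_all_fast()` comes from the PR; how the subtensor object is constructed and what the call returns are assumptions for illustration.

```python
import bittensor

# Assumption: the fast-sync method is exposed on the subtensor object and
# returns the subtensorModule.blockAtRegistration map (uid -> block number).
# Constructor arguments and return shape may differ in the real package.
subtensor = bittensor.subtensor(network="local")
block_at_registration = subtensor.blockAtRegistration_all_fast()
print(block_at_registration)
```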