Fix Partition all keys function #510
Thanks for writing this up, it makes a bit more sense. Re: the ordering of keys, if I've got you right you mean the order in which the relayer pallet stores registered accounts. Is your concern that when a new user registers, they are effectively just put in an arbitrary position in the queue for the next proactive refresh? And that it would be better if the queue ordering took into account when the account was registered? Not sure if I've got this right, but if so it wouldn't worry me too much. Everyone is going to get a refresh at some point, no?
Ya, well, not arbitrary order exactly. Under the hood Substrate uses either RocksDB or ParityDB (still not fully sure which), but RocksDB stores keys in sorted order: https://github.com/facebook/rocksdb/wiki/RocksDB-Overview#3-high-level-architecture. So instead of just appending a new entry, it orders entries based on the address itself. The issue shows up when the network gets so big that it takes, idk, multiple weeks maybe to refresh the whole network, and the network keeps growing. Granted this is not a huge issue now, but it's a good issue to have one day. In that case there may be a situation where keys are waiting too long for a refresh.
A possible solution which I don't hate is allowing the chain to select the keys on the new session. This would create an O(n) op on-chain and need two more storage slots: one for when a key was last refreshed, and one for when the chain should select a key for refresh (like at what block number). But I think it solves this and I like it.
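As a rough sketch of the selection idea above (not the actual pallet code; the map, key values, and block numbers are made up for illustration): the chain tracks per key the block at which it was last refreshed, and each session does the O(n) pass to pick the batch that has waited longest.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: select the `batch_size` keys whose last refresh is
// oldest, using the extra "last refreshed at block" storage item proposed
// in the comment above.
fn select_for_refresh(last_refreshed: &BTreeMap<String, u64>, batch_size: usize) -> Vec<String> {
    // O(n) scan over all registered keys.
    let mut entries: Vec<(&String, &u64)> = last_refreshed.iter().collect();
    // Oldest refresh first, so long-waiting keys are picked regardless of
    // where their address happens to sort in the DB.
    entries.sort_by_key(|(_, block)| **block);
    entries
        .into_iter()
        .take(batch_size)
        .map(|(key, _)| key.clone())
        .collect()
}

fn main() {
    let mut last_refreshed = BTreeMap::new();
    last_refreshed.insert("0xaa".to_string(), 50); // refreshed recently
    last_refreshed.insert("0xbb".to_string(), 10); // waiting longest
    last_refreshed.insert("0xcc".to_string(), 30);

    let batch = select_for_refresh(&last_refreshed, 2);
    assert_eq!(batch, vec!["0xbb".to_string(), "0xcc".to_string()]);

    // After the refresh completes, record the current block for each
    // selected key so the queue keeps rotating.
    for key in &batch {
        last_refreshed.insert(key.clone(), 60);
    }
    println!("refreshed: {:?}", batch);
}
```

With this bookkeeping, a freshly registered key just enters with its registration block as its "last refreshed" value, so it naturally goes to the back of the queue no matter how its address sorts.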
@JesseAbram so to clarify here, would the idea be that every session the chain would perform a proactive refresh on its own? Or just prepare for the next time a proactive refresh is performed?
Yes, the first one: the network chooses a portion of the network to proactively refresh, then does it.
This isn't relevant anymore after #941. |
Currently partition_all_keys in #504 takes a rotating batch of all keys for a proactive refresh. This is done so that a proactive refresh does not have to refresh the whole network, as that could cause undue stress on the participating validators.
This works while the set of registered keys is small, but as it grows and as keys get inserted in sorted order, the algorithm doesn't really handle the ordering. It will just take keys by position, e.g. 0-10 in the DB, then 10-20 on the next refresh, etc., with no consideration of when and where those keys were inserted.
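A rough illustration of the drift this causes (not the actual partition_all_keys implementation; the key values and batch size are invented): because the DB returns keys in sorted order, a new registration can shift positions under the rotating window, so the same key gets picked again while others keep waiting.

```rust
// Hypothetical model of a rotating batch over a sorted key list: a window
// [cursor, cursor + batch_size) that advances each refresh.
fn next_batch(sorted_keys: &[&str], cursor: usize, batch_size: usize) -> (Vec<String>, usize) {
    let n = sorted_keys.len();
    if n == 0 {
        return (Vec::new(), 0);
    }
    let batch = (0..batch_size.min(n))
        .map(|i| sorted_keys[(cursor + i) % n].to_string())
        .collect();
    (batch, (cursor + batch_size) % n)
}

fn main() {
    // Two registered keys, refreshing one per session.
    let mut keys = vec!["0xbb", "0xdd"];
    let (batch, cursor) = next_batch(&keys, 0, 1);
    assert_eq!(batch, vec!["0xbb"]);
    assert_eq!(cursor, 1);

    // A new account registers; the DB stores it in sorted order, which
    // shifts every later key's position by one.
    keys.insert(0, "0xaa");

    // The cursor now lands on "0xbb" again: it gets refreshed twice in a
    // row while "0xdd" and the new "0xaa" keep waiting.
    let (batch2, _) = next_batch(&keys, cursor, 1);
    assert_eq!(batch2, vec!["0xbb"]);
    println!("batch after insert: {:?}", batch2);
}
```

The position-based window only guarantees fair rotation if the underlying ordering is stable between refreshes, which sorted-by-address storage does not provide as registrations arrive.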
Possible solutions: