
refactor: add FlatStateValue constructor for on-disk values #9269

Merged

Conversation

pugachAG
Contributor

This logic is used in multiple places, so it makes sense to abstract it behind a method in `FlatStateValue`.
The name `on_disk` is chosen over `new` since this logic only applies to values that are persisted on disk. Later we will also add inlining for values as part of Flat Storage Delta; those will have a lower inlining threshold, so we will add another constructor.
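
For reference, a minimal sketch of what such a constructor can look like. The variant names, the `ValueRef` stand-in, and the threshold value are illustrative assumptions, not the exact nearcore definitions:

```rust
use std::hash::{Hash, Hasher};

/// Stand-in for nearcore's value reference (hash of the value plus its
/// length). The real type uses a cryptographic hash; std's DefaultHasher
/// is used here only to keep the sketch dependency-free.
#[derive(Debug)]
struct ValueRef {
    hash: u64,
    length: u32,
}

impl ValueRef {
    fn new(value: &[u8]) -> Self {
        let mut hasher = std::collections::hash_map::DefaultHasher::new();
        value.hash(&mut hasher);
        Self { hash: hasher.finish(), length: value.len() as u32 }
    }
}

/// Assumed threshold: values up to this size are stored inline on disk.
const INLINE_DISK_VALUE_THRESHOLD: usize = 4000;

#[derive(Debug)]
enum FlatStateValue {
    /// Small values are embedded directly in the flat storage row.
    Inlined(Vec<u8>),
    /// Large values are stored as a reference into the trie.
    Ref(ValueRef),
}

impl FlatStateValue {
    /// Constructor for values to be persisted on disk: inline small
    /// values, otherwise fall back to a reference.
    fn on_disk(value: &[u8]) -> Self {
        if value.len() <= INLINE_DISK_VALUE_THRESHOLD {
            Self::Inlined(value.to_vec())
        } else {
            Self::Ref(ValueRef::new(value))
        }
    }
}

fn main() {
    let small = FlatStateValue::on_disk(&[0u8; 10]);
    let large = FlatStateValue::on_disk(&vec![0u8; 10_000]);
    println!("{small:?} / {large:?}");
}
```

A lower-threshold variant for Flat Storage Delta values would then be a second constructor beside `on_disk`, which is why a generic `new` would be misleading here.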

@pugachAG pugachAG added the A-storage Area: storage and databases label Jun 29, 2023
@pugachAG pugachAG requested a review from nikurt June 29, 2023 18:43
@pugachAG pugachAG requested a review from a team as a code owner June 29, 2023 18:43
@near-bulldozer near-bulldozer bot merged commit 283c2bb into near:master Jun 30, 2023
nikurt pushed a commit that referenced this pull request Jul 2, 2023
This logic is used in multiple places, so it makes sense to abstract it behind a method in `FlatStateValue`.
The name `on_disk` is chosen over `new` since this logic only applies to values that are persisted on disk. Later we will also add inlining for values as part of Flat Storage Delta; those will have a lower inlining threshold, so we will add another constructor.
near-bulldozer bot added a commit that referenced this pull request Jul 12, 2023
* fix(state-sync): finalize using flat storage

* fix(state-sync): Applied values from state parts should be inlined when applicable (#9265)

* fix(state-sync): Applied values from state parts should be inlined when applicable

* fix(state-sync): Applied values from state parts should be inlined when applicable

* fix(state-sync): Applied values from state parts should be inlined when applicable

* fix(state-sync): Applied values from state parts should be inlined when applicable

* clippy

* fmt

* logging

* scan from a hash

* refactor: add FlatStateValue constructor for on-disk values (#9269)

This logic is used in multiple places, so it makes sense to abstract it behind a method in `FlatStateValue`.
The name `on_disk` is chosen over `new` since this logic only applies to values that are persisted on disk. Later we will also add inlining for values as part of Flat Storage Delta; those will have a lower inlining threshold, so we will add another constructor.

* refactor: Move adjust-db tool to database tool (#9264)

Refactoring of the `adjust-db` tool, moving it to the `database` toolset where the rest of the database tools live.

* feat: remove fixed shards (#9219)

In this PR I'm removing the fixed shards field from the shard layout. I removed the fixed_shards field directly from ShardLayoutV1. This implementation is quite hacky, as it does not follow the typical flow for making protocol changes. The approach needs to be discussed and agreed on first, but here is an implementation just to get a feel for it.

The motivation for removing fixed shards is that fixed_shards breaks the contiguity of account ids within a shard. A sub-account of a fixed shard account is assigned to the same shard as the fixed account, but it may fall in the middle of the account range of another shard. For example, consider the following shard layout:
```
fixed_shards = ['fixed']
boundary_accounts = ['middle']
```
In this shard layout we have three shards: 
- 0 - 'fixed' and all sub-accounts
- 1 - [..'middle'] with the exception of any sub-account of fixed
- 2 - ['middle'..] with the exception of any sub-account of fixed

Because of the fixed account, shards 1 and 2 are now full of holes for the sub-accounts of 'fixed'. This property of fixed_shards makes it hard to perform any range operation on a shard, such as resharding or deletion.
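
To make the hole problem concrete, here is a dependency-free sketch of ShardLayoutV1-style account assignment; the function shape and matching rules are simplified assumptions, not the real nearcore logic:

```rust
/// Assign an account to a shard: fixed shards claim the fixed account and
/// all of its sub-accounts, everything else is assigned by its position
/// among the boundary accounts.
fn account_to_shard(account_id: &str, fixed_shards: &[&str], boundary_accounts: &[&str]) -> usize {
    for (shard, fixed) in fixed_shards.iter().enumerate() {
        if account_id == *fixed || account_id.ends_with(&format!(".{}", fixed)) {
            return shard;
        }
    }
    let offset = fixed_shards.len();
    offset + boundary_accounts.iter().take_while(|b| account_id >= **b).count()
}

fn main() {
    let fixed = ["fixed"];
    let boundaries = ["middle"];
    // "aaa" and "zzz" land in the range-based shards 1 and 2...
    assert_eq!(account_to_shard("aaa", &fixed, &boundaries), 1);
    assert_eq!(account_to_shard("zzz", &fixed, &boundaries), 2);
    // ...but sub-accounts of "fixed" sort inside those same ranges and
    // still map to shard 0, punching holes into shards 1 and 2.
    assert_eq!(account_to_shard("aaa.fixed", &fixed, &boundaries), 0);
    assert_eq!(account_to_shard("zzz.fixed", &fixed, &boundaries), 0);
}
```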

The proper way to do it would be to add ShardLayoutV2 - a copy of V1 with fixed_shards removed and the version bumped. Due to the way we store state data - using the shard_uid as a prefix of the storage key - this approach would require a careful migration. One way to go about it would be to perform a null-resharding in the epoch preceding the shard layout version bump. The node would have to create a copy of the state and store it under the new version in shard_uid. Once the state is duplicated, the node can keep applying changes to both copies, same as in resharding. Such a migration would need to be prepared very carefully and would take a lot of time and effort.
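
A rough sketch of why the version bump is painful, assuming a simplified key encoding (the real shard_uid layout may differ):

```rust
/// Simplified shard_uid: the shard layout version plus the shard id.
#[derive(Clone, Copy)]
struct ShardUId {
    version: u32,
    shard_id: u32,
}

/// State rows are keyed by the shard_uid followed by the trie key, so the
/// layout version is baked into every physical key.
fn storage_key(shard_uid: ShardUId, trie_key: &[u8]) -> Vec<u8> {
    let mut key = Vec::with_capacity(8 + trie_key.len());
    key.extend_from_slice(&shard_uid.version.to_le_bytes());
    key.extend_from_slice(&shard_uid.shard_id.to_le_bytes());
    key.extend_from_slice(trie_key);
    key
}

fn main() {
    let v1 = ShardUId { version: 1, shard_id: 0 };
    let v2 = ShardUId { version: 2, shard_id: 0 };
    // The same logical entry lives at a different physical key after a
    // version bump, which is why the migration needs a full state copy.
    assert_ne!(storage_key(v1, b"account"), storage_key(v2, b"account"));
}
```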

The reason why I believe we can get away without a new shard layout version is that fixed_shards is not used in production and the shard version is not actually stored in the state. ShardLayoutV1 with empty fixed_shards is semantically equivalent to ShardLayoutV1 with the fixed_shards field removed.

It's worth mentioning that the migration pains would go away if we decided to change the storage structure to one that doesn't include the shard_uid in the storage key.

* remove near-cache dep from near-vm-runner (#9273)

Part of #8197

* fix(state-sync): Clear flat storage outside of ClientActor (#9268)

Deleting multiple GBs from RocksDB takes far more than 100ms.
If that happens while a node is catching up, it can make a chunk producer or a block producer miss its chunk/block.

This PR moves deletion of Flat State to `SyncJobsActor` to avoid blocking `ClientActor`.
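
The offloading pattern itself is simple. The sketch below shows it with a plain channel and a worker thread; the message and actor names are illustrative stand-ins for the actix-based `SyncJobsActor` machinery in nearcore:

```rust
use std::sync::mpsc;
use std::thread;

/// Hypothetical message asking the background worker to delete the flat
/// state of one shard; nearcore implements this as an actor message.
struct DeleteFlatStorage {
    shard_id: u64,
}

fn main() {
    let (tx, rx) = mpsc::channel::<DeleteFlatStorage>();

    // Stand-in for SyncJobsActor: a worker that performs slow deletions
    // so the client loop never blocks on them.
    let worker = thread::spawn(move || {
        for msg in rx {
            // The real work here is a multi-second RocksDB range deletion.
            println!("deleting flat state for shard {}", msg.shard_id);
        }
    });

    // Stand-in for ClientActor: sending the message returns immediately,
    // keeping block and chunk production responsive.
    tx.send(DeleteFlatStorage { shard_id: 0 }).unwrap();
    drop(tx); // close the channel so the worker exits
    worker.join().unwrap();
}
```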

* remove near-stable-hasher dep from near-vm-runner (#9270)

Part of #8197

* fix(state-sync): Applied values from state parts should be inlined when applicable (#9265)

* fix(state-sync): Applied values from state parts should be inlined when applicable

* fix(state-sync): Applied values from state parts should be inlined when applicable

* fix(state-sync): Applied values from state parts should be inlined when applicable

* fix(state-sync): Applied values from state parts should be inlined when applicable

* clippy

* fmt

* fix(state-sync): Finalize state sync using flat storage

* fix(state-sync): Finalize state sync using flat storage

---------

Co-authored-by: Anton Puhach <anton@near.org>
Co-authored-by: Jure Bajic <jure@near.org>
Co-authored-by: wacban <wacban@users.noreply.github.com>
Co-authored-by: Ekleog-NEAR <96595974+Ekleog-NEAR@users.noreply.github.com>
Co-authored-by: near-bulldozer[bot] <73298989+near-bulldozer[bot]@users.noreply.github.com>
nikurt added a commit that referenced this pull request Jul 12, 2023