
Bound finalized header, execution header and sync committee by RingBuffer and BoundedVec #815

Merged: yrong merged 8 commits into main from add-ringbuffer on May 10, 2023

Conversation

yrong (Contributor) commented Apr 27, 2023

Based on #814 from @ParthDesai
Companion for cumulus: Snowfork/cumulus#20

Fixes: SNO-328
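
For context, a minimal sketch of what the BoundedVec half of this change can look like, assuming frame_support is available; the MaxPubkeys bound, field names and helper here are illustrative assumptions, not the pallet's actual types:

use frame_support::{traits::ConstU32, BoundedVec};

// Hypothetical bound for illustration; the real pallet would take this from
// the beacon spec / pallet Config rather than hard-coding it here.
type MaxPubkeys = ConstU32<512>;

pub struct SyncCommittee {
    pub pubkeys: BoundedVec<[u8; 48], MaxPubkeys>,
    pub aggregate_pubkey: [u8; 48],
}

// Converting from an unbounded Vec fails instead of growing past the bound.
pub fn from_unbounded(
    pubkeys: Vec<[u8; 48]>,
    aggregate_pubkey: [u8; 48],
) -> Result<SyncCommittee, &'static str> {
    let pubkeys = BoundedVec::try_from(pubkeys).map_err(|_| "too many pubkeys")?;
    Ok(SyncCommittee { pubkeys, aggregate_pubkey })
}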

yrong changed the title from "Add ringbuffer" to "Bound finalized header, execution header and sync committee by RingBuffer and BoundedVec" on Apr 27, 2023
yrong marked this pull request as ready for review on April 27, 2023 15:56
vgeddes (Collaborator) commented Apr 27, 2023

Looks good! The implementation seems sound. Will give a more thorough review next week.

vgeddes (Collaborator) commented Apr 27, 2023

@yrong can you review #816? If you agree it's good, we can rebase this PR on top of it.

vgeddes (Collaborator) left a comment

This is a good ring buffer design, but I think we can do even better to reduce the overhead of the implementation.

It seems the mappings ExecutionHeaderMapping and SyncCommitteesMapping simply contain pointers to the next inserted node.

So let's consider storing the next pointers alongside the actual nodes.

See this rough example, which explains it better:

struct RingBufferNode<K, V> {
    value: V,
    // key of the next node in insertion order (wraps around to the oldest)
    next: Option<K>,
}

struct RingBufferCursor<K> {
    // last inserted node
    node: K,
    // number of nodes in ring buffer
    count: u32,
}

struct RingBufferTransient<K, V, Nodes, Cursor>
where
    Nodes: StorageMap<K, RingBufferNode<K, V>>,
    Cursor: StorageValue<RingBufferCursor<K>>;

impl<K, V, Nodes, Cursor> RingBufferTransient<K, V, Nodes, Cursor>
where
    Nodes: StorageMap<K, RingBufferNode<K, V>>,
    Cursor: StorageValue<RingBufferCursor<K>>,
{
    fn insert(key: K, value: V) {
        // handle all cases here, much like a circular singly linked list
    }
}

// Ring buffer nodes
#[pallet::storage]
pub(super) type SyncCommittees<T: Config> = StorageMap<_, Identity, H256, RingBufferNode<H256, SyncCommitteeOf<T>>, ValueQuery>;

// Ring buffer cursor
#[pallet::storage]
pub(crate) type SyncCommitteesCursor<T: Config> = StorageValue<_, RingBufferCursor<H256>, ValueQuery>;

With this design, we reduce the number of storage writes from 3 to 2, so it uses less weight.

Let me know what you think.
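
For illustration only, here is a minimal in-memory sketch of how that insert stub could behave, with plain HashMaps standing in for the StorageMap/StorageValue items; every name is an assumption made for the sketch, not the pallet's actual code:

use std::collections::HashMap;
use std::hash::Hash;

struct Node<K, V> {
    value: V,
    next: Option<K>, // key of the oldest surviving node (wrap-around link)
}

struct Cursor<K> {
    node: K,    // last inserted node
    count: u32, // number of nodes currently in the buffer
}

struct RingBuffer<K, V> {
    capacity: u32, // this sketch assumes capacity >= 2
    nodes: HashMap<K, Node<K, V>>,
    cursor: Option<Cursor<K>>,
}

impl<K: Copy + Eq + Hash, V> RingBuffer<K, V> {
    fn insert(&mut self, key: K, value: V) {
        let cursor = match self.cursor.take() {
            None => {
                // First element: the ring is a single node pointing at itself.
                self.nodes.insert(key, Node { value, next: Some(key) });
                Cursor { node: key, count: 1 }
            }
            Some(mut cursor) => {
                let last = cursor.node;
                // The node right after the last inserted one is the oldest.
                let oldest = self.nodes[&last].next.expect("ring is never broken");
                let next = if cursor.count >= self.capacity {
                    // Full: evict the oldest node and splice the ring around it.
                    self.nodes.remove(&oldest).and_then(|n| n.next)
                } else {
                    cursor.count += 1;
                    Some(oldest)
                };
                self.nodes.insert(key, Node { value, next });
                // Relink the previous tail to the new node. This is the write
                // that re-encodes the whole previous node, value included.
                if let Some(prev) = self.nodes.get_mut(&last) {
                    prev.next = Some(key);
                }
                cursor.node = key;
                cursor
            }
        };
        self.cursor = Some(cursor);
    }
}

fn main() {
    let mut buf = RingBuffer { capacity: 3, nodes: HashMap::new(), cursor: None };
    for (k, v) in [(1u32, "a"), (2, "b"), (3, "c"), (4, "d")] {
        buf.insert(k, v);
    }
    // Key 1 was overwritten once the buffer wrapped; 2, 3 and 4 remain.
    assert!(!buf.nodes.contains_key(&1) && buf.nodes.len() == 3);
    assert_eq!(buf.nodes[&4].value, "d");
}

Note the relink of the previous tail at the end: under real storage that step would rewrite the previous node's full value as well, which is exactly the redundant write flagged in the follow-up comment below.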

vgeddes (Collaborator) commented Apr 28, 2023

This is a good ring buffer design, but I think we can do even better to reduce the overhead of the implementation. […]

Actually, just realised this idea will have worse performance: every time we need to update the node.next pointers, we end up redundantly rewriting sync committees to storage. Ignore this idea.

ParthDesai (Contributor) commented Apr 28, 2023

Actually, just realised this idea will have worse performance […] Ignore this idea.

Hmm, makes sense. One other concern: the RingBuffer implementation should be an internal detail of the pallet. If we expose the structure via storage, clients can make assumptions about it, which might not be a good idea, because if we modify the RingBuffer implementation later on we will have to do a storage migration.
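
One hypothetical way to address that (illustrative only, not necessarily what this PR does) is to hide the ring mechanics behind a small pallet-internal interface, so call sites only ever see insert/get and the slot map and wrap-around cursor stay an implementation detail:

use std::collections::HashMap;

// Illustrative trait boundary: callers depend on insert/get alone.
trait RingBufferMap<K, V> {
    fn insert(&mut self, key: K, value: V);
    fn get(&self, key: &K) -> Option<&V>;
}

struct BoundedHeaders {
    capacity: u32,
    slots: HashMap<u32, u64>,      // internal: slot -> key
    values: HashMap<u64, Vec<u8>>, // key -> value
    next_slot: u32,                // internal: wrap-around cursor
}

impl RingBufferMap<u64, Vec<u8>> for BoundedHeaders {
    fn insert(&mut self, key: u64, value: Vec<u8>) {
        if let Some(evicted) = self.slots.insert(self.next_slot, key) {
            self.values.remove(&evicted); // drop whatever held this slot before
        }
        self.values.insert(key, value);
        self.next_slot = (self.next_slot + 1) % self.capacity;
    }

    fn get(&self, key: &u64) -> Option<&Vec<u8>> {
        self.values.get(key)
    }
}

fn main() {
    let mut headers = BoundedHeaders {
        capacity: 2,
        slots: HashMap::new(),
        values: HashMap::new(),
        next_slot: 0,
    };
    for k in 0u64..3 {
        headers.insert(k, vec![k as u8]);
    }
    // Only the two most recent entries survive; callers never saw the slots.
    assert!(headers.get(&0).is_none() && headers.get(&2).is_some());
}

Changing the internal layout would still require a storage migration for data already on chain, but the coupling at least stays inside one module.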

ParthDesai (Contributor) left a comment

Overall this makes sense to me. Though some unrelated changes come in from main, which I haven't reviewed as I don't have enough context for them.

ParthDesai (Contributor) commented

@yrong Is it possible to merge this PR now?

yrong (Contributor, Author) commented May 9, 2023

@yrong Is it possible to merge this PR now?

Yes, it's almost done from my side, but I'll defer to @vgeddes in case of any other concerns.

Three review threads on parachain/pallets/ethereum-beacon-client/src/lib.rs (outdated, resolved)
yrong merged commit 50b8306 into main on May 10, 2023
yrong deleted the add-ringbuffer branch on May 10, 2023 04:40
vgeddes (Collaborator) commented May 10, 2023

@ParthDesai Note: you need at least Rust 1.70 (nightly) to successfully compile the light client in all required modes (debug, release, etc.).

Older versions of Rust struggle: paritytech/substrate#14075
