Refactor p2p reader #3433
Conversation
This looks great! 👍 on the overall approach. This makes a ton of sense.
Ran a full sync locally and everything went smoothly. 👍
Just putting these here for my own reference as I wrap my head around this.
Yes. Edit: sorry, I was confused on that last part; I think it would be easier to keep that one around.
Could you merge upstream/master into your PR?
@quentinlesceller Yep, will do! Thanks for testing.
So one for each known msg type and one unknown case?
Yes, that is basically what I have done in the latest commit. It also allowed us to get rid of one of the enums.
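For illustration, a minimal sketch of that shape; the variant names and payload types here are hypothetical, not grin's actual enum:

```rust
// Placeholder standing in for the real deserialized header type.
struct BlockHeader;

// One variant per known message type, plus a catch-all for message
// types the codec does not recognize (names are illustrative).
enum Message {
    Ping,
    Pong,
    Headers(Vec<BlockHeader>),
    Attachment(Vec<u8>),
    Unknown(u8), // the raw message-type byte we could not map
}
```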
Testing locally. Looks good so far. |
I'm planning to cut a release. Do you mind if we hold off on merging this PR until after that?
Actually I'm guessing we want to preserve the "streaming read" behavior for chunks of headers. |
Sure, no problem.
Yes, correct.
I agree. I will look into this. Since the codec is already holding state and we are decoding the messages in there, this shouldn't be too hard to do.
We know the overall size in bytes of the full 512 header payload. The streaming reader allowed us to read headers off the stream as they arrived, rather than buffering the full payload first.
@antiochp Ok with the latest commit we now read the headers as they come in, and return them in batches of 32. This is done by overestimating the header size and performing deserialization when the buffer is filled. I also added some tests to prove that the header size calculation is actually correct. There was a nasty bug in there that took me a long time to hunt down, but I managed to fix it. The problem was that if a read failed, the reserved part of the buffer was kept around. This messed up the next read attempt.
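A minimal sketch of that fix, assuming a `Vec<u8>` read buffer (the helper name and signature are hypothetical):

```rust
use std::io::Read;

/// Hypothetical helper: grow `buf` by `want` bytes and fill the reserved
/// space from `reader`. On failure, shrink the buffer back so the reserved
/// bytes do not pollute the next read attempt (the bug described above).
fn fill_buf<R: Read>(reader: &mut R, buf: &mut Vec<u8>, want: usize) -> std::io::Result<()> {
    let start = buf.len();
    buf.resize(start + want, 0); // reserve space for this read
    if let Err(e) = reader.read_exact(&mut buf[start..]) {
        // Without this truncate, the zeroed reservation would be kept
        // around and corrupt the next deserialization attempt.
        buf.truncate(start);
        return Err(e);
    }
    Ok(())
}
```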
```rust
                return Ok(Message::Headers(h));
            }
        }
        Attachment(left, meta, now) => {
```
And in contrast with headers, reading an attachment returns a single `Message`, basically a running total of the download? Each time with potentially more of the attachment read? Subsequent blocking calls to `read_inner()` may result in more of the attachment being read.
An attachment will generate a lot of `Message::Attachment`s, one for every 48000 bytes that is read. This is the same size, iirc, as the buffer we were filling before.
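A rough sketch of that chunking behavior (the function and its signature are hypothetical; only the 48000-byte chunk size comes from the discussion above):

```rust
use std::io::Read;

const CHUNK_SIZE: usize = 48_000; // one attachment message per chunk of this size

/// Hypothetical helper: read at most one chunk of the remaining attachment,
/// so a large attachment yields many small attachment messages.
fn read_attachment_chunk<R: Read>(
    reader: &mut R,
    remaining: &mut usize,
) -> std::io::Result<Vec<u8>> {
    let want = CHUNK_SIZE.min(*remaining);
    let mut chunk = vec![0u8; want];
    reader.read_exact(&mut chunk)?;
    *remaining -= want;
    Ok(chunk)
}
```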
👍
Let's do it!
This PR refactors the reader of the node p2p by introducing a codec that keeps track of the read state and emits the received p2p messages. The codec only interprets the message headers; the body stays as raw `Bytes`, leaving it to the `Protocol` struct to actually interpret them, as is the case now.

Some advantages of the refactored code:

- Reading messages is separated from interpreting them (which remains in the `Protocol` struct)
- Conceptually close to `tokio_util::codec::Framed`, which simplifies reading and writing of async messages (future PR)

Unfortunately, afaik we currently do not have a framework to test each p2p message and interaction individually, so this PR has been tested on mainnet by syncing from scratch and keeping it running for a while.
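As a rough illustration of the codec idea (not grin's actual wire format or types), a header-parsing `Decoder` that leaves the body as raw `Bytes` could look like this:

```rust
use bytes::{Buf, Bytes, BytesMut};
use tokio_util::codec::Decoder;

/// Hypothetical frame: a parsed message-type byte plus the raw body,
/// left for the Protocol layer to interpret.
struct RawMessage {
    msg_type: u8,
    body: Bytes,
}

struct P2pCodec;

impl Decoder for P2pCodec {
    type Item = RawMessage;
    type Error = std::io::Error;

    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<RawMessage>, Self::Error> {
        // Illustrative wire layout: 1-byte type, 4-byte big-endian body length.
        if src.len() < 5 {
            return Ok(None); // header not complete yet; wait for more bytes
        }
        let msg_type = src[0];
        let len = u32::from_be_bytes([src[1], src[2], src[3], src[4]]) as usize;
        if src.len() < 5 + len {
            return Ok(None); // body not complete yet
        }
        src.advance(5); // consume the header
        let body = src.split_to(len).freeze(); // body stays as raw Bytes
        Ok(Some(RawMessage { msg_type, body }))
    }
}
```

Wrapping a connection as `tokio_util::codec::Framed::new(stream, P2pCodec)` would then yield these raw messages asynchronously, which is the future work mentioned above.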