Use bytes from unrecorded_blocks rather than from the block from DA #2252
Merged
Commits (22, all by MitchTurner)
ffa4fdb  Fix DA tests to expect unrecorded blocks to exist and to take the byt…
e82170c  Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
fc3cf91  Remove regression for unimportant failure mode
8cd972f  Move struct that is only used in tests
8fd6616  Remove unused struct
49e521e  Change interface of `update_da_record_data`
cc28922  Move `RecordedBlock` into tests
e00319c  Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
93a6e98  Use BTreeMap instead of HashMap
41c26f0  Use `pop_first` instead of `remove`
364ce4a  Fix compilation errors
e40a857  Kinda fix the analyzer I think
ea5469f  Cleanup prints
6a845ba  Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
bfafd03  Fix test compilation
97fcc32  Fix profit chart length
a51a32c  Remove normalization function to fix simulation
dd8a252  Remove comment, add todo
7880500  Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
76b904a  Merge remote-tracking branch 'origin' into fix/use-bytes-from-l2-bloc…
5848462  clean up function signature
8ec9c18  revert file inclusion from botched merge
Conversations
---
Maybe we want to remove all blocks until `height`?
---
I'm wondering: is it possible in a sunny-day scenario that we'll have some "lost" unrecorded blocks that will never be consumed by `da_block_update()`? I think yes, because `update_da_record_data()` gets an arbitrary set of blocks, and we don't guarantee what the heights of those blocks are. Maybe the `SkippedL2Block` and `SkippedDABlock` errors protect us from this. Anyway, leaving this comment for consideration, as we might think about protecting `unrecorded_blocks` from growing indefinitely in case of some unexpected flow.

Edit: I think that at some point we might need to take care of the size of `unrecorded_blocks`. It may happen that the user of the algorithm populates the set by calling `update_l2_block_data()` but never calls into `update_da_record_data()` to clear it. Maybe this is enforced on a higher level.

cc: @MitchTurner
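A minimal sketch of the growth concern, assuming heavily simplified signatures (the real `update_l2_block_data()` and `update_da_record_data()` take richer arguments; the height-to-bytes `BTreeMap` and the integer types here are assumptions for illustration):

```rust
use std::collections::BTreeMap;

/// Hypothetical, simplified state: maps L2 block height to byte size.
struct AlgorithmState {
    unrecorded_blocks: BTreeMap<u32, u64>,
}

impl AlgorithmState {
    /// Every L2 block inserts an entry...
    fn update_l2_block_data(&mut self, height: u32, bytes: u64) {
        self.unrecorded_blocks.insert(height, bytes);
    }

    /// ...but entries are only drained when DA recording progress is
    /// reported. If this is never called, the map grows without bound.
    fn update_da_record_data(&mut self, recorded_up_to: u32) {
        while let Some((&height, _)) = self.unrecorded_blocks.first_key_value() {
            if height > recorded_up_to {
                break;
            }
            self.unrecorded_blocks.pop_first();
        }
    }
}
```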
---
That might be more performant (and kinda what we were doing before). We'd want to do that before we called `da_block_update` and pair each `block_bytes` with the corresponding `RecordedBlock`.

I honestly don't know the performance of `remove` for `HashMap`. `get` is O(1), but obviously isn't `mut`, so `remove` probably does re-scaling and other heap garbage. The most performant would be a `VecDeque` and `split_off`, probably? But then the order becomes an issue. I was going to say we don't need to include the `height` (just the `bytes`) if we trust the order, but just in case we probably should, and throw an error if it doesn't match the `recorded_block` it is paired with.
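A sketch of the `VecDeque` + `split_off` idea floated here, assuming entries are kept in ascending height order; the `(height, bytes)` tuple layout and the function name are assumptions, not the PR's actual API:

```rust
use std::collections::VecDeque;

/// Drain every (height, bytes) pair covered by a DA recording in one
/// split, instead of one `HashMap::remove` per height.
fn drain_recorded(
    unrecorded: &mut VecDeque<(u32, u64)>,
    recorded_up_to: u32,
) -> VecDeque<(u32, u64)> {
    // Index of the first entry past the recorded range.
    let split_at = unrecorded
        .iter()
        .position(|(height, _)| *height > recorded_up_to)
        .unwrap_or(unrecorded.len());
    // `split_off` returns the tail; keep that, hand back the drained prefix.
    let tail = unrecorded.split_off(split_at);
    std::mem::replace(unrecorded, tail)
}
```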
---
Yeah. It's directly relevant to what we're talking about. We definitely make assumptions about the order.

Talking to the rollup team guys, it sounds like they might not want to guarantee order in the long run; if that's the case, then the `HashMap` approach might be the best. We could even get rid of `SkippedL2Block`, I think... except we've lost info on the best `cost_for_byte` for L2 blocks that aren't in order.
---
I meant: the block committer submits blocks in bundles, so a bundle can have several blocks inside. When you receive the notification from the block committer about a DA submission, you can have `unrecorded_blocks = vec![block_height-5, block_height-4, ... block_height]`. Then instead of removing only one entry, you need to remove all entries up to `block_height`.

If that is something that we want, then `BTreeMap` is better =)
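A sketch of the `BTreeMap` variant being suggested, where one ordered split drains every entry up to and including `block_height`; the key/value types are assumptions:

```rust
use std::collections::BTreeMap;

/// Remove every unrecorded block up to and including `block_height`
/// in one split instead of entry by entry.
fn drain_up_to(
    unrecorded_blocks: &mut BTreeMap<u32, u64>,
    block_height: u32,
) -> BTreeMap<u32, u64> {
    // `split_off` returns all keys >= the bound; everything below stays.
    let tail = unrecorded_blocks.split_off(&(block_height.saturating_add(1)));
    // Keep the tail in place; return the drained heights <= block_height.
    std::mem::replace(unrecorded_blocks, tail)
}
```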
---
We are currently removing all blocks in the bundle; we are just iterating over the recorded blocks and removing them one at a time. What I was saying is that we could get them all first in some efficient way.

Yeah, maybe `BTreeMap` makes the most sense for the ordered case.
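A sketch of the one-at-a-time draining described here, in the spirit of the `pop_first` commit in this PR; the error enum is invented for illustration and is not the crate's actual type:

```rust
use std::collections::BTreeMap;

/// Illustrative errors, loosely modeled on the `SkippedL2Block`-style
/// errors mentioned in this thread.
#[derive(Debug)]
enum DrainError {
    SkippedL2Block { expected: u32, found: u32 },
    MissingL2Block { expected: u32 },
}

/// Pop the lowest unrecorded height for each recorded block in the
/// bundle, erroring if it doesn't line up with the block it is paired with.
fn take_next_unrecorded(
    unrecorded_blocks: &mut BTreeMap<u32, u64>,
    expected_height: u32,
) -> Result<u64, DrainError> {
    match unrecorded_blocks.pop_first() {
        Some((height, bytes)) if height == expected_height => Ok(bytes),
        Some((found, _)) => Err(DrainError::SkippedL2Block {
            expected: expected_height,
            found,
        }),
        None => Err(DrainError::MissingL2Block {
            expected: expected_height,
        }),
    }
}
```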
---
The `RecordedBlock` data is linked together; it just has the latest block height.

cc @rymnc, since he wrote down the interface that we agreed on with the Rollup team. Maybe I remember it incorrectly =)
---
Yes, I was part of those conversations, and we've discussed what is required for the algo.
---
This is what I have -
---
Right. Okay. So in that case we can just sum all the block bytes for that range and do a single cost calculation. Sounds like maybe that's what you were suggesting, @xgreenx. I was still under the impression that more processing would happen on the service side, but this actually works fine.
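A sketch of that single-cost-calculation idea, assuming a `BTreeMap` of unrecorded blocks and a hypothetical `bundle_cost` parameter for whatever the DA notification reports:

```rust
use std::collections::BTreeMap;

/// Drain the recorded range, sum its bytes, and do one cost calculation
/// for the whole bundle instead of one per block.
fn cost_per_byte_for_bundle(
    unrecorded_blocks: &mut BTreeMap<u32, u64>,
    block_height: u32,
    bundle_cost: u128,
) -> Option<u128> {
    let tail = unrecorded_blocks.split_off(&(block_height.saturating_add(1)));
    let drained = std::mem::replace(unrecorded_blocks, tail);
    // Sum the L2 bytes for every block covered by this bundle...
    let total_bytes: u64 = drained.values().sum();
    // ...and apportion the bundle's cost across them once.
    (total_bytes > 0).then(|| bundle_cost / u128::from(total_bytes))
}
```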