
Use bytes from unrecorded_blocks rather than from the block from DA #2252

Merged: 22 commits, Oct 2, 2024

Commits
ffa4fdb
Fix DA tests to expect unrecorded blocks to exist and to take the byt…
MitchTurner Sep 25, 2024
e82170c
Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
MitchTurner Sep 25, 2024
fc3cf91
Remove regression for unimportant failure mode
MitchTurner Sep 25, 2024
8cd972f
Move struct that is only used in tests
MitchTurner Sep 25, 2024
8fd6616
Remove unused struct
MitchTurner Sep 26, 2024
49e521e
Change interface of `update_da_record_data`
MitchTurner Sep 26, 2024
cc28922
Move `RecordedBlock` into tests
MitchTurner Sep 26, 2024
e00319c
Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
MitchTurner Sep 26, 2024
93a6e98
Use BTreeMap instead of HashMap
MitchTurner Sep 26, 2024
41c26f0
Use `pop_first` instead of `remove`
MitchTurner Sep 26, 2024
364ce4a
Fix compilation errors
MitchTurner Sep 26, 2024
e40a857
Kinda fix the analyzer I think
MitchTurner Sep 26, 2024
ea5469f
Cleanup prints
MitchTurner Sep 26, 2024
6a845ba
Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
MitchTurner Sep 27, 2024
bfafd03
Fix test compilation
MitchTurner Sep 27, 2024
97fcc32
Fix profit chart length
MitchTurner Sep 27, 2024
a51a32c
Remove normalization function to fix simulation
MitchTurner Sep 27, 2024
dd8a252
Remove comment, add todo
MitchTurner Sep 30, 2024
7880500
Merge branch 'master' into fix/use-bytes-from-l2-blocks-not-da
MitchTurner Oct 1, 2024
76b904a
Merge remote-tracking branch 'origin' into fix/use-bytes-from-l2-bloc…
MitchTurner Oct 2, 2024
5848462
clean up function signature
MitchTurner Oct 2, 2024
8ec9c18
revert file inclusion from botched merge
MitchTurner Oct 2, 2024
44 changes: 18 additions & 26 deletions crates/fuel-gas-price-algorithm/src/v1.rs
@@ -1,11 +1,11 @@
use crate::utils::cumulative_percentage_change;
use std::{
cmp::max,
collections::HashMap,
num::NonZeroU64,
ops::Div,
};

use crate::utils::cumulative_percentage_change;

#[cfg(test)]
mod tests;

@@ -19,6 +19,8 @@ pub enum Error {
CouldNotCalculateCostPerByte { bytes: u64, cost: u64 },
#[error("Failed to include L2 block data: {0}")]
FailedTooIncludeL2BlockData(String),
#[error("L2 block expected but not found in unrecorded blocks: {0}")]
L2BlockExpectedNotFound(u32),
}

#[derive(Debug, Clone, PartialEq)]
@@ -94,6 +96,8 @@ impl AlgorithmV1 {
/// The DA portion also uses a moving average of the profits over the last `avg_window` blocks
/// instead of the actual profit. Setting the `avg_window` to 1 will effectively disable the
/// moving average.
type Height = u32;
type Bytes = u64;
#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, PartialEq)]
pub struct AlgorithmUpdaterV1 {
// Execution
@@ -144,8 +148,9 @@ pub struct AlgorithmUpdaterV1 {
pub second_to_last_profit: i128,
/// The latest known cost per byte for recording blocks on the DA chain
pub latest_da_cost_per_byte: u128,

/// The unrecorded blocks that are used to calculate the projected cost of recording blocks
pub unrecorded_blocks: Vec<BlockBytes>,
pub unrecorded_blocks: HashMap<Height, Bytes>,
}

/// A value that represents a value between 0 and 100. Higher values are clamped to 100
@@ -179,23 +184,17 @@ impl core::ops::Deref for ClampedPercentage {
#[derive(Debug, Clone)]
pub struct RecordedBlock {
pub height: u32,
pub block_bytes: u64,
// pub block_bytes: u64,
pub block_cost: u64,
}

#[derive(serde::Serialize, serde::Deserialize, Debug, Clone, PartialEq)]
pub struct BlockBytes {
pub height: u32,
pub block_bytes: u64,
}

impl AlgorithmUpdaterV1 {
pub fn update_da_record_data(
&mut self,
blocks: &[RecordedBlock],
) -> Result<(), Error> {
for block in blocks {
self.da_block_update(block.height, block.block_bytes, block.block_cost)?;
self.da_block_update(block.height, block.block_cost)?;
}
self.recalculate_projected_cost();
self.normalize_rewards_and_costs();
@@ -234,10 +233,7 @@ impl AlgorithmUpdaterV1 {
self.update_da_gas_price();

// metadata
self.unrecorded_blocks.push(BlockBytes {
height,
block_bytes,
});
self.unrecorded_blocks.insert(height, block_bytes);
Ok(())
}
}
@@ -372,19 +368,18 @@ impl AlgorithmUpdaterV1 {
.saturating_div(100)
}

fn da_block_update(
&mut self,
height: u32,
block_bytes: u64,
block_cost: u64,
) -> Result<(), Error> {
fn da_block_update(&mut self, height: u32, block_cost: u64) -> Result<(), Error> {
let expected = self.da_recorded_block_height.saturating_add(1);
if height != expected {
Err(Error::SkippedDABlock {
expected: self.da_recorded_block_height.saturating_add(1),
got: height,
})
} else {
let block_bytes = self
Collaborator:
Maybe we want to remove all blocks until height?

Contributor @rafal-ch (Sep 25, 2024):
I'm wondering, is it possible in a sunny-day scenario that we'll have some "lost", unrecorded blocks that will never be consumed by da_block_update()? I think yes, because update_da_record_data() gets an arbitrary set of blocks and we don't guarantee what the heights of those blocks are.

Maybe the SkippedL2Block and SkippedDABlock errors protect us from this.

Anyway, leaving this comment for consideration as we might think about protecting unrecorded_blocks from growing indefinitely in case of some unexpected flow.

Edit:
I think that at some point we might need to take care about the size of unrecorded_blocks. It may happen that the user of the algorithm will populate the set by calling update_l2_block_data(), but will never call into update_da_record_data() to clear it. Maybe this is enforced on a higher level.
cc: @MitchTurner

Member Author:
> Maybe we want to remove all blocks until height?

That might be more performant (and kinda what we were doing before). We'd want to do that before we called da_block_update and pair each block_bytes with the corresponding RecordedBlock.

I honestly don't know the performance of remove for HashMap. get is O(1), but obviously doesn't mut so remove probably does re-scaling and other heap garbage. The most performant would be a VecDeque and split_off probably? The order is now an issue then. I was going to say we don't need to include the height (just the bytes) if we trust the order, but just in case we probably should and throw an error if it doesn't match the recorded_block it is paired with.
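The VecDeque idea above could be sketched roughly like this; `drain_up_to` is a hypothetical helper (not code from this PR) and it assumes entries were inserted in ascending height order:

```rust
use std::collections::VecDeque;

// Hypothetical sketch of the VecDeque approach: keep (height, bytes) pairs
// in insertion order and drain everything up to the recorded height with
// `split_off`. Assumes ascending heights; not the PR's actual code.
fn drain_up_to(
    unrecorded: &mut VecDeque<(u32, u64)>, // (height, bytes)
    height: u32,
) -> Vec<(u32, u64)> {
    // Index of the first entry *after* `height`; everything before it drains.
    let split_at = unrecorded
        .iter()
        .position(|(h, _)| *h > height)
        .unwrap_or(unrecorded.len());
    // `split_off` keeps [0, split_at) in `unrecorded` and returns the tail,
    // so swap the tail back in and hand the drained prefix to the caller.
    let rest = unrecorded.split_off(split_at);
    let drained = std::mem::replace(unrecorded, rest);
    drained.into_iter().collect()
}
```

Per the later commits in this PR ("Use BTreeMap instead of HashMap", "Use `pop_first` instead of `remove`"), the branch ultimately went with an ordered map rather than trusting insertion order.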

Member Author @MitchTurner (Sep 25, 2024):
> Anyway, leaving this comment for consideration as we might think about protecting unrecorded_blocks from growing indefinitely in case of some unexpected flow.

Yeah. It's directly relevant to what we're talking about. We definitely make assumptions about the order.

Talking to the rollup team guys, it sounds like they might not want to guarantee order in the long run, if that's the case then the HashMap approach might be the best. We could even get rid of SkippedL2Block I think... except we've lost info on the best cost_for_byte for L2 blocks that aren't in order.

Collaborator:
I meant: the block committer submits blocks in bundles, so a bundle can have several blocks inside. When you receive the notification from the block committer about a DA submission, you can have unrecorded_blocks = vec![block_height-5, block_height-4, ... block_height]. Then instead of removing only one entry, you need to remove all entries up to block_height.

If that is something that we want, then BTreeMap is better=)
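A minimal sketch of that BTreeMap variant (illustrative names, not the merged code): `split_off` returns everything at or above the given key, so one call separates the still-unrecorded tail from the entries being consumed:

```rust
use std::collections::BTreeMap;

// Sketch only: drain all entries with key <= `height` from the map and
// return them; keys strictly greater than `height` stay behind. The
// u32::MAX edge case is papered over with `saturating_add`.
fn remove_up_to(
    unrecorded: &mut BTreeMap<u32, u64>, // height -> bytes
    height: u32,
) -> BTreeMap<u32, u64> {
    // Everything >= height + 1 remains unrecorded.
    let still_unrecorded = unrecorded.split_off(&height.saturating_add(1));
    // Swap it back in; the old contents (<= height) are the drained blocks.
    std::mem::replace(unrecorded, still_unrecorded)
}
```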

Member Author @MitchTurner (Sep 25, 2024):
We are currently removing all in the bundle. We are just iterating over the recorded blocks and removing one at a time:

    pub fn update_da_record_data(
        &mut self,
        blocks: &[RecordedBlock],
    ) -> Result<(), Error> {
        for block in blocks {
            self.da_block_update(block.height, block.block_cost)?;
        }
        self.recalculate_projected_cost();
        self.normalize_rewards_and_costs();
        Ok(())
    }

What I was saying is we could get them all first in some efficient way:

    pub fn update_da_record_data(
        &mut self,
        blocks: &[RecordedBlock],
    ) -> Result<(), Error> {
        let bytes = self.remove_bytes_for_recorded_blocks(&blocks)?;
        for (block, bytes) in blocks.iter().zip(bytes) {
            self.da_block_update(block.height, block.block_cost, bytes)?;
        }
        self.recalculate_projected_cost();
        self.normalize_rewards_and_costs();
        Ok(())
    }

Yeah maybe BTreeMap makes the most sense for the ordered case.

Collaborator:
RecordedBlock data is linked to each other; it just has the latest block height.

cc @rymnc Since he wrote down the interface that we agreed with Rollup team. Maybe I remember it incorrectly)

Member Author:
Yes. I was part of those conversations and we've discussed what is required for the algo.

Member @rymnc (Sep 26, 2024):
This is what I have -

struct Bundle {
	sequence_number: u64,
	blocks_range: Range<BlockHeight>,
	// The DA block height of the last transaction in the bundle.
	da_block_height: DaBlockHeight,
	// Total cost of all bundles for the whole history.
	total_cost: u256,
	// Total size of all bundles for the whole history.
	total_size: u256,
}

trait CommitterAPI {
	fn get_n_last_bundle(&self, number: u64) -> Result<Bundle>;

	// Range is based on `sequence_number`
	fn get_bundles_by_range(&self, range: Range<u64>) -> Result<Vec<Bundle>>;
}

Member Author:
Right. Okay. So in that case we can just sum all the block bytes for that range and do a single cost calculation. Sounds like maybe that's what you were suggesting, @xgreenx. I was still under the impression that more processing would happen on the service side, but this actually works fine.
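That "sum the bytes and do a single cost calculation" idea could look roughly like this (a hypothetical helper; the names and signature are assumptions, not the interface the teams agreed on):

```rust
use std::collections::BTreeMap;

// Sketch: drain the bundle's height range from `unrecorded_blocks`, sum
// the bytes, and derive one cost per byte for the whole bundle. Returns
// None if no blocks in the range were tracked (avoids division by zero).
fn cost_per_byte_for_bundle(
    unrecorded_blocks: &mut BTreeMap<u32, u64>, // height -> bytes
    end_height: u32,
    bundle_cost: u128,
) -> Option<u128> {
    let mut total_bytes: u128 = 0;
    // `first_entry` always yields the smallest height, so entries pop in order.
    while let Some(entry) = unrecorded_blocks.first_entry() {
        if *entry.key() > end_height {
            break;
        }
        let (_, bytes) = entry.remove_entry();
        total_bytes = total_bytes.saturating_add(bytes as u128);
    }
    // `checked_div` yields None when `total_bytes` is zero.
    bundle_cost.checked_div(total_bytes)
}
```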

.unrecorded_blocks
.remove(&height)
.ok_or(Error::L2BlockExpectedNotFound(height))?;
let new_cost_per_byte: u128 = (block_cost as u128)
.checked_div(block_bytes as u128)
.ok_or(Error::CouldNotCalculateCostPerByte {
@@ -402,15 +397,12 @@ impl AlgorithmUpdaterV1 {
}

fn recalculate_projected_cost(&mut self) {
// remove all blocks that have been recorded
self.unrecorded_blocks
.retain(|block| block.height > self.da_recorded_block_height);
// add the cost of the remaining blocks
let projection_portion: u128 = self
.unrecorded_blocks
.iter()
.map(|block| {
(block.block_bytes as u128).saturating_mul(self.latest_da_cost_per_byte)
.map(|(_, &bytes)| {
(bytes as u128).saturating_mul(self.latest_da_cost_per_byte)
})
.sum();
self.projected_total_da_cost = self
17 changes: 12 additions & 5 deletions crates/fuel-gas-price-algorithm/src/v1/tests.rs
Original file line number Diff line number Diff line change
@@ -2,10 +2,7 @@
#![allow(clippy::arithmetic_side_effects)]
#![allow(clippy::cast_possible_truncation)]

use crate::v1::{
AlgorithmUpdaterV1,
BlockBytes,
};
use crate::v1::AlgorithmUpdaterV1;

#[cfg(test)]
mod algorithm_v1_tests;
@@ -14,6 +11,12 @@ mod update_da_record_data_tests
#[cfg(test)]
mod update_l2_block_data_tests;

#[derive(Debug, Clone)]
pub struct BlockBytes {
pub height: u32,
pub block_bytes: u64,
}

pub struct UpdaterBuilder {
min_exec_gas_price: u64,
min_da_gas_price: u64,
@@ -175,7 +178,11 @@ impl UpdaterBuilder {
latest_da_cost_per_byte: self.da_cost_per_byte,
projected_total_da_cost: self.project_total_cost,
latest_known_total_da_cost_excess: self.latest_known_total_cost,
unrecorded_blocks: self.unrecorded_blocks,
unrecorded_blocks: self
.unrecorded_blocks
.iter()
.map(|b| (b.height, b.block_bytes))
.collect(),
last_profit: self.last_profit,
second_to_last_profit: self.second_to_last_profit,
min_da_gas_price: self.min_da_gas_price,