Conversation

@ltitanb ltitanb commented Oct 23, 2024

Based on feedback from the Sigma Prime auditors:

  • load the response body in chunks, so we avoid holding the entire payload in memory at once
  • adjust the max size limits

@ltitanb ltitanb requested a review from fbrv October 23, 2024 11:06
let mut stream = res.bytes_stream();
let mut response_bytes = Vec::new();

while let Some(chunk) = stream.next().await {
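For context, the size-capped accumulation the diff above performs can be sketched as a minimal, self-contained function. This uses a plain iterator of byte slices in place of reqwest's async `bytes_stream()`, and a plain `String` error instead of `PbsError`; the function name and signature are illustrative, not the PR's actual code:

```rust
// Hedged sketch: accumulate chunks into a buffer, failing as soon as
// the total would exceed `max_size` (assumed analog of the PR's loop).
fn read_chunks_with_max<'a, I>(chunks: I, max_size: usize) -> Result<Vec<u8>, String>
where
    I: IntoIterator<Item = &'a [u8]>,
{
    let mut acc = Vec::new();
    for chunk in chunks {
        // Check the prospective size *before* copying the chunk in,
        // so the buffer never grows past the cap.
        if acc.len() + chunk.len() > max_size {
            return Err(format!("payload exceeds max size of {} bytes", max_size));
        }
        acc.extend_from_slice(chunk);
    }
    Ok(acc)
}

fn main() {
    let chunks: Vec<&[u8]> = vec![b"hello", b" ", b"world"];
    // Within the cap: all chunks are concatenated.
    assert_eq!(read_chunks_with_max(chunks.clone(), 64).unwrap(), b"hello world".to_vec());
    // Over the cap: the accumulation is aborted early.
    assert!(read_chunks_with_max(chunks, 8).is_err());
    println!("ok");
}
```

The early size check is the point of the change: the response is bounded while it streams, rather than after it has been fully buffered.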
Contributor commented:

Wonder if we can do something like this, so we avoid the mutable variables:

// Requires the futures TryStreamExt trait in scope for map_err/try_fold.
pub async fn read_chunked_body_with_max(
    res: Response,
    max_size: usize,
) -> Result<Vec<u8>, PbsError> {
    res.bytes_stream()
        .map_err(PbsError::Reqwest)
        .try_fold(Vec::new(), |mut acc, chunk| async move {
            if acc.len() + chunk.len() > max_size {
                // Fail as soon as the accumulated body would exceed the cap.
                Err(PbsError::PayloadTooLarge {
                    max: max_size,
                    raw: String::from_utf8_lossy(&acc).into_owned(),
                })
            } else {
                acc.extend_from_slice(&chunk);
                Ok(acc)
            }
        })
        .await
}

Collaborator (Author) replied:

I think this could work, but I'm not sure it's more readable?

@ltitanb ltitanb merged commit a293bdb into main Oct 24, 2024
5 checks passed
@ltitanb ltitanb deleted the ltitanb/fix-max-payload branch October 24, 2024 11:17
2 participants