Limit size of requests with transfer-encoding: chunked
#853
Comments
Hyper won't read more than the `content-length`. Edit: Such as …
Hmm, I dunno. I think I found where it enforces that in the HTTP/1 implementation: https://github.com/hyperium/hyper/blob/master/src/proto/h1/decode.rs#L115 There are a couple of problems I see here: …
Hyper capping it seems fine to me.
If I understand correctly, that sounds like a bug.
Also sounds like a bug to me.
I suppose it's documented in the HTTP spec 😛 not sure if it's in hyper's docs, but I've heard @seanmonstar mention it many times, so it's safe to rely on. I believe it's also mentioned in hyper's "Vision" doc PR.
If you don't require a content length or use … Imo this is something hyper should handle. It feels very "transport level" to me 🤔 Cc @seanmonstar
The whole time I've been talking about being able to place limits on impls that collect the request body to a heap allocation before processing it, so `Bytes`, `Json<T>`, etc. The idea is that they would be guaranteed to bail with an error when they reached a limit instead of relying on the underlying transport to stop reading the body early.
If you're processing the request body in a streaming fashion then you can enforce the limits yourself (if it makes sense to), but most request handlers out there are going to be using one of the above types to collect or deserialize the request body into a container instead.
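A minimal sketch of that idea, using blocking std I/O for brevity instead of the async body types axum actually uses (`collect_limited` is a hypothetical helper, not an existing API):

```rust
use std::io::{self, Read};

/// Collect at most `limit` bytes, failing instead of buffering an
/// unbounded body. (Hypothetical helper; axum's extractors are async.)
fn collect_limited<R: Read>(reader: &mut R, limit: u64) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Read one byte past the limit so "exactly at the limit" can be
    // distinguished from "over the limit".
    let n = reader.take(limit + 1).read_to_end(&mut buf)?;
    if n as u64 > limit {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "request body exceeded configured limit",
        ));
    }
    Ok(buf)
}

fn main() {
    let mut body: &[u8] = b"0123456789";
    assert!(collect_limited(&mut body, 8).is_err());
}
```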
I must be misunderstanding still. Feels like we are going in a circle, because that is what … does. If that's not what you mean, can you provide an example?
Except I would prefer not to require a `content-length`.
Are requests without a `content-length` valid?
If you make a request like this:

```rust
use std::{net::TcpStream, io::Write};

fn main() {
    let mut stream = TcpStream::connect("0.0.0.0:3000").unwrap();
    stream.write_all(b"POST / HTTP/1.1\r\n").unwrap();
    // no content-length and no chunked encoding
    stream.write_all(b"\r\n").unwrap();
    stream.write_all(b"body...").unwrap();
    stream.write_all(b"\r\n").unwrap();
}
```

The body is empty. This matches what I found here.
Yeah, without a chunked encoding or a `content-length`, the body is considered empty.
HTTP/2 also does not require a `content-length`.
Why didn't you just say that 😅 or maybe you did and I didn't catch it. I'm not aware of a way to limit the length of chunked requests/streams (which are the same thing from a user of hyper's perspective, afaik). @seanmonstar any advice? Edit: To me this still kinda feels like something that should be configurable in hyper. All the data flows through it, after all 🤷
Do you have a reference for that? The existence of …
I'm also getting that "going in circles" feeling, because I already explained what I want it to do: …
You didn't mention not wanting to require content lengths there. |
I would prefer not to in any case. |
> `transfer-encoding: chunked`

Rejecting a request with a chunked body does not make the server non-compliant. It's a perfectly reasonable thing to do. (And both the HTTP/1 and HTTP/2 …) If you want to limit how many bytes are streamed with a chunked request body, I think the best place to do that is a layer, or at least a pre-condition to a handler, that wraps the request body in some …
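A rough sketch of a wrapper of that sort, modeled as a stream of chunks rather than the real `http_body::Body` trait to keep it short (the `limit_body` function and its error type are illustrative, not an actual axum or hyper API):

```rust
use bytes::Bytes;
use futures::{executor, future, stream, Stream, StreamExt};

/// Illustrative sketch: wrap a stream of body chunks so it yields an error
/// once more than `limit` bytes have been seen. A real implementation would
/// also stop polling the inner body after the error.
fn limit_body<S>(body: S, limit: usize) -> impl Stream<Item = Result<Bytes, &'static str>>
where
    S: Stream<Item = Result<Bytes, &'static str>>,
{
    body.scan(0usize, move |seen, chunk| {
        let item = chunk.and_then(|chunk| {
            *seen += chunk.len();
            if *seen > limit {
                Err("length limit exceeded")
            } else {
                Ok(chunk)
            }
        });
        future::ready(Some(item))
    })
}

fn main() {
    let chunks = stream::iter(vec![
        Ok(Bytes::from_static(b"hello")),
        Ok(Bytes::from_static(b"world")),
    ]);
    // The first chunk (5 bytes) passes a 6-byte limit; the second pushes
    // the running total to 10 and comes out as an error.
    let out = executor::block_on(limit_body(chunks, 6).collect::<Vec<_>>());
    assert!(out.last().unwrap().is_err());
}
```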
We are working on adding a `Limited` body type to http-body. See hyperium/http-body#60 and hyperium/http-body#61. So I'll close this now since nothing needs to be added directly to axum. Though we can probably improve `ContentLengthLimit`.
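Once that lands, usage would presumably look something like this (a sketch against the `Limited` API from those PRs, so names and paths may shift before release):

```rust
use http_body::Limited;
use hyper::{body, Body};

#[tokio::main]
async fn main() {
    // Assumes the Limited type as proposed in hyperium/http-body#60:
    // collecting the wrapped body fails once more than 16 bytes have been
    // read, regardless of what any content-length header claims.
    let limited = Limited::new(Body::from("this body is definitely too long"), 16);
    assert!(body::to_bytes(limited).await.is_err());
}
```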
Feature Request
Motivation
While revisiting the following Reddit thread, I came across the conversation I had with LukeMathWalker (author of Zero2Prod) about input validation: https://www.reddit.com/r/rust/comments/shetb1/_/hv3ch7n
It occurred to me that Axum may not be putting any limits on request bodies, and looking through the source, I'm pretty sure that is the case.
While application-specific input validation is still important, it's something you have to remember to do, and even if you do think to do it, e.g. with domain types like Luke suggested, the `Json` extractor will still read the whole request into memory before attempting to deserialize.

This means it's relatively trivial to DOS an Axum-based server by running it out of memory. Bandwidth may not necessarily be a limiting factor, either: if the attacker gets a cloud instance running in the same datacenter as your application, they will have a more or less unbounded pipe to shove junk data into your application as fast as possible.
Actix Web has built-in mitigation for this attack, as its version of the `Bytes` extractor, and thus all the extractors built on it, will limit the body to 256 KiB by default.

I did see the `ContentLengthLimit` extractor, but that literally only checks the `Content-Length` header, which can easily be forged by an attacker.
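To make that concrete, in the same raw-socket style as the example earlier in the thread, a client can stream an arbitrarily large chunked body without ever sending a `Content-Length` for the extractor to inspect (address and route are placeholders):

```rust
use std::{io::Write, net::TcpStream};

fn main() {
    let mut stream = TcpStream::connect("0.0.0.0:3000").unwrap();
    stream.write_all(b"POST / HTTP/1.1\r\n").unwrap();
    stream.write_all(b"transfer-encoding: chunked\r\n\r\n").unwrap();
    // Send 1 KiB chunks forever: each chunk is its size in hex, CRLF,
    // the data, CRLF. No content-length header is ever sent.
    let chunk = [b'x'; 1024];
    loop {
        write!(stream, "{:x}\r\n", chunk.len()).unwrap();
        stream.write_all(&chunk).unwrap();
        stream.write_all(b"\r\n").unwrap();
    }
}
```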
Proposal

I'm not sure of an acceptable solution here. In that same Reddit thread you explained why you don't like how Actix Web configures its extractors, like for setting the size limit in this case.
It could be a const generic parameter, but the `Bytes` type isn't defined in the crate, so a new type would have to be introduced to add it (sketched below). Ideally this would apply to existing code as well, but that would require changing the behavior of the `Bytes` extractor. It could also break routes that normally expect bigger bodies than the default we choose.

I'm thinking of an extension layer that does the same work as the `Bytes` extractor but enforces the given limit, and then sets the extracted body as a request extension that the `Bytes` extractor checks first before anything else. That's kind of the same "action at a distance" that you said you don't like, though.
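For illustration, the const-generic flavor might look roughly like this (`LimitedBytes` is a hypothetical name, not a proposed API):

```rust
use bytes::Bytes;

/// Hypothetical wrapper: the limit travels in the type, so each route can
/// opt in to its own cap without extra configuration plumbing.
pub struct LimitedBytes<const LIMIT: usize>(pub Bytes);

impl<const LIMIT: usize> LimitedBytes<LIMIT> {
    /// Accept an already-collected buffer only if it is within the limit;
    /// a real extractor would enforce this while streaming instead.
    pub fn new(bytes: Bytes) -> Result<Self, &'static str> {
        if bytes.len() > LIMIT {
            Err("request body exceeded limit")
        } else {
            Ok(Self(bytes))
        }
    }
}

fn main() {
    assert!(LimitedBytes::<8>::new(Bytes::from("definitely more than eight bytes")).is_err());
}
```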
Alternatives

Don't limit request bodies, basically the status quo.