
rpcdaemon sending invalid responses #1450

Closed
MysticRyuujin opened this issue Jan 22, 2021 · 4 comments

@MysticRyuujin (Contributor)

I've been wracking my brain trying to figure out why haproxy is complaining that rpcdaemon is sending invalid responses.

I've finally narrowed it down to the exact bytes in the reply that haproxy doesn't like.

In the haproxy debug below, you'll see that it reports an error at position 130, which corresponds to the line 00130  d8930676\r\n. I have absolutely no idea what that is, so I'm stuck with no idea how to resolve it. I was hoping someone would recognize it.

Even Wireshark tells me the response is a malformed packet and fails to parse it correctly.

[two screenshots attached: the haproxy debug capture and Wireshark flagging the response as a malformed packet]

The curl command that reproduces this is quite ridiculous (the output is 3.4GB), but here it is:

curl -X POST  -H 'Content-Type: application/json' --data '{"jsonrpc":"2.0","method":"debug_traceTransaction","params":["0x7a97e47f9399b46c3e8ee444778763a4f331fa182b5c28c1ba98709721f71f7d"],"id":1}'

haproxy error debug:

Total events captured on [22/Jan/2021:18:41:25.300] : 1
 
[22/Jan/2021:18:41:20.864] backend turbogeth (#11): invalid response
  frontend http-https-in (#2), server archive04 (#2), event #0, src <redacted>:39386
  buffer starts at 0 (including 0 out), 14936 free,
  len 1448, wraps at 16336, error at position 130
  H1 connection flags 0x00000000, H1 stream flags 0x00004014
  H1 msg state MSG_CHUNK_SIZE(26), H1 msg flags 0x00001716
  H1 chunk len 0 bytes, H1 body len 0 bytes :
  
  00000  HTTP/1.1 200 OK\r\n
  00017  Content-Type: application/json\r\n
  00049  Vary: Origin\r\n
  00063  Date: Fri, 22 Jan 2021 18:41:20 GMT\r\n
  00100  Transfer-Encoding: chunked\r\n
  00128  \r\n
  00130  d8930676\r\n
  00140  {"jsonrpc":"2.0","id":1,"result":{"gas":4300872,"failed":false,"return
  00210+ Value":"","structLogs":[{"pc":0,"op":"CALLDATASIZE","gas":5810070,"gas
  00280+ Cost":2,"depth":1,"stack":[],"memory":[],"storage":{}},{"pc":1,"op":"R
  00350+ ETURNDATASIZE","gas":5810068,"gasCost":2,"depth":1,"stack":["000000000
  00420+ 0000000000000000000000000000000000000000000000000000b24"],"memory":[],
  00490+ "storage":{}},{"pc":2,"op":"RETURNDATASIZE","gas":5810066,"gasCost":2,
  00560+ "depth":1,"stack":["00000000000000000000000000000000000000000000000000
  00630+ 00000000000b24","00000000000000000000000000000000000000000000000000000
  00700+ 00000000000"],"memory":[],"storage":{}},{"pc":3,"op":"CALLDATACOPY","g
  00770+ as":5810064,"gasCost":558,"depth":1,"stack":["000000000000000000000000
  00840+ 0000000000000000000000000000000000000b24","000000000000000000000000000
  00910+ 0000000000000000000000000000000000000","000000000000000000000000000000
  00980+ 0000000000000000000000000000000000"],"memory":["0000000000000000000000
  01050+ 000000000000000000000000000000000000000000","0000000000000000000000000
  01120+ 000000000000000000000000000000000000000","0000000000000000000000000000
  01190+ 000000000000000000000000000000000000","0000000000000000000000000000000
  01260+ 000000000000000000000000000000000","0000000000000000000000000000000000
  01330+ 000000000000000000000000000000","0000000000000000000000000000000000000
  01400+ 000000000000000000000000000","000000000000000000
@MysticRyuujin (Contributor, Author) commented Jan 22, 2021

tx 0x7bc310ccc81f328b22d1acc5b2b69e4e12b6b8d5ebfaa4d927574430487b13a9 results in the same behavior, but the bytes before the JSON body are different. From my Google searching, this is the chunk size, so it seems to be perfectly valid...

I also noticed that normal transactions are not chunked. I wonder if this is specific to responses that are chunked?

@MysticRyuujin (Contributor, Author)

I'm starting to suspect that this is an HAProxy issue and not an rpcdaemon issue. I'll open an issue on their forums.

@MysticRyuujin (Contributor, Author) commented Jan 23, 2021

HAProxy does not support chunk sizes larger than 2GB. I've filed a feature request with them, but god only knows if they'll do it, or when...

According to some of the Go forums I've read, this can be avoided by simply setting the Content-Length header before handing the response off to the net/http server. I don't know if that's an option here? For reference, this is the logic in Go's net/http server.go — chunking is only used when no Content-Length has been set:

...
	// Check for an explicit (and valid) Content-Length header.
	hasCL := w.contentLength != -1
...
	if w.req.Method == "HEAD" || !bodyAllowedForStatus(code) {
		// do nothing
	} else if code == StatusNoContent {
		delHeader("Transfer-Encoding")
	} else if hasCL {
		delHeader("Transfer-Encoding")
	} else if w.req.ProtoAtLeast(1, 1) {
		// HTTP/1.1 or greater: Transfer-Encoding has been set to identity, and no
		// content-length has been provided. The connection must be closed after the
		// reply is written, and no chunking is to be done. This is the setup
		// recommended in the Server-Sent Events candidate recommendation 11,
		// section 8.
		if hasTE && te == "identity" {
			cw.chunking = false
			w.closeAfterReply = true
		} else {
			// HTTP/1.1 or greater: use chunked transfer encoding
			// to avoid closing the connection at EOF.
			cw.chunking = true
			setHeader.transferEncoding = "chunked"
			if hasTE && te == "chunked" {
				// We will send the chunked Transfer-Encoding header later.
				delHeader("Transfer-Encoding")
			}
		}
	} else {
		// HTTP version < 1.1: cannot do chunked transfer
		// encoding and we don't know the Content-Length so
		// signal EOF by closing connection.
		w.closeAfterReply = true
		delHeader("Transfer-Encoding") // in case already set
	}

@AlexeyAkhunov (Contributor)

> I don't know if that's an option here?

Definitely an option, just need to find where to put it :)

cffls added a commit to cffls/erigon that referenced this issue Jan 6, 2025
… (erigontech#1450)

* Remove redundant writes when a state object is reverted (erigontech#21)

* Remove redundant writes when a state object is reverted

* Change IsDirty to Transaction level

We don't want a reverted transaction to show up in the written trace, because it was touched by a previous transaction.

* Add storage read whenever there is an SSTORE

This fixes an issue where a storage slot is
* written but then reverted
* never read by the SLOAD opcode

When this happens, we still need to include the storage slot in the trace.

* fix test