
"Consumed all prefetched bytes" when using Google Chrome with tls 1.3 hybridized kyber #250

Closed
Monviech opened this issue Oct 2, 2024 · 2 comments

Monviech commented Oct 2, 2024

Even after patching it with commit ea27408, I still get the errors about "consumed all prefetched bytes":

```
2024-10-02T08:08:55	Error	caddy	"debug","ts":"2024-10-02T06:08:55Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"[2003:a:1704:63aa:49bb:da64:f9ee:2bd8]:51661","error":"consumed all prefetched bytes","matcher":"layer4.matchers.tls","matched":false}
2024-10-02T08:08:55	Debug	caddy	"debug","ts":"2024-10-02T06:08:55Z","logger":"caddy.listeners.layer4","msg":"prefetched","remote":"[2003:a:1704:63aa:49bb:da64:f9ee:2bd8]:51661","bytes":1432}
2024-10-02T08:08:55	Error	caddy	"debug","ts":"2024-10-02T06:08:55Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"[2003:a:1704:63aa:49bb:da64:f9ee:2bd8]:51661","error":"consumed all prefetched bytes","matcher":"layer4.matchers.tls","matched":false}
```
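
For context, my best understanding: 1432 bytes is roughly one TCP segment, while a ClientHello carrying a hybridized Kyber key share is larger than a single segment, so one prefetch read captures only part of the TLS record and the matcher runs out of bytes. A minimal sketch of that arithmetic using the 5-byte TLS record header (the function and the sizes are illustrative, not the actual layer4 code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// needMoreBytes reports whether the prefetched buffer holds a complete first
// TLS record. A TLS record starts with a 5-byte header: content type (1 byte),
// legacy version (2 bytes), payload length (2 bytes, big-endian).
func needMoreBytes(prefetched []byte) (bool, error) {
	if len(prefetched) < 5 {
		return true, nil // not even a full record header yet
	}
	if prefetched[0] != 0x16 { // 0x16 = handshake record
		return false, fmt.Errorf("not a TLS handshake record")
	}
	recordLen := int(binary.BigEndian.Uint16(prefetched[3:5]))
	// Complete only if header + declared payload fit in the buffer.
	return len(prefetched) < 5+recordLen, nil
}

func main() {
	// A Kyber-sized ClientHello declares a payload bigger than one segment,
	// but only ~1432 bytes arrive in the first read.
	hdr := []byte{0x16, 0x03, 0x01, 0x07, 0x3f} // declared length 0x073f = 1855
	buf := append(hdr, make([]byte, 1432-len(hdr))...)
	more, _ := needMoreBytes(buf)
	fmt.Println("need more bytes:", more) // true: 1432 < 5+1855
}
```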

I asked ChatGPT (yeah, I know) for a solution, and it came up with this:

```diff
diff --git a/layer4/connection.go b/layer4/connection.go
index abcdefg..hijklmn 100644
--- a/layer4/connection.go
+++ b/layer4/connection.go
@@ -101,13 +101,30 @@ func (cx *Connection) Write(p []byte) (n int, err error) {
 	return
 }
 
-// prefetch tries to read all bytes that a client initially sent us without blocking.
+// prefetch tries to read all bytes that a client initially sent us without blocking,
+// up to MaxMatchingBytes. It reads in multiple chunks if necessary.
 func (cx *Connection) prefetch() (err error) {
 	var n int
 
-	// read once
-	if len(cx.buf) < MaxMatchingBytes {
-		free := cap(cx.buf) - len(cx.buf)
+	// Set a read deadline to prevent indefinite blocking
+	deadline := time.Now().Add(100 * time.Millisecond) // Adjust as needed
+	cx.Conn.SetReadDeadline(deadline)
+	defer cx.Conn.SetReadDeadline(time.Time{}) // Reset the deadline after prefetching
+
+	for len(cx.buf) < MaxMatchingBytes {
+		free := cap(cx.buf) - len(cx.buf)
+		if free >= prefetchChunkSize {
+			n, err = cx.Conn.Read(cx.buf[len(cx.buf) : len(cx.buf)+prefetchChunkSize])
+			cx.buf = cx.buf[:len(cx.buf)+n]
+		} else {
+			tmp := bufPool.Get().([]byte)
+			tmp = tmp[:prefetchChunkSize]
+			defer bufPool.Put(tmp)
+
+			n, err = cx.Conn.Read(tmp)
+			cx.buf = append(cx.buf, tmp[:n]...)
+		}
 
 		cx.bytesRead += uint64(n)
 
@@ -117,8 +134,22 @@
 			return err
 		}
 
-		if cx.Logger.Core().Enabled(zap.DebugLevel) {
-			cx.Logger.Debug("prefetched",
+		if n == 0 {
+			// No more data was read; exit the loop
+			break
+		}
+
+		if cx.Logger.Core().Enabled(zap.DebugLevel) {
+			cx.Logger.Debug("prefetched",
 				zap.String("remote", cx.RemoteAddr().String()),
 				zap.Int("bytes", len(cx.buf)),
 			)
 		}
 
 		return nil
 	}
+
+	if len(cx.buf) >= MaxMatchingBytes {
+		return ErrMatchingBufferFull
+	}
 
 	return nil
 }
+
+// isTemporaryError checks whether the error is temporary.
+func isTemporaryError(err error) bool {
+	netErr, ok := err.(net.Error)
+	return ok && netErr.Temporary()
+}
```

Key changes:

1.	Loop added to read in chunks until the buffer reaches MaxMatchingBytes.
2.	Read deadline added to prevent indefinite blocking on reads.
3.	New helper function isTemporaryError() added to handle temporary network errors.
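
Note that the quoted diff still leaves a `return nil` inside the loop, so the patched function effectively returns after the first successful read anyway. For reference, the same idea without that early return could look like the sketch below; the function name, parameters, and the net.Pipe harness are illustrative, not the actual layer4.Connection code:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// prefetchLoop sketches the chunked-prefetch idea from the diff above:
// keep reading until no more initial data arrives or the matching budget
// is exhausted. Names here are illustrative, not the real layer4 fields.
func prefetchLoop(conn net.Conn, buf []byte, maxMatchingBytes, chunkSize int) ([]byte, error) {
	// Bound the whole prefetch so a silent client cannot stall matching forever.
	_ = conn.SetReadDeadline(time.Now().Add(100 * time.Millisecond))
	defer conn.SetReadDeadline(time.Time{})

	for len(buf) < maxMatchingBytes {
		tmp := make([]byte, chunkSize)
		n, err := conn.Read(tmp)
		buf = append(buf, tmp[:n]...)
		if err != nil {
			var netErr net.Error
			if errors.As(err, &netErr) && netErr.Timeout() {
				break // deadline hit: treat what we have as the full prefetch
			}
			return buf, err
		}
		if n == 0 {
			break
		}
	}
	return buf, nil
}

func main() {
	client, server := net.Pipe()
	go server.Write([]byte("pretend this is the first ClientHello segment"))
	buf, err := prefetchLoop(client, nil, 4096, 1024)
	fmt.Printf("prefetched %d bytes, err=%v\n", len(buf), err)
}
```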

After using that patch, it suddenly started working for me. I do not know why; I just want to open this issue here so that it can be tracked better.

EDIT:

Here are the logs of what happens after this patch:

```
2024-10-02T19:43:46	Debug	caddy	"debug","ts":"2024-10-02T17:43:46Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"[2003:a:1704:6398:9dc9:27a0:1693:306e]:53689","matcher":"layer4.matchers.tls","matched":true}
2024-10-02T19:43:46	Debug	caddy	"debug","ts":"2024-10-02T17:43:46Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"[2003:a:1704:6398:9dc9:27a0:1693:306e]:53689","matcher":"layer4.matchers.tls","matched":false}
2024-10-02T19:43:46	Debug	caddy	"debug","ts":"2024-10-02T17:43:46Z","logger":"caddy.listeners.layer4","msg":"prefetched","remote":"[2003:a:1704:6398:9dc9:27a0:1693:306e]:53689","bytes":1839}
2024-10-02T19:43:46	Debug	caddy	"debug","ts":"2024-10-02T17:43:46Z","logger":"caddy.listeners.layer4","msg":"prefetched","remote":"[2003:a:1704:6398:9dc9:27a0:1693:306e]:53689","bytes":1420}
2024-10-02T19:43:46	Error	caddy	"debug","ts":"2024-10-02T17:43:46Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"[2003:a:1704:6398:9dc9:27a0:1693:306e]:53689","error":"consumed all prefetched bytes","matcher":"layer4.matchers.tls","matched":false}
```

It prefetches two times before it matches, first with 1420 bytes and then with 1839 (the log lines above are newest-first).
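
That is consistent with the matching loop retrying: when the TLS matcher reports it consumed all prefetched bytes, the listener prefetches more and runs the matcher again. A toy model of that retry behavior with the byte counts from the logs (hypothetical signatures, not the actual layer4 internals):

```go
package main

import (
	"errors"
	"fmt"
)

// errConsumedAllPrefetched mirrors the error string seen in the logs.
var errConsumedAllPrefetched = errors.New("consumed all prefetched bytes")

// matchWithRetry runs the matcher and, whenever it reports that it consumed
// all prefetched bytes, prefetches more and tries again.
func matchWithRetry(prefetch func() ([]byte, error), match func([]byte) (bool, error)) (bool, error) {
	var buf []byte
	for {
		chunk, err := prefetch()
		if err != nil {
			return false, err
		}
		if len(chunk) == 0 {
			// nothing more arrived; give up instead of spinning
			return false, errConsumedAllPrefetched
		}
		buf = append(buf, chunk...)
		matched, err := match(buf)
		if errors.Is(err, errConsumedAllPrefetched) {
			continue // ClientHello still incomplete; fetch the rest
		}
		return matched, err
	}
}

func main() {
	chunks := [][]byte{make([]byte, 1420), make([]byte, 419)} // 1420 bytes, then 1839 total
	prefetch := func() ([]byte, error) {
		if len(chunks) == 0 {
			return nil, nil
		}
		c := chunks[0]
		chunks = chunks[1:]
		return c, nil
	}
	match := func(buf []byte) (bool, error) {
		if len(buf) < 1839 { // pretend the full ClientHello spans 1839 bytes
			return false, errConsumedAllPrefetched
		}
		return true, nil
	}
	fmt.Println(matchWithRetry(prefetch, match)) // true <nil>
}
```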

Monviech commented Oct 3, 2024

Adding some troubleshooting I did with the help of @elee1766 (thanks):

  • It happens on FreeBSD, seemingly because FreeBSD is too fast in some way (the prefetch reads before the rest of the ClientHello has arrived).
  • I could not reproduce it with the same configuration on Linux (Debian).

Adding an arbitrary sleep (e.g. 50 ms) at this spot also fixed it for FreeBSD:

```go
// read once
if len(cx.buf) < MaxMatchingBytes {
```
It would be really nice if something here could be patched, since I'm maintaining a port of this for OPNsense (currently FreeBSD 14), and the TLS matcher just stopped working there for Chromium-based browsers (Chrome, Edge, ...). (I know of one more user report: https://forum.opnsense.org/index.php?topic=42955.0)

Monviech commented
Fixed for me with these changes: opnsense/plugins#4323
