Even after patching it with commit ea27408, I still get the "consumed all prefetched bytes" errors:
```
2024-10-02T08:08:55 Error caddy {"level":"debug","ts":"2024-10-02T06:08:55Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"[2003:a:1704:63aa:49bb:da64:f9ee:2bd8]:51661","error":"consumed all prefetched bytes","matcher":"layer4.matchers.tls","matched":false}
2024-10-02T08:08:55 Debug caddy {"level":"debug","ts":"2024-10-02T06:08:55Z","logger":"caddy.listeners.layer4","msg":"prefetched","remote":"[2003:a:1704:63aa:49bb:da64:f9ee:2bd8]:51661","bytes":1432}
2024-10-02T08:08:55 Error caddy {"level":"debug","ts":"2024-10-02T06:08:55Z","logger":"caddy.listeners.layer4","msg":"matching","remote":"[2003:a:1704:63aa:49bb:da64:f9ee:2bd8]:51661","error":"consumed all prefetched bytes","matcher":"layer4.matchers.tls","matched":false}
```
I asked ChatGPT (yeah, I know) for a solution, and it came up with this:
```diff
diff --git a/layer4/connection.go b/layer4/connection.go
index abcdefg..hijklmn 100644
--- a/layer4/connection.go
+++ b/layer4/connection.go
@@ -101,13 +101,30 @@ func (cx *Connection) Write(p []byte) (n int, err error) {
 	return
 }
 
-// prefetch tries to read all bytes that a client initially sent us without blocking.
+// prefetch tries to read all bytes that a client initially sent us without blocking,
+// up to MaxMatchingBytes. It reads in multiple chunks if necessary.
 func (cx *Connection) prefetch() (err error) {
-	var n int
-
-	// read once
-	if len(cx.buf) < MaxMatchingBytes {
-		free := cap(cx.buf) - len(cx.buf)
+	var n int
+
+	// Set a read deadline to prevent indefinite blocking
+	deadline := time.Now().Add(100 * time.Millisecond) // Adjust as needed
+	cx.Conn.SetReadDeadline(deadline)
+	defer cx.Conn.SetReadDeadline(time.Time{}) // Reset the deadline after prefetching
+
+	for len(cx.buf) < MaxMatchingBytes {
+		free := cap(cx.buf) - len(cx.buf)
+		if free >= prefetchChunkSize {
+			n, err = cx.Conn.Read(cx.buf[len(cx.buf) : len(cx.buf)+prefetchChunkSize])
+			cx.buf = cx.buf[:len(cx.buf)+n]
+		} else {
+			var tmp []byte
+			tmp = bufPool.Get().([]byte)
+			tmp = tmp[:prefetchChunkSize]
+			defer bufPool.Put(tmp)
+
+			n, err = cx.Conn.Read(tmp)
+			cx.buf = append(cx.buf, tmp[:n]...)
+		}
 
 		cx.bytesRead += uint64(n)
 
@@ -117,8 +134,22 @@ func (cx *Connection) prefetch() (err error) {
 			return err
 		}
 
-		if cx.Logger.Core().Enabled(zap.DebugLevel) {
-			cx.Logger.Debug("prefetched",
+		if n == 0 {
+			// No more data was read; exit the loop
+			break
+		}
+
+		if cx.Logger.Core().Enabled(zap.DebugLevel) {
+			cx.Logger.Debug("prefetched",
 				zap.String("remote", cx.RemoteAddr().String()),
 				zap.Int("bytes", len(cx.buf)),
 			)
 		}
-		return nil
-	}
+	}
+
+	if len(cx.buf) >= MaxMatchingBytes {
+		return ErrMatchingBufferFull
+	}
 	return nil
 }
 
+// isTemporaryError checks whether the error is temporary.
+func isTemporaryError(err error) bool {
+	netErr, ok := err.(net.Error)
+	return ok && netErr.Temporary()
+}
```
Key changes:
1. A loop was added so the connection keeps being read in chunks until the buffer reaches MaxMatchingBytes (see the standalone sketch below).
2. A read deadline was added to prevent reads from blocking indefinitely.
3. A new helper function isTemporaryError() was added to handle temporary network errors.
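To make the idea easier to follow outside the caddy-l4 codebase, here is a minimal, self-contained sketch of the same read loop. This is not the actual plugin code: `prefetchAll`, `chunkSize`, and `maxPrefetch` are made-up stand-ins for `prefetch`, `prefetchChunkSize`, and `MaxMatchingBytes`, and the 100 ms deadline is just the value the patch above picked.

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"net"
	"time"
)

const (
	chunkSize   = 1024 // stand-in for prefetchChunkSize
	maxPrefetch = 8192 // stand-in for MaxMatchingBytes
)

// prefetchAll keeps reading in chunks until the peer goes quiet for
// 100 ms, the connection is closed, or maxPrefetch bytes are buffered.
func prefetchAll(conn net.Conn) ([]byte, error) {
	buf := make([]byte, 0, maxPrefetch)
	tmp := make([]byte, chunkSize)
	defer conn.SetReadDeadline(time.Time{}) // clear the deadline when done

	for len(buf) < maxPrefetch {
		// The short deadline turns "no more data yet" into a timeout
		// instead of an indefinite block.
		_ = conn.SetReadDeadline(time.Now().Add(100 * time.Millisecond))
		n, err := conn.Read(tmp)
		buf = append(buf, tmp[:n]...)
		if err != nil {
			var nerr net.Error
			if (errors.As(err, &nerr) && nerr.Timeout()) || errors.Is(err, io.EOF) {
				break // the client paused or finished sending
			}
			return buf, err
		}
	}
	return buf, nil
}

func main() {
	client, server := net.Pipe()
	go func() {
		// Simulate a ClientHello split across two TCP segments,
		// matching the 1420- and 1839-byte prefetches described
		// in the EDIT below.
		client.Write(bytes.Repeat([]byte{0x16}, 1420))
		time.Sleep(20 * time.Millisecond)
		client.Write(bytes.Repeat([]byte{0x16}, 419))
		client.Close()
	}()

	buf, err := prefetchAll(server)
	fmt.Println(len(buf), err) // 1839 <nil>
}
```

The trade-off is that a connection can now pay up to 100 ms of extra latency whenever the client pauses mid-handshake, which is presumably why the upstream code preferred a single non-blocking read per matching pass.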
After applying that patch, it suddenly started working for me. I don't know why; I just want to open this issue here so that it can be tracked better.
It would be really nice if something here could be patched, since I'm maintaining a port of this for OPNsense (currently FreeBSD 14), and the TLS matcher just stopped working there for Chromium-based browsers (Chrome, Edge, ...). (I know of one more user report: https://forum.opnsense.org/index.php?topic=42955.0)
EDIT:
Here are the logs of what happens after this patch:
It prefetches bytes twice before the matcher succeeds, first with 1420 bytes and then with 1839.
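That pattern is consistent with the ClientHello no longer fitting into a single TCP segment (Chromium's ClientHello has grown large, e.g. with the post-quantum key share enabled), so the first prefetch can only ever see part of the handshake record. As a debugging aid, the expected total length of the record can be read from the first five prefetched bytes; `firstRecordLen` below is a hypothetical helper, not part of caddy-l4:

```go
package main

import "fmt"

// firstRecordLen reports the total size of the first TLS record in buf
// (5-byte header plus payload), which tells you whether the prefetched
// data can possibly cover the whole ClientHello.
func firstRecordLen(buf []byte) (int, bool) {
	if len(buf) < 5 || buf[0] != 0x16 { // 0x16 marks a TLS handshake record
		return 0, false
	}
	payload := int(buf[3])<<8 | int(buf[4]) // big-endian length in bytes 3-4
	return 5 + payload, true
}

func main() {
	// A fabricated header declaring a 1834-byte payload, i.e. an
	// 1839-byte record like the second prefetch above.
	hello := []byte{0x16, 0x03, 0x01, 0x07, 0x2a}
	if total, ok := firstRecordLen(hello); ok {
		fmt.Println(total) // 1839
	}
}
```

If the record really is 1839 bytes long, a first prefetch of only 1420 bytes can never satisfy the TLS matcher, which would explain the "consumed all prefetched bytes" errors until a second prefetch completes the record.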