Reply from REST can come in pieces #1035
Conversation
How could you not name this branch "rest-in-pieces"... 😢
Force-pushed from b517bd2 to bc8767f
Yeah, it takes somewhat longer to clean up all 31 containers 🤷‍♂️. I will try to remove them outside of this cleanup.
Thanks for spotting! Ugh, why did we use `.stream()` here in the first place...
src/rest.js (outdated)
    try {
        const message = JSON.parse(buffer);
        buffer = "";
        callback(message);
    } catch {
        // Ignore, we already stored it in buffer
    }
This is too brittle and probabilistic. Why not just let the bridge do the buffer assembly? I suggest using the original code, but as a `.then()` handler instead of `.stream()`. We are not expecting huge replies here, and even if they were, the stream handler can't process individual chunks anyway -- so building up a buffer happens in any case. So let's just let cockpit.js do the job.
See https://cockpit-project.org/guide/latest/cockpit-http.html#cockpit-http-then
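A minimal sketch of that suggestion, assuming the cockpit.js HTTP client documented at the link above (the socket path, endpoint, and `callback` are illustrative, not the actual rest.js code):

```js
// Sketch only: let cockpit.js assemble the complete reply body,
// then parse it once when the request resolves.
const http = cockpit.http("/run/podman/podman.sock"); // assumed socket path
http.get("/v1.12/libpod/containers/json")            // illustrative endpoint
        .then(reply => callback(JSON.parse(reply)))  // "callback" as in the code above
        .catch(error => console.warn("REST request failed:", error));
```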
We handle `.then()` as well (each user of `.monitor()` declares it on their own), but it means that the monitor has closed. We use monitor for streaming events as they happen: we open the connection and sit on it for the whole duration of the session, and we just get messages about what has happened.
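Schematically, the monitor pattern described here looks something like this (a sketch under assumptions, not the actual rest.js implementation; the socket path and function shape are guesses):

```js
// Sketch: a long-lived request whose .stream() handler receives raw chunks
// for the whole session; each chunk may be only part of a JSON message.
function monitor(path, callback) {
    const http = cockpit.http("/run/podman/podman.sock"); // assumed socket path
    return http.request({ method: "GET", path, body: "" })
            .stream(chunk => callback(chunk));
}
```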
Ah, that's the monitor. So are we guaranteed to never get a chunk which includes the end of one JSON message and the start of another? Say we'd get a first chunk like `{ "a": "one",` and a second one `"b": "two" }\n{ "c": "three" }`; the first + second would either still be unparseable, or it would cut off and ignore the second object (as this resets `buffer`). If this is really getting a continuous stream of JSON blobs, then there must be some defined message separator? That's exactly what jsonl is, after all. Then it could append new chunks to the buffer, check if the buffer contains `\n`, and if so, split it and process all parts but the last one (and keep that in the buffer).
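A sketch of that splitting approach, assuming chunks arrive as strings and messages are newline-separated jsonl (`callback` is the message handler from the surrounding code):

```js
let buffer = "";

// Called for every chunk from the stream; processes all complete
// newline-terminated messages and keeps the trailing partial one.
function handleChunk(chunk) {
    buffer += chunk;
    const parts = buffer.split("\n");
    buffer = parts.pop(); // last element is an incomplete message (or "")
    for (const part of parts) {
        if (part.trim() !== "")
            callback(JSON.parse(part));
    }
}
```

This never attempts to parse a partial message, and a chunk that carries the end of one message plus the start of the next is handled correctly.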
I believe it is guaranteed. We have never seen that happen, and the current behavior would be affected the same way anyway.
The API does not try to send more messages in one reply, it just sometimes needs to split them.
You can't really guarantee this with a TCP socket; that would require a SEQPACKET one. AFAICS the only reliable way (and best practice everywhere) is to use jsonl, which was literally invented for streaming JSON messages. I just ran `curl --unix-socket /run/user/1000/podman/podman.sock http://localhost/v1.12/libpod/events` and it looks correct -- this is proper jsonl.
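For reference, jsonl framing simply means one complete JSON object per line, terminated by `\n`; schematically it looks like this (field names and values here are illustrative, not actual podman output):

```
{"Type":"container","Action":"start","Actor":{"ID":"..."}}
{"Type":"container","Action":"died","Actor":{"ID":"..."}}
```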
OK as an incremental improvement, but it's still a hack IMHO. Thanks!
Maximum length of one message is 4096 characters. When the reply is longer than that, it will come in multiple messages. In such a case we need to concatenate the pieces before trying to parse the message. This happens, for example, when there are multiple containers and the `Stats` replies are split into multiple messages. Fixes cockpit-project#1025
Fair enough; although I believe it should not happen, let's do this properly. Hiding the exception was ugly indeed, as we would never learn that something is broken.
Ooh, this looks really nice now - very elegant solution with the `.pop()`! 🌟
Hopefully unrelated flakes on rhel-9-1, retrying.