
Reply from REST can come in pieces #1035

Merged: 1 commit merged into cockpit-project:main from the chunkZ branch on Jul 12, 2022

Conversation

@marusak (Member) commented Jul 11, 2022

The maximum length of one message is 4096 characters. When the reply is
longer than that, it arrives in multiple messages, and we need to
concatenate them before trying to parse.

This happens, for example, when there are multiple containers: the Stats
reply is then often split into multiple messages.

Fixes #1025

@martinpitt (Member):

How could you not name this branch "rest-in-pieces"... 😢

@marusak force-pushed the chunkZ branch 3 times, most recently from b517bd2 to bc8767f on July 11, 2022 at 10:58
@marusak (Member, Author) commented Jul 11, 2022

RuntimeError: Timed out on 'set -eu
            systemctl stop podman.service podman.socket
            systemctl reset-failed podman.service podman.socket
            podman system reset --force
            killall -9 podman || true
            findmnt --list -otarget | grep /var/lib/containers/. | xargs -r umount
            '

Yeah, it takes somewhat longer to clean up all 31 containers 🤷‍♂️ I will try to remove them outside of this cleanup.

@martinpitt (Member) left a comment

Thanks for spotting! Ugh, why did we use .stream() here in the first place..

src/rest.js (resolved)
src/rest.js (outdated)
Comment on lines 32 to 38:
try {
    const message = JSON.parse(buffer);
    buffer = "";
    callback(message);
} catch {
    // Ignore, we already stored it in the buffer
}
Member:

This is too brittle and probabilistic. Why not just let the bridge do the buffer assembly? I suggest using the original code, but as a .then() handler, instead of .stream(). We are not expecting huge replies here, and even if they were, the stream handler can't process individual chunks anyway -- so building up a buffer happens in any case. So let's just let cockpit.js do the job.

See https://cockpit-project.org/guide/latest/cockpit-http.html#cockpit-http-then
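For illustration, a minimal sketch of the suggested .then() approach (the socket path, API version and endpoint below are assumptions, not the actual cockpit-podman code). cockpit.js assembles the full response body before handing it to the .then() handler, so no manual chunk handling is needed for a one-shot request:

    // "cockpit" is the cockpit.js API object available to Cockpit pages.
    // Socket path and endpoint are illustrative.
    const http = cockpit.http("/run/podman/podman.sock", { superuser: "require" });

    http.get("/v1.12/libpod/containers/json")
            .then(reply => {
                // reply is the complete body, so JSON.parse never sees a partial chunk
                const containers = JSON.parse(reply);
                console.log("got", containers.length, "containers");
            })
            .catch(error => console.warn("request failed:", error));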

Member Author:

We handle .then() as well (each user of .monitor() declares it on their own), but it means that the monitor has closed. We use the monitor for streaming events as they happen: we open the connection and sit on it for the whole duration of the session, and just get messages about what has happened.
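A rough sketch of that monitor pattern (illustrative names and paths, not the actual cockpit-podman .monitor() implementation): the connection is opened once and stays up for the whole session, .stream() receives data as it arrives, and .then()/.catch() only fire when the connection ends:

    // Hypothetical long-lived event monitor; paths are illustrative.
    function monitorEvents(onChunk) {
        const http = cockpit.http("/run/podman/podman.sock", { superuser: "require" });
        const request = http.get("/v1.12/libpod/events");

        request.stream(chunk => {
            // Called repeatedly while the connection is open; a chunk may be
            // a whole JSON message or only part of one.
            onChunk(chunk);
        });

        // Resolves or rejects only when the event stream closes.
        return request.then(() => console.log("event stream closed"))
                .catch(error => console.warn("event stream failed:", error));
    }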

Member:

Ah, that's the monitor. So are we guaranteed to never get a chunk which includes the end of one JSON message and the start of another? Like this, if we'd get a first chunk like { a: "one", , and a second one "b": "two }\n{ "c": "three" }, the first + second would either still be unparseable, or it would cut off and ignore the second object (as this resets buffer). If this is really getting a continuous stream of JSON blobs, then there must be some defined message separator? That's exactly what jsonl is, after all. Then it could append new chunks to the buffer, check if the buffer contains \n, and if so, split it and process all parts but the last one (and keep that in the buffer).
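A minimal sketch of that jsonl-style re-assembly, assuming every complete message from the API ends with "\n" (a hypothetical helper, not the code in this PR): append each chunk to a buffer, split on newlines, parse every complete part, and keep only the trailing, possibly incomplete, part. This is essentially the split-and-.pop() pattern praised later in the review:

    // Hypothetical chunk re-assembly for newline-delimited JSON (jsonl).
    let buffer = "";

    function handleChunk(chunk, callback) {
        buffer += chunk;
        const parts = buffer.split("\n");
        // The last element is either "" (the chunk ended on "\n") or an
        // incomplete message; keep it for the next chunk.
        buffer = parts.pop();
        parts.filter(part => part !== "")
                .forEach(part => callback(JSON.parse(part)));
    }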

Member Author:

I believe it is guaranteed. We have never seen that, and the current behavior would be affected in the same way.
The API does not try to send more messages in one reply; it just sometimes needs to split them.

Member:

You can't really guarantee this with a TCP socket; that would require a SEQPACKET one. AFAICS the only reliable way (and best practice everywhere) is to use jsonl, which was literally invented for streaming JSON messages. I just ran curl --unix-socket /run/user/1000/podman/podman.sock http://localhost/v1.12/libpod/events and it looks correct -- this is proper jsonl.

test/check-application (resolved)
test/check-application (resolved)
@martinpitt previously approved these changes Jul 12, 2022
@martinpitt (Member) left a comment

OK as incremental improvement, but it's still a hack IMHO. Thanks!

@marusak (Member, Author) commented Jul 12, 2022

Fair enough. Although I believe it should not happen, let's do this properly; hiding the exception was indeed ugly, as we would never learn that something is broken.

@martinpitt (Member) left a comment

Ooh, this looks really nice now - very elegant solution with the .pop()! 🌟

@martinpitt (Member):

hopefully unrelated flakes on rhel-9-1, retrying

@martinpitt merged commit 18fb5a2 into cockpit-project:main on Jul 12, 2022
This pull request closes: streaming podman metrics breaks with ~14 containers active (#1025)