Journalbeat sends duplicate log entries on restart #11505
I've played with this a little more over the weekend, thinking it might be a config issue and that selecting the correct seek option would help. I also confirmed that the registry file is being updated. I wasn't able to find any documentation on this exactly (and I'm not a Go developer), but there is a file that looks like it records the last position journalbeat read up to. Also, the systemd unit file isn't set up to accept the same command-line flags.
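For reference, the seek behaviour and the position file are both controlled from journalbeat.yml. Below is a minimal sketch of the relevant 6.x settings; the option names (seek, cursor_seek_fallback, registry_file) are taken from the 6.x reference config as I recall it, so verify them against the reference file shipped with your version:

journalbeat.inputs:
  - paths: []                   # empty list = read the default local journal
    seek: cursor                # resume from the saved cursor on restart
    cursor_seek_fallback: head  # where to start if no cursor has been saved yet
journalbeat.registry_file: registry  # cursor state file, relative to path.data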
I'm finding that messages are duplicated at random irrespective of the restart, i.e. even after it has processed all messages from seek or cursor, it duplicates some but not all messages. Please see the discussion at https://discuss.elastic.co/t/duplicate-messages-created-by-journalbeat-6-7-1-1/175930; it shows others are seeing the same thing. I've not opened another bug on this since this MAY or MAY NOT be the same bug/issue. I examined one server running Fedora and journalbeat 6.7.1 and found the systemd journal file had 534 messages, but my Graylog/Elasticsearch cluster had 667 messages, i.e. 133 duplicated messages.
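One way to quantify the duplication is to compare the number of entries in the local journal against the number of documents that reached the cluster. A rough sketch, assuming output goes straight to Elasticsearch under a default journalbeat-* index and that only this host writes to it (both assumptions; a Graylog setup would need to count through Graylog's own search instead):

# journalctl -b -o cat | wc -l
# curl -s 'http://localhost:9200/journalbeat-*/_count?pretty'

If the second count keeps growing past the first while the journal itself is idle, the extra documents are duplicates rather than new log lines.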
Looks like there is a pull request that fixes this: #12479
I have cleaned up the code around reading entries and fixed an issue when iterating through journals. Closes #11505
I have cleaned up the code around reading entries and fixed an issue when iterating through journals. Closes elastic#11505 (cherry picked from commit 3c9734c)
I have cleaned up the code around reading entries and fixed an issue when iterating through journals. Closes elastic#11505
Original description: I have cleaned up the code around reading entries and fixed an issue when iterating through journals. Closes elastic#11505 (cherry picked from commit 3c9734c) Closes elastic#13123
Version: 6.7.0
Operating System: CentOS 7.6.1810
It seems that when journalbeat is restarted, instead of picking up where it left off, it goes back much further (by tens of thousands of events), and therefore you get a bit of a spike in traffic as well as MANY duplicated logs. If using Redis like we are, this can also overwhelm your Redis box.

Reproduce: restart journalbeat.

If started interactively it seems to work OK:
# /usr/share/journalbeat/bin/journalbeat -c /etc/journalbeat/journalbeat.yml -path.home /usr/share/journalbeat -path.config /etc/journalbeat -path.data /var/lib/journalbeat -path.logs /var/log/journalbeat -d beat,input
The problem only seems to appear when daemonizing (either via daemonize or via systemd):

# daemonize /usr/share/journalbeat/bin/journalbeat -c /etc/journalbeat/journalbeat.yml -path.home /usr/share/journalbeat -path.config /etc/journalbeat -path.data /var/lib/journalbeat -path.logs /var/log/journalbeat -d beat,input
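To see whether the saved cursor is actually honoured across a restart, it can help to snapshot the registry before and after. A rough sketch, assuming the registry file sits directly under the path.data directory used above and keeps its default name registry (both assumptions; check your path.data for the actual file):

# cat /var/lib/journalbeat/registry
# systemctl restart journalbeat
# sleep 5; cat /var/lib/journalbeat/registry

If the cursor recorded before the restart is not what the daemonized process resumes from, that points at the same seek/cursor handling this issue describes.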