
Metricbeat stops reporting data if container is killed #3610

Closed
ruflin opened this issue Feb 17, 2017 · 0 comments · Fixed by #3612
ruflin commented Feb 17, 2017

Steps to reproduce:

  • Config
- module: docker
  metricsets: ["cpu"]
  hosts: ["unix:///var/run/docker.sock"]
  period: 1s
  • Start a container like docker run -it debian bash and wait until the container is picked up by metricbeat.
  • Use docker kill container-id to kill the container.

Metricbeat will stop reporting data and shutdown will be blocked.

This was first reported in #3580. I opened a separate issue because the other one mentions two distinct problems.

@ruflin ruflin added the bug and Metricbeat labels Feb 17, 2017
ruflin added a commit to ruflin/beats that referenced this issue Feb 21, 2017
No timeout was passed to the docker client, and it seems that when a container is killed the connection can hang. To interrupt such a connection, the timeout from the metricset is now passed to the client, so if the info for a container cannot be fetched the request will time out (see the sketch below).

This change requires that the docker module is not run with a timeout below 3 seconds, which indirectly means a minimum period of 3s. The reason is that the HTTP request already waits ~2s for the response, so with a timeout of 1s all requests would time out.

Further changes:

* Containers without names will be ignored, as these are containers for which the data could not be fetched.
* The timeout was set to 1s by default instead of the period, as documented. This was changed.
* Add a documentation note about the minimal period.

Closes elastic#3610

The issue fixed by this PR was introduced in 5.2.1 by the memory leak fix. Before, goroutines just piled up, but now they cause Metricbeat to hang.

This also needs to be backported to 5.2.2.
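
For illustration, here is a minimal, hypothetical Go sketch of the idea described in the commit message above: deriving a client timeout from the metricset configuration so that a request against a killed container fails fast instead of hanging. The helper name `newDockerHTTPClient`, the socket handling, and the 3s value are assumptions for the example, not the actual Metricbeat code.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

// newDockerHTTPClient returns an HTTP client that talks to the Docker daemon
// over its unix socket and aborts any request after `timeout`.
// (Hypothetical helper for illustration only.)
func newDockerHTTPClient(socket string, timeout time.Duration) *http.Client {
	return &http.Client{
		// Without a timeout, a hanging connection blocks the metricset forever.
		Timeout: timeout,
		Transport: &http.Transport{
			// Dial the unix socket regardless of the URL's host part.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}
}

func main() {
	// Assume the metricset timeout/period is 3s and reuse it for the client.
	client := newDockerHTTPClient("/var/run/docker.sock", 3*time.Second)

	// Example Docker Engine API call; the container ID is a placeholder.
	// For a killed container this now returns a timeout error instead of hanging.
	resp, err := client.Get("http://unix/containers/some-container-id/stats?stream=false")
	if err != nil {
		fmt.Println("request failed (e.g. timed out):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Under these assumptions, a stats request against a dead container returns a timeout error after 3s instead of blocking the metricset's fetch loop (and, with it, shutdown).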
@tsg tsg closed this as completed in #3612 Feb 21, 2017
tsg pushed a commit that referenced this issue Feb 21, 2017
tsg pushed a commit to tsg/beats that referenced this issue Feb 21, 2017
ruflin added a commit to ruflin/beats that referenced this issue Feb 21, 2017
ruflin pushed a commit that referenced this issue Feb 21, 2017
ruflin added a commit to ruflin/beats that referenced this issue Feb 21, 2017
tsg pushed a commit that referenced this issue Feb 21, 2017
leweafan pushed a commit to leweafan/beats that referenced this issue Apr 28, 2023