fluentbit_input_records_total metric always 0 #8182

Closed
modesvops opened this issue Nov 15, 2023 · 9 comments
Labels
Stale, status: waiting-for-triage, waiting-for-user (waiting for more information, tests or requested changes)

Comments

@modesvops

Bug Report

Describe the bug
After upgrading to 2.2.0, fluent-bit always reports the fluentbit_input_records_total metric as zero.

To Reproduce

  • Launch fluent-bit 2.2.0
  • Use tail as input

Expected behavior
The metric should report the number of records ingested by each input instead of staying at zero.
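One way to observe the symptom (a suggested check, not part of the original report; it assumes the built-in HTTP server is enabled on the default port 2020, as in the configuration posted further down, and that the tail instances keep their default tail.0 / tail.1 names):

  # query the built-in monitoring endpoint in Prometheus format
  curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus | grep fluentbit_input_records_total

  # on an affected 2.2.0 setup the per-input counters reportedly stay at zero, e.g.
  fluentbit_input_records_total{name="tail.0"} 0
  fluentbit_input_records_total{name="tail.1"} 0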

Screenshots
(screenshots attached in the original issue)

Your Environment

  • Version used: 2.2.0
  • Configuration: 2 tail inputs on kubernetes
  • Environment name and version (e.g. Kubernetes? What version?): kubernetes 1.27
  • Server type and version:
  • Operating System and version:
  • Filters and plugins: Kubernetes filter, no plugins
@MrPibody7
Collaborator

Hi @modesvops
Would you share your fluent-bit.conf file or the command line parameters used?

@MrPibody7
Collaborator

Hi @modesvops
I made a repro in vanilla Fluent Bit (non-K8s) with version 2.2.0, and it works as expected: the fluentbit_input_records_total metric shows correct results. The next step is to try to reproduce it in K8s.

I have the following questions:

  • What method did you use to deploy fluent-bit in Kubernetes?
  • What container runtime are you using?
  • Any information you could consider relevant.

@MrPibody7 added the waiting-for-user (waiting for more information, tests or requested changes) label Nov 15, 2023
@modesvops
Author

Hi @MrPibody7
Our configuration looks like this. Logs from some namespaces go to AWS OpenSearch, and logs from the rest go to Grafana Loki.

[SERVICE]
  Daemon Off
  Flush 1
  Log_Level info
  Parsers_File parsers.conf
  Parsers_File custom_parsers.conf
  HTTP_Server On
  HTTP_Listen 0.0.0.0
  HTTP_Port 2020
  Health_Check On

[INPUT]
  Name tail
  Path /var/log/containers/*_namespace1_*.log, /var/log/containers/*_namespace2_*.log, /var/log/containers/*_namespace3_*.log
  multiline.parser cri
  Tag opensearch.*
  Mem_Buf_Limit 15MB
  Skip_Long_Lines off

[INPUT]
  Name tail
  Path /var/log/containers/*_namespace4_*.log, /var/log/containers/*_namespace5_*.log, /var/log/containers/*_namespace6_*.log, 
  multiline.parser cri
  Tag loki.*
  Mem_Buf_Limit 15MB
  Skip_Long_Lines off

[FILTER]
  Name kubernetes
  Match opensearch.*
  Kube_Tag_Prefix opensearch.var.log.containers.
  Merge_Log On
  Keep_Log Off
  Annotations Off
  K8S-Logging.Parser On
  K8S-Logging.Exclude On

[FILTER]
  Name kubernetes
  Match loki.*
  Kube_Tag_Prefix loki.var.log.containers.
  Merge_Log On
  Keep_Log Off
  Annotations Off
  K8S-Logging.Parser On
  K8S-Logging.Exclude On

[OUTPUT]
  Name  opensearch
  Match opensearch.*
  Host  opensearch
  Port  443
  Index $kubernetes['labels']['app.kubernetes.io/name']
  Suppress_Type_Name  On
  AWS_Auth On
  AWS_Region region
  tls     On
  Trace_Error  On

[OUTPUT]
  Name  loki
  Match loki.*
  Host  loki
  Port  443
  Labels app=$kubernetes['labels']['app.kubernetes.io/name'], container=$kubernetes['container_name'], node=$kubernetes['host'], instance=$kubernetes['pod_name'], level=$level, namespace=$kubernetes['namespace_name']
  auto_kubernetes_labels  off
  remove_keys kubernetes, stream
  tls On
  • What method did you use to deploy fluent-bit in Kubernetes?
    Official helm chart version 0.40.0
  • What container runtime are you using?
    containerd on AWS EKS
  • Any information you could consider relevant.
    It was working fine before the upgrade to the latest Helm chart and Fluent Bit version. We were using:
    Chart: 0.28.0
    fluent-bit: 2.1.2
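
To help narrow this down, a minimal sketch of a stripped-down configuration (not from this thread; the path and tag are placeholders) that keeps a single tail input and a stdout output while dropping the Kubernetes filter and the OpenSearch/Loki outputs. If fluentbit_input_records_total still stays at zero with this, the regression is in the input record accounting rather than in the filters or outputs:

[SERVICE]
  Flush        1
  Log_Level    info
  HTTP_Server  On
  HTTP_Listen  0.0.0.0
  HTTP_Port    2020

[INPUT]
  Name             tail
  Path             /var/log/containers/*.log
  multiline.parser cri
  Tag              test.*

[OUTPUT]
  Name  stdout
  Match *

The same /api/v1/metrics/prometheus check described above applies to this configuration.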

@nokute78
Collaborator

I think #8148 is a similar issue: total_records of the input chunk is not updated.

I sent a patch, #8201, for #8148.

@michael-stevens

We've also seen this bug.

@nokute78
Collaborator

nokute78 commented Dec 3, 2023

I think this issue is fixed by 1a8dd39 of #8223

@chrono2002

Same issue here, version 2.2.0.


github-actions bot commented Jun 8, 2024

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

github-actions bot added the Stale label Jun 8, 2024

This issue was closed because it has been stalled for 5 days with no activity.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 13, 2024