otlpjsonfilereceiver: support compressed files from fileexporter #32565

Closed
jack78901 opened this issue Apr 19, 2024 · 3 comments
jack78901 commented Apr 19, 2024

Component(s)

receiver/otlpjsonfile

Is your feature request related to a problem? Please describe.

When I attempt to read in a compressed file that was written by the File Exporter, nothing happens. I get:

2024-04-19T18:55:11.260Z	info	fileconsumer/file.go:228	Started watching file	{"kind": "receiver", "name": "otlpjsonfile", "data_type": "metrics", "component": "fileconsumer", "path": "/tmp/log/metrics-2024-04-19T06-33-54.214.zstd"}

No data is ever read in.

I can confirm that data is in the file: when I change the exporter to write uncompressed output, data is written as expected and the otlpjsonfile receiver reads it in without issue.
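One quick way to confirm that a file on disk really is zstd-compressed (rather than plain OTLP JSON) is to check its magic bytes: every zstd frame starts with the little-endian magic number `0xFD2FB528`, i.e. the bytes `28 B5 2F FD`. A minimal sketch (the path below is illustrative, taken from the log line above):

```python
# Check whether a file starts with the zstd frame magic number
# (0xFD2FB528, stored little-endian on disk as bytes 28 B5 2F FD).
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"

def is_zstd_file(path: str) -> bool:
    """Return True if the file begins with the zstd magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == ZSTD_MAGIC

# Example (path is illustrative):
# is_zstd_file("/tmp/log/metrics-2024-04-19T06-33-54.214.zstd")
```

If this returns True for the file the receiver is watching, the receiver is tailing compressed binary data as if it were newline-delimited JSON, which would explain why no records are ever emitted.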

File Reading collector settings:

```yaml
receivers:
  otlpjsonfile:
    include:
      - "/tmp/log/*.zstd"

processors:
  batch:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1000
    spike_limit_mib: 200

exporters:
  prometheusremotewrite:
    endpoint: "http://prometheus:9090/api/v1/write"
    external_labels:
      instance: nexis_demo_ft
      node: nexis_demo_ft
      job: nexis_demo_ft
  debug:

extensions:
  health_check:
  pprof:
  zpages:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    metrics:
      receivers: [otlpjsonfile]
      processors: [batch, memory_limiter]
      exporters: [prometheusremotewrite, debug]
```

File Write Collector Config:

```yaml
receivers:
  hostmetrics:
    collection_interval: 1s
    scrapers:
      disk:
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true
  hostmetrics/system:
    collection_interval: 30s
    scrapers:
      cpu:
        metrics:
          system.cpu.logical.count:
            enabled: true
          system.cpu.physical.count:
            enabled: true
          system.cpu.utilization:
            enabled: true
      load:
      memory:
        metrics:
          system.memory.limit:
            enabled: true
          system.memory.utilization:
            enabled: true
          system.linux.memory.available:
            enabled: true
      network:
      process:
        metrics:
          process.cpu.utilization:
            enabled: true
          process.memory.utilization:
            enabled: true
          process.disk.operations:
            enabled: true
      processes:
      paging:
        metrics:
          system.paging.utilization:
            enabled: true

processors:
  batch:
  memory_limiter:
    check_interval: 1s
    limit_mib: 1000
    spike_limit_mib: 200
  resource:
    attributes:
      - key: host.id
        value: localhost
        action: insert

exporters:
  file/rotation_with_custom_settings:
    path: /scratch/tmplog/metrics.zstd
    rotation:
      max_megabytes: 250
      max_days: 30
      max_backups: 3
      localtime: false
    format: proto
    compression: zstd
  debug:

extensions:
  health_check:
  pprof:
  zpages:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    metrics:
      receivers: [hostmetrics, hostmetrics/system]
      processors: [batch, resource, memory_limiter]
      exporters: [file/rotation_with_custom_settings, debug]
```

Describe the solution you'd like

The otlpjsonfile receiver should be able to read zstd-compressed files, decompressing them when the file is compressed.
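Until the receiver supports decompression natively, one possible workaround (an untested sketch, which assumes the exporter is set to `format: json`, since the otlpjsonfile receiver expects OTLP JSON) is to decompress the rotated files out of band, e.g. with `zstd -d /tmp/log/*.zstd`, and point the receiver at the decompressed output instead:

```yaml
# Hypothetical workaround: decompress the exported files out of band
# (e.g. `zstd -d`), then watch the decompressed JSON files instead of
# the .zstd originals.
receivers:
  otlpjsonfile:
    include:
      - "/tmp/log/*.json"
```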

Describe alternatives you've considered

No response

Additional context

No response

@jack78901 added the enhancement (New feature or request) and needs triage (New item requiring triage) labels Apr 19, 2024
Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.


This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions bot closed this as not planned Aug 18, 2024