Create CODEOWNERS #1

Merged
merged 1 commit into master from bogdandrutu-patch-2 on Jul 11, 2019

Conversation

bogdandrutu
Member

No description provided.
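
Although no description was provided, the single commit adds a GitHub `CODEOWNERS` file, which maps file patterns to the users or teams whose review is requested for matching changes. A minimal sketch of the format (the actual contents added by this PR are not shown on this page; the team handle below is illustrative):

```
# Each line maps a pattern to one or more owners;
# the last matching pattern takes precedence.
*  @open-telemetry/collector-approvers
```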

@bogdandrutu bogdandrutu merged commit 1aa826a into master Jul 11, 2019
@bogdandrutu bogdandrutu deleted the bogdandrutu-patch-2 branch July 11, 2019 16:03
mx-psi referenced this pull request in DataDog/opentelemetry-collector-contrib Sep 21, 2020
bogdandrutu pushed a commit that referenced this pull request Sep 30, 2020
* Add DataDog exporter back from old fork
commit 99129fb96e29e9c1a92da00b7e3f8efcae8a31e8
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Thu Sep 3 18:10:28 2020 +0200

    Handle namespace at initialization time

commit babca25927926a60c0c416294af3aadf784d41b9
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Thu Sep 3 17:23:53 2020 +0200

    Initialize in a separate function
    This way the variables can be checked without worrying about the env

commit 24d0cb4cc566fa5313a8650c904a27bea68bf555
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Thu Sep 3 14:30:35 2020 +0200

    Check environment variables for unified service tagging

commit 6695f8297ab8b1fcae71b05acb027c4a0992e3a0
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Wed Sep 2 14:57:37 2020 +0200

    Add support for sending metrics through the API
    - Use datadog.Metric type for simplicity
    - Get host if unset

commit c366603
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Wed Sep 2 09:56:24 2020 +0200

    Disable Queue and Retry settings (#72)

    These are handled by the statsd package.
    OpenTelemetry docs are confusing and the default configuration (disabled)
    is different from the one returned by "GetDefault..." functions

commit a660b56
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Tue Sep 1 15:26:14 2020 +0200

    Add support for summary and distribution metric types (#65)

    * Add support for summary metric type

    * Add support for distribution metrics

    * Refactor metrics construction
    - Drop name in Metrics (now they act as Metric values)
    - Refactor constructor so that errors happen at compile-time

    * Report Summary total sum and count values
    Snapshot values are not filled in by OpenTelemetry

    * Report p00 and p100 as `.min` and `.max`
    This is more similar to what we do for our own non-additive type

    * Keep hostname if it has not been overridden

commit c95adc4
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Thu Aug 27 13:00:02 2020 +0200

    Update dependencies and `make gofmt`
    The collector was updated to 0.9.0 upstream

commit 20afb0e
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Wed Aug 26 18:25:49 2020 +0200

    Refactor configuration (#45)

    * Refactor configuration
    * Implement telemetry and tags configuration handling
    * Update example configuration and README file
    Co-authored-by: Kylian Serrania <kylian.serrania@datadoghq.com>

commit fdc98b5
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Fri Aug 21 11:09:08 2020 +0200

    Initial DogStatsD implementation (#15)

    Initial metrics exporter through DogStatsD with support for all metric types but summary and distribution

commit e848a60
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Fri Aug 21 10:42:45 2020 +0200

    Bump collector version

commit 58be9a4
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Thu Aug 6 10:04:32 2020 +0200

    Address linter

commit 695430c
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Tue Aug 4 13:28:01 2020 +0200

    Fix field name error

    MetricsEndpoint was renamed to MetricsURL

commit 168b319
Author: Pablo Baeyens <pablo.baeyens@datadoghq.com>
Date:   Mon Aug 3 11:05:01 2020 +0200

    Create initial outline for Datadog exporter (#1)

    * Add support for basic configuration options
    * Documents configuration options

* go mod tidy

* Address feedback from upstream PR we did not merge (#1)

* Backport changes from upstream PR

Remove `err` from MapMetrics

* Remove usage of pdatautil

* Fix tests

* Use TCPAddr

* Review which functions should be private

* Remove DogStatsD mode (#2)

* Remove DogStatsD mode

* go mod tidy

* Remove mentions of DogStatsD

* Improve test coverage (#3)

* Improve test coverage

Added unit tests for
- API key censoring
- Hostname
- Metrics exporter

Renamed test and implementation files for consistency

* Add one additional test

* Remove client validation (#6)

The zorkian API does not validate the API key unless you also have
an application key, even though the endpoint works without it.

I am removing this validation until this gets fixed in the zorkian library.

* Keep only configuration and factory methods

Following the contribution guidelines, we need to make a first PR with this.

* Use latest version of collector

* Remove `report_percentiles` option
It is not being used for now, until the OTLP metrics format
stabilizes and we have a Summary-type metric again.

* Correct configuration

The API key is now a required setting

* Remove test not relevant for this PR

* Remove unnecessary imports after removing test

* Address review comment

* Apply suggestions from code review

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>

* Separate documentation into two examples
One example with the minimal configuration for sending to `datadoghq.com`,
and a second one for sending to `datadoghq.eu`.

Co-authored-by: Tigran Najaryan <4194920+tigrannajaryan@users.noreply.github.com>
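
The last two bullets describe the shape of the documented configuration: the API key is required, and there are separate examples for `datadoghq.com` and `datadoghq.eu`. A hedged sketch of those two minimal configurations (key names follow the Datadog exporter's config; the environment variable is a placeholder):

```yaml
exporters:
  # Minimal configuration, sending to datadoghq.com (the default site)
  datadog:
    api:
      key: ${DD_API_KEY}  # required setting
  # Same exporter pointed at the EU site
  datadog/eu:
    api:
      key: ${DD_API_KEY}
      site: datadoghq.eu
```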
bogdandrutu pushed a commit that referenced this pull request Oct 23, 2020
* Restructure buildCWMetric logic (#1)

* Restructure code to remove duplicated logic

* Update format

* Improve function and variable names

* Extract logic for dimension creation and add test

* Implement minor fixes

* Remove changes to go.sum

* Implement tests for getCWMetrics

* Implement tests for buildCWMetric

* Format metric_translator_test.go

* Run with gofmt -s

* Disregard ordering of dimensions in test case

* Perform dimension equality checking as a helper function
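
The last two bullets deal with comparing CloudWatch dimension sets without depending on order. A minimal Go sketch of such a test helper, under the assumption that dimensions are modeled as string slices (`dimensionSetsEqual` is a hypothetical name, not the PR's actual code):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
	"strings"
)

// dimensionSetsEqual reports whether two collections of dimension sets
// contain the same sets, ignoring both the order of the sets and the
// order of the dimension names within each set.
func dimensionSetsEqual(a, b [][]string) bool {
	if len(a) != len(b) {
		return false
	}
	canon := func(sets [][]string) []string {
		keys := make([]string, 0, len(sets))
		for _, set := range sets {
			s := append([]string(nil), set...) // copy to avoid mutating the input
			sort.Strings(s)
			keys = append(keys, strings.Join(s, ","))
		}
		sort.Strings(keys)
		return keys
	}
	return reflect.DeepEqual(canon(a), canon(b))
}

func main() {
	a := [][]string{{"ServiceName", "Operation"}, {"ServiceName"}}
	b := [][]string{{"ServiceName"}, {"Operation", "ServiceName"}}
	fmt.Println(dimensionSetsEqual(a, b)) // true
}
```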
tigrannajaryan pushed a commit that referenced this pull request Sep 21, 2021
Adds a Cloud Foundry metrics receiver which reads metrics from the Cloud Foundry Reverse Log Proxy (RLP) Gateway. More details are available in the `README.md`. `make gotidy` seems to have made plenty of subtle changes to `go.sum` files; not sure if this is normal. This PR contains the overall structure, documentation, and the implementation of the config and factory, but does NOT contain the implementation of the receiver itself and does not enable the component, as that will come in separate PRs later.

**Link to tracking Issue:** #5320

**Testing:** Unit tests. Manual testing was performed against Tanzu Application Service (TAS) versions 2.7, 2.8 and 2.11. Considered adding an integration test with mocked HTTP servers acting as endpoints, where each server would return a constant response prerecorded from real TAS traffic, but it is not clear whether such mocks would make sense.

**Documentation:** `README.md` and `doc.go` for the new receiver module were added.
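
Although this PR only adds the config and factory, the README it introduces documents the receiver's settings. A hedged sketch of what a configuration looks like (keys follow the receiver's README; endpoints and credentials below are placeholders):

```yaml
receivers:
  cloudfoundry:
    rlp_gateway:
      endpoint: "https://log-stream.sys.example.internal"  # RLP Gateway URL (placeholder)
      shard_id: "opentelemetry"
    uaa:
      endpoint: "https://uaa.sys.example.internal"         # UAA auth endpoint (placeholder)
      username: "otel-user"
      password: "changeme"
```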
locmai referenced this pull request in locmai/opentelemetry-collector-contrib Jan 13, 2022
damemi referenced this pull request in damemi/opentelemetry-collector-contrib Mar 21, 2022
exporter/googlecloudloggingexporter: did it
djaglowski added a commit that referenced this pull request May 26, 2022
* add vcenter vSAN collection

* checkpoint on getting property collection working

* checkpoint before integration test

* dual receivers under root receiver pointer

* checkpoint before updated mdatagen

* use syslog receiver rather than tcplogreceiver

* getting more performance counter refinements

* remove unnecessary component addition

* try to fix go.mod resolution issues

* try to fix go.mod resolution issues pt 2

* addlicense

* fix go.mod by fixing require directive

* add readme for metrics

* update readme

* fix go.mod referring nonexistent version

* add performance manager tests

* more tests

* add more attributes to virtual machines and host systems

* add more attributes to virtual machines and host systems

* spike changelog entry

* fix go.mod in both places

* fix go.mod in configschema

* add // import github.com/open-telemetry/opentelemetry-collector-contrib/receiver/vmwarevcenterreceiver to imports

* add quotations

* add to receiver lifecycle

* remove extra go generate direction

* fix typo of utilizaiton in metric description

* small changes to interval id in performance queries to be more consistent

* PR feedback including omitting company name prefix

* PR feedback to not fail starting the component on potential network failures

* minor grammar correction in vcenter readme

* update expected metrics

* update host_effective attribute value

* remove PerformanceInterval customizability

* add to codeowners

* fix indentation on merge conflict

* fix changelog entry place so its in the new components section

* update to be on 0.49.0 of the collector

* add PR number to changelog

* regenerate with newer version of mdatagen

* move error log if unable to connect on start to receiver.Start() rather than scraper.Start()

* fix test cases from last commit

* minor update to config with tests

* fix metric description

* use utc for host vsan collection as well

* update comments of public facing methods

* return errors on getting clusters to the scraper errors

* PR feedback #1

* instantiate new client if client is nil

* update all descriptions to have punctuation

* three more descs

* move ensureReceiver up to once the config is validated

* some more PR feedback

* looking into race conditions

* run go tidy

* fix import order and remove unnecessary mutex

* remove mutex from struct

* refactor client to be responsible for knowing if the vsan endpoints are reachable

* fix integration test referencing old var

* change metrics.metrics => metrics.settings, update client pr feedback

* remove vSAN collection temporarily

* remove extra metric attributes for vSAN

* remove vsan specific variables

* clean up host PerfCounter disk latency metrics and fix some descriptions to better reflect interval

* add 20s interval to extended documentation as needed

* mdatagen fixes

* add integration test metric scrape

* fix import order

* go up to 0.49.1

* gotidy

* add replace directive for semconv

* gotidy fixes

* fix component not being on 0.50.0

* update to v0.50.1-0.20220429151328-041f39835df7

* use newer mdatagen

* remove any logging functionality change && update documentation

* fix integration test from flattening of config

* fix scraper start not erroring if connection cannot be established

* make scrapertest less flaky

* format test json

* Apply suggestions from code review

Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>

* adjust metric definition for vcenter.host.disk.throughput

* remove comment and move pm level 2 metrics to appropriate section

* try to be respective of datacenters

* fix only vCenter server functionality

* try building out a mock server for test coverage

* make goporto

* fix build issues

* use latest mdatagen

* add newlines to ends of xml recordings

* fix integration test

* moved around scrapererrors because now the receiver is datacenter dependent

* try and do an audit of performance metrics and requests/responses

* update testdata with correct units

* make tidy

* make tidy

* update collector version

* fix local testing code including modules

* remove deprecated use of componenterror

* pr feedback; add method of collection recording, return poweredOn/poweredOff VMs

* remove content.json

* fix description change in scraper_test.go

* update collector version

* bump replaced module; rebuild load tests

* fix alibaba version auto localizing

Co-authored-by: Daniel Jaglowski <jaglows3@gmail.com>
kentquirk referenced this pull request in McSick/opentelemetry-collector-contrib Jun 14, 2022
…y#9224)

codeboten pushed a commit that referenced this pull request Nov 23, 2022
blumamir pushed a commit to blumamir/opentelemetry-collector-contrib that referenced this pull request Mar 26, 2023
sky333999 pushed a commit to sky333999/opentelemetry-collector-contrib that referenced this pull request May 23, 2023
Co-authored-by: matianjun1 <mtj334510983@163.com>
MaxKsyunz referenced this pull request in Bit-Quill/opentelemetry-collector-contrib Jun 2, 2023

Signed-off-by: MaxKsyunz <maxk@bitquilltech.com>
dmitryax pushed a commit that referenced this pull request Aug 1, 2023
**Description:** The metadata.yml for the SSH check receiver currently
documents a resource attribute containing the SSH endpoint, but this
attribute is not emitted. This PR updates the receiver to include it.

**Link to tracking Issue:** #24441 

**Testing:**

Example collector config:
```yaml
receivers:
  sshcheck:
    endpoint: 13.245.150.131:22
    username: ec2-user
    key_file: /Users/dewald.dejager/.ssh/sandbox.pem
    collection_interval: 15s
    known_hosts: /Users/dewald.dejager/.ssh/known_hosts
    ignore_host_key: false
    resource_attributes:
      "ssh.endpoint":
        enabled: true

exporters:
  logging:
    verbosity: detailed
  prometheus:
    endpoint: 0.0.0.0:8081
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [sshcheck]
      exporters: [logging, prometheus]
```

The log output looks like this:
```
2023-07-30T16:52:38.724+0200    info    MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "logging", "resource metrics": 1, "metrics": 2, "data points": 2}
2023-07-30T16:52:38.724+0200    info    ResourceMetrics #0
Resource SchemaURL: 
Resource attributes:
     -> ssh.endpoint: Str(13.245.150.131:22)
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope otelcol/sshcheckreceiver 0.82.0-dev
Metric #0
Descriptor:
     -> Name: sshcheck.duration
     -> Description: Measures the duration of SSH connection.
     -> Unit: ms
     -> DataType: Gauge
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 319
Metric #1
Descriptor:
     -> Name: sshcheck.status
     -> Description: 1 if the SSH client successfully connected, otherwise 0.
     -> Unit: 1
     -> DataType: Sum
     -> IsMonotonic: false
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2023-07-30 14:52:22.381672 +0000 UTC
Timestamp: 2023-07-30 14:52:38.404003 +0000 UTC
Value: 1
```

And the Prometheus metrics look like this:
```
# HELP sshcheck_duration Measures the duration of SSH connection.
# TYPE sshcheck_duration gauge
sshcheck_duration{ssh_endpoint="13.245.150.131:22"} 311
# HELP sshcheck_status 1 if the SSH client successfully connected, otherwise 0.
# TYPE sshcheck_status gauge
sshcheck_status{ssh_endpoint="13.245.150.131:22"} 1
```
codeboten pushed a commit that referenced this pull request Aug 16, 2023
**Description:** 

Adding command line argument `--status-code` to `telemetrygen traces`,
which accepts `(Unset,Error,Ok)` (case sensitive) or the enum equivalent
of `(0,1,2)`.

Running 

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code 1
```

against a minimal local collector yields

```txt
2023-07-29T21:27:57.862+0100	info	ResourceSpans #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID       : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID      : 4e2cd6e0e90cf2ea
    ID             : 20835413e32d26a5
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-07-29 20:27:57.861602 +0000 UTC
    End time       : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code    : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : f6dc4be32c78b9999c69d504a79e68c1
    Parent ID      :
    ID             : 4e2cd6e0e90cf2ea
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-07-29 20:27:57.861584 +0000 UTC
    End time       : 2023-07-29 20:27:57.861726 +0000 UTC
    Status code    : Error
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

and similarly (the string version)

```shell
telemetrygen traces --otlp-insecure --traces 1 --status-code '"Ok"'
```

produces 

```txt
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope telemetrygen
Span #0
    Trace ID       : dfd830da170acfe567b12f87685d7917
    Parent ID      : 8e15b390dc6a1ccc
    ID             : 165c300130532072
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-07-29 20:29:16.026965 +0000 UTC
    End time       : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code    : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : dfd830da170acfe567b12f87685d7917
    Parent ID      :
    ID             : 8e15b390dc6a1ccc
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-07-29 20:29:16.026956 +0000 UTC
    End time       : 2023-07-29 20:29:16.027089 +0000 UTC
    Status code    : Ok
    Status message :
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
```

The default is `Unset`, which is the current behaviour.

**Link to tracking Issue:**

24286

**Testing:**

Added unit tests which covers both valid and invalid inputs.

**Documentation:**

Command line arguments are self-documenting via the usage info in the
flag.

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
hkfgo pushed a commit to hkfgo/opentelemetry-collector-contrib that referenced this pull request Aug 28, 2023
mx-psi added a commit that referenced this pull request Nov 16, 2023
#29116)

**Description:** 

As originally proposed in #26991 before I got distracted

Exposes the duration of generated spans as a command line parameter. It
uses a `DurationVar` flag so units can be easily provided and are
automatically applied.

Example usage:

```bash
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10ns # nanoseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10us # microseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10ms # milliseconds
telemetrygen traces --traces 100 --otlp-insecure --span-duration 10s # seconds
```

**Testing:** 

Ran without the argument provided (`telemetrygen traces --traces 1
--otlp-insecure`) and saw spans published with the default value.

Ran again with the argument provided: `telemetrygen traces --traces 1
--otlp-insecure --span-duration 1s`

And observed the expected output:

```
Resource SchemaURL: https://opentelemetry.io/schemas/1.4.0
Resource attributes:
     -> service.name: Str(telemetrygen)
ScopeSpans #0
ScopeSpans SchemaURL: 
InstrumentationScope telemetrygen 
Span #0
    Trace ID       : 8b441587ffa5820688b87a6b511d634c
    Parent ID      : 39faad428638791b
    ID             : 88f0886894bd4ee2
    Name           : okey-dokey
    Kind           : Server
    Start time     : 2023-11-12 02:05:07.97443 +0000 UTC
    End time       : 2023-11-12 02:05:08.97443 +0000 UTC
    Status code    : Unset
    Status message : 
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-client)
Span #1
    Trace ID       : 8b441587ffa5820688b87a6b511d634c
    Parent ID      : 
    ID             : 39faad428638791b
    Name           : lets-go
    Kind           : Client
    Start time     : 2023-11-12 02:05:07.97443 +0000 UTC
    End time       : 2023-11-12 02:05:08.97443 +0000 UTC
    Status code    : Unset
    Status message : 
Attributes:
     -> net.peer.ip: Str(1.2.3.4)
     -> peer.service: Str(telemetrygen-server)
	{"kind": "exporter", "data_type": "traces", "name": "debug"}
```

**Documentation:** No documentation added.

---------

Co-authored-by: Pablo Baeyens <pbaeyens31+github@gmail.com>
dbason pushed a commit to open-panoptes/opentelemetry-collector-contrib that referenced this pull request Dec 7, 2023
[chore] replace adapter logger with logr interface
djaglowski pushed a commit that referenced this pull request May 14, 2024
**Description:** This PR implements the new container logs parser as it
was proposed at #31959.

**Link to tracking Issue:** #31959

**Testing:**

Added unit tests. Providing manual testing steps as well:

### How to test this manually

1. Using the following config file:
```yaml
receivers:
  filelog:
    start_at: end
    include_file_name: false
    include_file_path: true
    include:
    - /var/log/pods/*/*/*.log
    operators:
      - id: container-parser
        type: container
        output: m1
      - type: move
        id: m1
        from: attributes.k8s.pod.name
        to: attributes.val
      - id: some
        type: add
        field: attributes.key2.key_in
        value: val2

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]
      processors: []
```
2. Start the collector:
`./bin/otelcontribcol_linux_amd64 --config
~/otelcol/container_parser/config.yaml`
3. Use the following bash script to create some logs:
```bash
#! /bin/bash

echo '2024-04-13T07:59:37.505201169-05:00 stdout P This is a very very long crio line th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '{"log":"INFO: log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2024-04-13T07:59:37.505201169-05:00 stdout F at is awesome! crio is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout P some containerd log th' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
echo '{"log":"INFO: another log line here","stream":"stdout","time":"2029-03-30T08:31:20.545192187Z"}' >> /var/log/pods/kube-controller-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d6/kube-controller/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F at is super awesome! Containerd is awesome' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log



echo '2024-04-13T07:59:37.505201169-05:00 stdout F standalone crio line which is awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler43/1.log
echo '2021-06-22T10:27:25.813799277Z stdout F standalone containerd line that is super awesome!' >> /var/log/pods/kube-scheduler-kind-control-plane_49cc7c1fd3702c40b2686ea7486091d3/kube-scheduler44/1.log
```
4. Run the above as a bash script to verify any parallel processing.
Verify that the output is correct.


### Test manually on k8s

1. `make docker-otelcontribcol && docker tag otelcontribcol
otelcontribcol-dev:0.0.1 && kind load docker-image
otelcontribcol-dev:0.0.1`
2. Install using the following helm values file:
```yaml
mode: daemonset
presets:
  logsCollection:
    enabled: true

image:
  repository: otelcontribcol-dev
  tag: "0.0.1"
  pullPolicy: IfNotPresent

command:
  name: otelcontribcol

config:
  exporters:
    debug:
      verbosity: detailed
  receivers:
    filelog:
      start_at: end
      include_file_name: false
      include_file_path: true
      exclude:
        - /var/log/pods/default_daemonset-opentelemetry-collector*_*/opentelemetry-collector/*.log
      include:
        - /var/log/pods/*/*/*.log
      operators:
        - id: container-parser
          type: container
          output: some
        - id: some
          type: add
          field: attributes.key2.key_in
          value: val2


  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [batch]
        exporters: [debug]
```
3. Check collector's output to verify the logs are parsed properly:
```console
2024-05-10T07:52:02.307Z	info	LogsExporter	{"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 2}
2024-05-10T07:52:02.307Z	info	ResourceLog #0
Resource SchemaURL: 
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-05-10 07:52:02.046236071 +0000 UTC
Timestamp: 2024-05-10 07:52:01.92533954 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:01)
Attributes:
     -> log: Map({"iostream":"stdout"})
     -> time: Str(2024-05-10T07:52:01.92533954Z)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-05-10 07:52:02.046411602 +0000 UTC
Timestamp: 2024-05-10 07:52:02.027386192 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 07:52:02)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-6f6mn_1069e46b-03b2-4532-a71f-aaec06c0197b/busybox/0.log)
     -> time: Str(2024-05-10T07:52:02.027386192Z)
     -> log: Map({"iostream":"stdout"})
     -> logtag: Str(F)
     -> k8s: Map({"container":{"name":"busybox","restart_count":"0"},"namespace":{"name":"default"},"pod":{"name":"daemonset-logs-6f6mn","uid":"1069e46b-03b2-4532-a71f-aaec06c0197b"}})
     -> key2: Map({"key_in":"val2"})
Trace ID: 
Span ID: 
Flags: 0
...
```


**Documentation:** Added.

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>
codeboten pushed a commit that referenced this pull request May 30, 2024
**Description:**
Using the DB span example below, X-Ray exporter failed to generate the
expected DB call subsegment names because it could not parse JDBC
connection strings that start with the `jdbc:` prefix.
```
Span #1
    Trace ID       : 663a0b68a5e3849c09c07f914b3df738
    Parent ID      : 1052e2a4a2516884
    ID             : 374de78b552e23c2
    Name           : orders@no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com
    Kind           : Client
    Start time     : 2024-05-07 11:07:20.62 +0000 UTC
    End time       : 2024-05-07 11:07:20.624 +0000 UTC
    Status code    : Unset
    Status message :
Attributes:
     -> db.connection_string: Str(jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306)
     -> db.name: Str(orders)
     -> db.system: Str(MySQL)
     -> db.user: Str(myuser@10.0.149.233)
```

**Link to tracking Issue:**

**Testing:** local tests
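
The failure mode above comes down to URL parsing: `jdbc:mysql://…` has a `jdbc:` scheme prefix in front of an otherwise ordinary URL. A minimal Go sketch of the kind of normalization involved (`normalizeJDBC` is a hypothetical helper, not the exporter's actual code):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeJDBC strips the "jdbc:" prefix so the remaining connection
// string parses as a regular URL whose host can be used when building
// the DB call subsegment name.
func normalizeJDBC(conn string) (*url.URL, error) {
	return url.Parse(strings.TrimPrefix(conn, "jdbc:"))
}

func main() {
	u, err := normalizeJDBC("jdbc:mysql://no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com:3306")
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Hostname()) // no-appsignals-mysql-1.cnkqok6c8mo1.eu-west-1.rds.amazonaws.com
}
```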
djaglowski pushed a commit that referenced this pull request Jun 4, 2024
…33353)

**Description:** Container parser should add k8s metadata as resource
attributes and not as log record attributes.

**Link to tracking Issue:** Fixes #33341

**Testing:** Manual testing on local k8s cluster:

```console
2024-06-04T06:40:08.219Z	info	ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> k8s.pod.uid: Str(d5ecc924-e255-4525-b5be-6437939b1e4d)
     -> k8s.container.name: Str(busybox)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(daemonset-logs-dhzcq)
     -> k8s.container.restart_count: Str(0)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-06-04 06:40:08.007370503 +0000 UTC
Timestamp: 2024-06-04 06:40:07.855932421 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> logtag: Str(F)
     -> key2: Map({"key_in":"val2"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> time: Str(2024-06-04T06:40:07.855932421Z)
     -> log.iostream: Str(stdout)
Trace ID: 
Span ID: 
Flags: 0
LogRecord #1
ObservedTimestamp: 2024-06-04 06:40:08.007451031 +0000 UTC
Timestamp: 2024-06-04 06:40:07.957875321 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 06:40:07)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-dhzcq_d5ecc924-e255-4525-b5be-6437939b1e4d/busybox/0.log)
     -> log.iostream: Str(stdout)
     -> time: Str(2024-06-04T06:40:07.957875321Z)
     -> key2: Map({"key_in":"val2"})
     -> logtag: Str(F)
Trace ID: 
Span ID: 
Flags: 0
```

**Documentation:** ~

---------

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>
andrzej-stencel pushed a commit that referenced this pull request Jul 3, 2024
…try.log_response_body` config (#33854)

**Description:**
- Add `telemetry.log_request_body` and `telemetry.log_response_body`
config for debugging. Debug log will contain field `request_body` and/or
`response_body` in the same log line instead of separate lines to avoid
interleaved log lines.
- Change "Request failed" log level to debug.

Output:
```
2024-07-02T14:09:24.983+0100	debug	elasticsearchexporter/elasticsearch_bulk.go:67	Request roundtrip completed.	{"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "response_body": "{\"version\":{\"number\":\"1.2.3\"}}\n", "path": "/", "method": "GET", "duration": 0.000865486, "status": "200 OK"}
2024-07-02T14:09:24.984+0100	debug	elasticsearchexporter/elasticsearch_bulk.go:67	Request roundtrip completed.	{"kind": "exporter", "data_type": "logs", "name": "elasticsearch", "request_body": "{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_1\"},\"Body\":\"Load Generator Counter #0\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n{\"create\":{\"_index\":\"logs-test-idx\"}}\n{\"@timestamp\":\"2024-07-02T13:09:24.970187592Z\",\"Attributes\":{\"a\":\"test\",\"b\":5,\"batch_index\":\"batch_1\",\"c\":3,\"d\":true,\"item_index\":\"item_2\"},\"Body\":\"Load Generator Counter #1\",\"Scope\":{\"name\":\"\",\"version\":\"\"},\"SeverityNumber\":11,\"SeverityText\":\"INFO3\",\"TraceFlags\":1}\n", "response_body": "{\"took\":0,\"errors\":false,\"items\":[{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}},{\"create\":{\"_index\":\"logs-test-idx\",\"_id\":\"\",\"_version\":0,\"result\":\"\",\"status\":201,\"_seq_no\":0,\"_primary_term\":0,\"_shards\":{\"total\":0,\"successful\":0,\"failed\":0},\"error\":{\"type\":\"\",\"reason\":\"\",\"caused_by\":{\"type\":\"\",\"reason\":\"\"}}}}]}\n", "path": "/_bulk", "method": "POST", "duration": 0.000539979, "status": "200 OK"}
```

Required config to log request and response bodies:
```
exporters:
  elasticsearch:
    telemetry:
      log_request_body: true
      log_response_body: true
    
service:
  telemetry:
    logs:
      level: debug
```

For easier analysis, limit the request body size: use `num_workers: 1`
and lower `flush.bytes` and/or `flush.interval`.
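
A sketch of those settings in context (the `num_workers` and `flush` keys follow the exporter's README at the time; the values here are illustrative):

```yaml
exporters:
  elasticsearch:
    num_workers: 1      # single bulk indexer worker, so requests are serialized
    flush:
      bytes: 4096       # flush small batches so each request_body stays short
      interval: 1s
    telemetry:
      log_request_body: true
      log_response_body: true
```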

**Link to tracking Issue:**

**Testing:**

Manually verified with a modified integration test.

**Documentation:**
@yanfeng1992 yanfeng1992 mentioned this pull request Aug 6, 2024
TylerHelmuth added a commit that referenced this pull request Sep 21, 2024
… Histo --> Histogram (#33824)

## Description

This PR adds a custom metric function to the transformprocessor to
convert exponential histograms to explicit histograms.

Link to tracking issue: Resolves #33827

**Function Name**
```
convert_exponential_histogram_to_explicit_histogram
```

**Arguments:**

- `distribution` (_upper, midpoint, uniform, random_)
- `ExplicitBoundaries: []float64`

**Usage example:**

```yaml
processors:
  transform:
    error_mode: propagate
    metric_statements:
    - context: metric
      statements:
        - convert_exponential_histogram_to_explicit_histogram("random", [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]) 
```

**Converts:**

```
Resource SchemaURL: 
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope  
Metric #0
Descriptor:
     -> Name: response_time
     -> Description: 
     -> Unit: 
     -> DataType: ExponentialHistogram
     -> AggregationTemporality: Delta
ExponentialHistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-31 09:35:25.212037 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
Bucket (32.000000, 64.000000], Count: 10
Bucket (64.000000, 128.000000], Count: 22
Bucket (128.000000, 256.000000], Count: 12
        {"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

**To:**

```
Resource SchemaURL: 
ScopeMetrics #0
ScopeMetrics SchemaURL: 
InstrumentationScope  
Metric #0
Descriptor:
     -> Name: response_time
     -> Description: 
     -> Unit: 
     -> DataType: Histogram
     -> AggregationTemporality: Delta
HistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-30 21:37:07.830902 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
ExplicitBounds #0: 10.000000
ExplicitBounds #1: 20.000000
ExplicitBounds #2: 30.000000
ExplicitBounds #3: 40.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 60.000000
ExplicitBounds #6: 70.000000
ExplicitBounds #7: 80.000000
ExplicitBounds #8: 90.000000
ExplicitBounds #9: 100.000000
Buckets #0, Count: 0
Buckets #1, Count: 0
Buckets #2, Count: 0
Buckets #3, Count: 2
Buckets #4, Count: 5
Buckets #5, Count: 0
Buckets #6, Count: 3
Buckets #7, Count: 7
Buckets #8, Count: 2
Buckets #9, Count: 4
Buckets #10, Count: 21
        {"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

### Testing

- Several unit tests have been created. We have also tested by ingesting
and converting exponential histograms from the `statsdreceiver` as well
as directly via the `otlpreceiver` over grpc over several hours with a
large amount of data.

- We have clients that have been running this solution in production for
a number of weeks.

### Readme description:

### convert_exponential_hist_to_explicit_hist

`convert_exponential_hist_to_explicit_hist([ExplicitBounds])`

the `convert_exponential_hist_to_explicit_hist` function converts an
ExponentialHistogram to an Explicit (_normal_) Histogram.

`ExplicitBounds` represents the list of bucket boundaries for the new
histogram. This argument is __required__ and __cannot be empty__.

__WARNING:__

The process of converting an ExponentialHistogram to an Explicit
Histogram is not perfect and may result in a loss of precision. It is
important to define an appropriate set of bucket boundaries to minimize
this loss. For example, selecting boundaries that are too high or too
low may result in histogram buckets that are too wide or too narrow,
respectively.

---------

Co-authored-by: Kent Quirk <kentquirk@gmail.com>
Co-authored-by: Tyler Helmuth <12352919+TylerHelmuth@users.noreply.github.com>
edma2 pushed a commit to edma2/opentelemetry-collector-contrib that referenced this pull request Oct 3, 2024
* add CI and project.yml

* add caching

* add lint check

* add lint check

* rearrange steps

* rearrange steps and generify CI job for any component

* rearrange steps and generify CI job for any component

* add MANDATORY_REVIEWERS
dmitryax pushed a commit that referenced this pull request Oct 7, 2024
**Description:**

As described in #35491, it is useful to give users the option of
defining `receiver_creator`'s templates per container.

In this regard, the current PR introduces a new type of Endpoint called
`PodContainer` that matches the rule type `pod.container`. This Endpoint
is emitted for each container of the Pod similarly to how the `Port`
Endpoints are emitted per container that defines a port.

A complete example of how to use this feature to apply different parsing
to each of the Pod's containers is provided in the `How to test this
manually` section.

**Link to tracking Issue:** Fixes #35491

**Testing:** TBA

**Documentation:** TBA


### How to test this manually

1. Use the following values file to deploy the Collector's Helm chart
```yaml
mode: daemonset

image:
  repository: otelcontribcol-dev
  tag: "latest"
  pullPolicy: IfNotPresent

command:
  name: otelcontribcol

clusterRole:
  create: true
  rules:
   - apiGroups:
     - ''
     resources:
     - 'pods'
     - 'nodes'
     verbs:
     - 'get'
     - 'list'
     - 'watch'
   - apiGroups: [ "" ]
     resources: [ "nodes/proxy"]
     verbs: [ "get" ]
   - apiGroups:
       - ""
     resources:
       - nodes/stats
     verbs:
       - get
   - nonResourceURLs:
       - "/metrics"
     verbs:
       - get

extraVolumeMounts:
 - name: varlogpods
   mountPath: /var/log/pods
   readOnly: true

extraVolumes:
  - name: varlogpods
    hostPath:
      path: /var/log/pods

config:
  extensions:
    k8s_observer:
      auth_type: serviceAccount
      node: ${env:K8S_NODE_NAME}
      observe_nodes: true
  exporters:
    debug:
      verbosity: basic
  receivers:
    receiver_creator/logs:
      watch_observers: [ k8s_observer ]
      receivers:
        filelog/busybox:
          rule: type == "pod.container" && pod.labels["otel.logs"] == "true" && container_name == "busybox"
          config:
            include:
              - /var/log/pods/`pod.namespace`_`pod.name`_`pod.uid`/`container_name`/*.log
            include_file_name: false
            include_file_path: true
            operators:
              - id: container-parser
                type: container
              - type: add
                field: attributes.log.template
                value: busybox
        filelog/lazybox:
          rule: type == "pod.container" && pod.labels["otel.logs"] == "true" && container_name == "lazybox"
          config:
            include:
              - /var/log/pods/`pod.namespace`_`pod.name`_`pod.uid`/`container_name`/*.log
            include_file_name: false
            include_file_path: true
            operators:
              - id: container-parser
                type: container
              - type: add
                field: attributes.log.template
                value: lazybox
  service:
    extensions: [health_check, k8s_observer]
    pipelines:
      logs:
        receivers: [receiver_creator/logs]
        processors: [batch]
        exporters: [debug]
```
2. Follow the logs of the Collector's Pod i.e: `k logs -f
daemonset-opentelemetry-collector-agent-2hrg5`
3. Deploy a sample Pod which consists of 2 different containers:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-logs
  labels:
    app: daemonset-logs
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: migration-logger
      otel.logs: "true"
  template:
    metadata:
      labels:
        app.kubernetes.io/component: migration-logger
        otel.logs: "true"
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: lazybox
          image: busybox
          args:
            - /bin/sh
            - -c
            - while true; do echo "otel logs at $(date +%H:%M:%S)" && sleep 0.1s; done
        - name: busybox
          image: busybox
          args:
            - /bin/sh
            - -c
            - while true; do echo "otel logs at $(date +%H:%M:%S)" && sleep 0.1s; done
```

Verify in the logs that only 2 filelog receivers are started, one per
container:

```console
2024-10-02T12:05:17.506Z	info	receivercreator@v0.110.0/observerhandler.go:96	starting receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/lazybox", "endpoint": "10.244.0.13", "endpoint_id": "k8s_observer/01543800-cfea-4c10-8220-387e60f65151/lazybox"}
2024-10-02T12:05:17.508Z	info	adapter/receiver.go:47	Starting stanza receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/lazybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/lazybox"}
2024-10-02T12:05:17.508Z	info	receivercreator@v0.110.0/observerhandler.go:96	starting receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/busybox", "endpoint": "10.244.0.13", "endpoint_id": "k8s_observer/01543800-cfea-4c10-8220-387e60f65151/busybox"}
2024-10-02T12:05:17.510Z	info	adapter/receiver.go:47	Starting stanza receiver	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/busybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/busybox"}
2024-10-02T12:05:17.709Z	info	fileconsumer/file.go:256	Started watching file	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/lazybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/lazybox", "component": "fileconsumer", "path": "/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/lazybox/0.log"}
2024-10-02T12:05:17.712Z	info	fileconsumer/file.go:256	Started watching file	{"kind": "receiver", "name": "receiver_creator/logs", "data_type": "logs", "name": "filelog/busybox/receiver_creator/logs{endpoint=\"10.244.0.13\"}/k8s_observer/01543800-cfea-4c10-8220-387e60f65151/busybox", "component": "fileconsumer", "path": "/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/busybox/0.log"}
```

In addition verify that the proper attributes are added per container
according to the 2 different filelog receiver definitions:


```console
2024-10-02T12:23:55.117Z	info	ResourceLog #0
Resource SchemaURL: 
Resource attributes:
     -> k8s.pod.name: Str(daemonset-logs-sz4zk)
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.uid: Str(01543800-cfea-4c10-8220-387e60f65151)
     -> k8s.container.name: Str(lazybox)
     -> k8s.namespace.name: Str(default)
     -> container.id: Str(63a8e69bdc6ee95ee7918baf913a548190f32838adeb0e6189a8210e05157b40)
     -> container.image.name: Str(busybox)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-10-02 12:23:54.896772888 +0000 UTC
Timestamp: 2024-10-02 12:23:54.750904381 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 12:23:54)
Attributes:
     -> log.iostream: Str(stdout)
     -> logtag: Str(F)
     -> log: Map({"template":"lazybox"})
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/lazybox/0.log)
Trace ID: 
Span ID: 
Flags: 0
ResourceLog #1
Resource SchemaURL: 
Resource attributes:
     -> k8s.container.restart_count: Str(0)
     -> k8s.pod.uid: Str(01543800-cfea-4c10-8220-387e60f65151)
     -> k8s.container.name: Str(busybox)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(daemonset-logs-sz4zk)
     -> container.id: Str(47163758424f2bc5382b1e9702301be23cab368b590b5fbf0b30affa09b4a199)
     -> container.image.name: Str(busybox)
ScopeLogs #0
ScopeLogs SchemaURL: 
InstrumentationScope  
LogRecord #0
ObservedTimestamp: 2024-10-02 12:23:54.897788935 +0000 UTC
Timestamp: 2024-10-02 12:23:54.749885634 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(otel logs at 12:23:54)
Attributes:
     -> log.file.path: Str(/var/log/pods/default_daemonset-logs-sz4zk_01543800-cfea-4c10-8220-387e60f65151/busybox/0.log)
     -> logtag: Str(F)
     -> log.iostream: Str(stdout)
     -> log: Map({"template":"busybox"})
Trace ID: 
Span ID: 
Flags: 0
```

Signed-off-by: ChrsMark <chrismarkou92@gmail.com>