[k8sclusterreceiver] refactor metric units to follow Otel conventions
povilasv committed Sep 15, 2023
1 parent 403626b commit 8dba9f5
Showing 12 changed files with 146 additions and 119 deletions.
27 changes: 27 additions & 0 deletions .chloggen/k8sclusterreceiver-fix-units.yaml
@@ -0,0 +1,27 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: 'bug_fix'

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: 'k8sclusterreceiver'

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: "Change k8scluster receiver metric units to follow otel semantic conventions"

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [10553]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [user]
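For context, the units this commit renames are declared in the receiver's generated-metrics definition file. The fragment below is an illustrative sketch only (it is not part of this diff; field names follow the collector's mdatagen `metadata.yaml` schema) showing the shape of the change from the bare dimensionless unit `"1"` to a UCUM curly-brace annotation:

```yaml
# Illustrative fragment of receiver/k8sclusterreceiver/metadata.yaml
# (not taken from this commit; shape follows the mdatagen schema).
metrics:
  k8s.cronjob.active_jobs:
    enabled: true
    description: The number of actively running jobs for a cronjob
    unit: "{job}"   # previously "1"
    gauge:
      value_type: int
```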
44 changes: 22 additions & 22 deletions receiver/k8sclusterreceiver/documentation.md
@@ -98,39 +98,39 @@ The number of actively running jobs for a cronjob

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {job} | Gauge | Int |

### k8s.daemonset.current_scheduled_nodes

Number of nodes that are running at least 1 daemon pod and are supposed to run the daemon pod

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |

### k8s.daemonset.desired_scheduled_nodes

Number of nodes that should be running the daemon pod (including nodes currently running the daemon pod)

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |

### k8s.daemonset.misscheduled_nodes

Number of nodes that are running the daemon pod, but are not supposed to run the daemon pod

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |

### k8s.daemonset.ready_nodes

Number of nodes that should be running the daemon pod and have one or more of the daemon pod running and ready

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {node} | Gauge | Int |
### k8s.deployment.available
@@ -154,71 +154,71 @@ Current number of pod replicas managed by this autoscaler.

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.hpa.desired_replicas

Desired number of pod replicas managed by this autoscaler.

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.hpa.max_replicas

Maximum number of replicas to which the autoscaler can scale up.

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.hpa.min_replicas

Minimum number of replicas to which the autoscaler can scale down.

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.job.active_pods

The number of actively running pods for a job

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.job.desired_successful_pods

The desired number of successfully finished pods the job should be run with

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.job.failed_pods

The number of pods which reached phase Failed for a job

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.job.max_parallel_pods

The max desired number of pods the job should run at any given time

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.job.successful_pods

The number of pods which reached phase Succeeded for a job

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |
### k8s.namespace.phase
Expand All @@ -242,31 +242,31 @@ Total number of available pods (ready for at least minReadySeconds) targeted by
| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
| 1 | Gauge | Int |
| {pod} | Gauge | Int |
### k8s.replicaset.desired
Number of desired pods in this replicaset
| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
| 1 | Gauge | Int |
| {pod} | Gauge | Int |
### k8s.replication_controller.available
Total number of available pods (ready for at least minReadySeconds) targeted by this replication_controller
| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
| 1 | Gauge | Int |
| {pod} | Gauge | Int |
### k8s.replication_controller.desired
Number of desired pods in this replication_controller
| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
| 1 | Gauge | Int |
| {pod} | Gauge | Int |
### k8s.resource_quota.hard_limit
@@ -302,31 +302,31 @@ The number of pods created by the StatefulSet controller from the StatefulSet ve

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.statefulset.desired_pods

Number of desired pods in the stateful set (the `spec.replicas` field)

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.statefulset.ready_pods

Number of pods created by the stateful set that have the `Ready` condition

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |

### k8s.statefulset.updated_pods

Number of pods created by the StatefulSet controller from the StatefulSet version

| Unit | Metric Type | Value Type |
| ---- | ----------- | ---------- |
-| 1 | Gauge | Int |
+| {pod} | Gauge | Int |
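The pattern across all of the tables above is mechanical: count-style gauges move from the bare UCUM unit `1` to an annotated form such as `{pod}` or `{node}`, which is still semantically dimensionless but records what is being counted. A small, hypothetical Python check (the helper, regex, and sample dict are illustrative only, not part of the collector) makes the convention concrete:

```python
import re

# A UCUM curly-brace annotation: "{pod}", "{node}", "{job}", ...
# Annotations are semantically dimensionless, like the bare unit "1",
# but they record *what* is being counted.
ANNOTATION_RE = re.compile(r"^\{[a-z_.]+\}$")

def is_annotated_count_unit(unit: str) -> bool:
    """Return True for annotated count units like '{pod}', False for '1'."""
    return bool(ANNOTATION_RE.match(unit))

# Units after this change, keyed by metric name (sampled from the tables above).
renamed = {
    "k8s.cronjob.active_jobs": "{job}",
    "k8s.daemonset.ready_nodes": "{node}",
    "k8s.hpa.desired_replicas": "{pod}",
    "k8s.statefulset.updated_pods": "{pod}",
}

for metric, unit in renamed.items():
    assert is_annotated_count_unit(unit), metric  # all annotated now
assert not is_annotated_count_unit("1")           # the old bare unit
```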

### openshift.appliedclusterquota.limit

@@ -21,7 +21,7 @@ resourceMetrics:
dataPoints:
- asInt: "2"
name: k8s.cronjob.active_jobs
-unit: "1"
+unit: "{job}"
scope:
name: otelcol/k8sclusterreceiver
version: latest
@@ -21,25 +21,25 @@ resourceMetrics:
dataPoints:
- asInt: "3"
name: k8s.daemonset.current_scheduled_nodes
-unit: "1"
+unit: "{node}"
- description: Number of nodes that should be running the daemon pod (including nodes currently running the daemon pod)
gauge:
dataPoints:
- asInt: "5"
name: k8s.daemonset.desired_scheduled_nodes
-unit: "1"
+unit: "{node}"
- description: Number of nodes that are running the daemon pod, but are not supposed to run the daemon pod
gauge:
dataPoints:
- asInt: "1"
name: k8s.daemonset.misscheduled_nodes
-unit: "1"
+unit: "{node}"
- description: Number of nodes that should be running the daemon pod and have one or more of the daemon pod running and ready
gauge:
dataPoints:
- asInt: "2"
name: k8s.daemonset.ready_nodes
-unit: "1"
+unit: "{node}"
scope:
name: otelcol/k8sclusterreceiver
version: latest
10 changes: 5 additions & 5 deletions receiver/k8sclusterreceiver/internal/jobs/testdata/expected.yaml
@@ -21,31 +21,31 @@ resourceMetrics:
dataPoints:
- asInt: "2"
name: k8s.job.active_pods
-unit: "1"
+unit: "{pod}"
- description: The number of pods which reached phase Failed for a job
gauge:
dataPoints:
- asInt: "0"
name: k8s.job.failed_pods
-unit: "1"
+unit: "{pod}"
- description: The number of pods which reached phase Succeeded for a job
gauge:
dataPoints:
- asInt: "3"
name: k8s.job.successful_pods
-unit: "1"
+unit: "{pod}"
- description: The desired number of successfully finished pods the job should be run with
gauge:
dataPoints:
- asInt: "10"
name: k8s.job.desired_successful_pods
-unit: "1"
+unit: "{pod}"
- description: The max desired number of pods the job should run at any given time
gauge:
dataPoints:
- asInt: "2"
name: k8s.job.max_parallel_pods
-unit: "1"
+unit: "{pod}"
scope:
name: otelcol/k8sclusterreceiver
version: latest
@@ -21,19 +21,19 @@ resourceMetrics:
dataPoints:
- asInt: "2"
name: k8s.job.active_pods
-unit: "1"
+unit: "{pod}"
- description: The number of pods which reached phase Failed for a job
gauge:
dataPoints:
- asInt: "0"
name: k8s.job.failed_pods
-unit: "1"
+unit: "{pod}"
- description: The number of pods which reached phase Succeeded for a job
gauge:
dataPoints:
- asInt: "3"
name: k8s.job.successful_pods
-unit: "1"
+unit: "{pod}"
scope:
name: otelcol/k8sclusterreceiver
version: latest