
HorizontalPodAutoscaler causes repeated "configured" logs in kustomize log #494

Closed
Legion2 opened this issue Nov 18, 2021 · 10 comments · Fixed by #526

@Legion2

Legion2 commented Nov 18, 2021

I have two HorizontalPodAutoscalers reconciled by a Kustomization. Both are created as expected by the Kustomization; however, the following is shown in the events of the Kustomization:

Normal  info  2m40s (x542 over 4d3h)  kustomize-controller  HorizontalPodAutoscaler/foo/hpa configured HorizontalPodAutoscaler/bar/hpa configured

This is the resource yaml:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 3
      - type: Pods
        value: 4
        periodSeconds: 3
      selectPolicy: Max
  metrics:
    - type: External
      external:
        metric:
          name: httpproxy_requests_active
          selector:
            matchLabels:
              my-label: "metric-1"
        target: 
          type: AverageValue
          value: "5"
@stefanprodan
Member

Can you please post the output of kubectl get HorizontalPodAutoscaler hpa --show-managed-fields here?

@Legion2
Author

Legion2 commented Nov 23, 2021

kubectl get HorizontalPodAutoscaler hpa --show-managed-fields -o yaml:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    autoscaling.alpha.kubernetes.io/behavior: '{"ScaleUp":{"StabilizationWindowSeconds":0,"SelectPolicy":"Max","Policies":[{"Type":"Percent","Value":100,"PeriodSeconds":3},{"Type":"Pods","Value":4,"PeriodSeconds":3}]},"ScaleDown":{"StabilizationWindowSeconds":60,"SelectPolicy":"Max","Policies":[{"Type":"Percent","Value":100,"PeriodSeconds":15}]}}'
    autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2021-11-22T09:44:08Z","reason":"ReadyForNewScale","message":"recommended
      size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2021-11-22T09:45:38Z","reason":"ValidMetricFound","message":"the
      HPA was able to successfully calculate a replica count from external metric
      httpproxy_requests_active(\u0026LabelSelector{MatchLabels:map[string]string{my-label:
      metric-1,},MatchExpressions:[]LabelSelectorRequirement{},})"},{"type":"ScalingLimited","status":"True","lastTransitionTime":"2021-11-22T09:45:38Z","reason":"TooFewReplicas","message":"the
      desired replica count is less than the minimum replica count"}]'
    autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"External","external":{"metricName":"httpproxy_requests_active","metricSelector":{"matchLabels":{"my-label":"metric-1"}},"currentValue":"0"}}]'
    autoscaling.alpha.kubernetes.io/metrics: '[{"type":"External","external":{"metricName":"httpproxy_requests_active","metricSelector":{"matchLabels":{"my-label":"metric-1"}},"targetValue":"5"}}]'
  creationTimestamp: "2021-11-22T09:43:53Z"
  labels:
    kustomize.toolkit.fluxcd.io/name: apps
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  managedFields:
  - apiVersion: autoscaling/v2beta2
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
      f:spec:
        f:behavior:
          f:scaleDown:
            f:policies: {}
            f:stabilizationWindowSeconds: {}
          f:scaleUp:
            f:policies: {}
            f:selectPolicy: {}
            f:stabilizationWindowSeconds: {}
        f:maxReplicas: {}
        f:metrics: {}
        f:minReplicas: {}
        f:scaleTargetRef:
          f:apiVersion: {}
          f:kind: {}
          f:name: {}
      f:status:
        f:conditions: {}
        f:currentMetrics: {}
        f:currentReplicas: {}
        f:desiredReplicas: {}
    manager: kustomize-controller
    operation: Apply
    time: "2021-11-23T13:30:11Z"
  - apiVersion: autoscaling/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:lastScaleTime: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-11-22T17:46:06Z"
  name: hpa
  namespace: foo
  resourceVersion: "324570024"
  uid: 8813043b-bc8d-400a-ba2a-fe90da576bbc
spec:
  maxReplicas: 10
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
status:
  currentReplicas: 2
  desiredReplicas: 2
  lastScaleTime: "2021-11-22T09:44:08Z"

kubectl version:

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-eks-d88609", GitCommit:"d886092805d5cc3a47ed5cf0c43de38ce442dfcb", GitTreeState:"clean", BuildDate:"2021-07-31T00:29:12Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}

@stefanprodan
Member

Please do kubectl get horizontalpodautoscalers.v2beta2.autoscaling

@Legion2
Author

Legion2 commented Nov 23, 2021

kubectl get horizontalpodautoscalers.v2beta2.autoscaling hpa -o yaml --show-managed-fields:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: "2021-11-22T09:43:53Z"
  labels:
    kustomize.toolkit.fluxcd.io/name: apps
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  managedFields:
  - apiVersion: autoscaling/v2beta2
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          f:kustomize.toolkit.fluxcd.io/name: {}
          f:kustomize.toolkit.fluxcd.io/namespace: {}
      f:spec:
        f:behavior:
          f:scaleDown:
            f:policies: {}
            f:stabilizationWindowSeconds: {}
          f:scaleUp:
            f:policies: {}
            f:selectPolicy: {}
            f:stabilizationWindowSeconds: {}
        f:maxReplicas: {}
        f:metrics: {}
        f:minReplicas: {}
        f:scaleTargetRef:
          f:apiVersion: {}
          f:kind: {}
          f:name: {}
      f:status:
        f:conditions: {}
        f:currentMetrics: {}
        f:currentReplicas: {}
        f:desiredReplicas: {}
    manager: kustomize-controller
    operation: Apply
    time: "2021-11-23T13:50:37Z"
  - apiVersion: autoscaling/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:lastScaleTime: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-11-22T17:46:06Z"
  name: hpa
  namespace: foo
  resourceVersion: "324594649"
  uid: 8813043b-bc8d-400a-ba2a-fe90da576bbc
spec:
  behavior:
    scaleDown:
      policies:
      - periodSeconds: 15
        type: Percent
        value: 100
      selectPolicy: Max
      stabilizationWindowSeconds: 60
    scaleUp:
      policies:
      - periodSeconds: 3
        type: Percent
        value: 100
      - periodSeconds: 3
        type: Pods
        value: 4
      selectPolicy: Max
      stabilizationWindowSeconds: 0
  maxReplicas: 10
  metrics:
  - external:
      metric:
        name: httpproxy_requests_active
        selector:
          matchLabels:
            my-label: metric-1
      target:
        type: Value
        value: "5"
    type: External
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
status:
  conditions:
  - lastTransitionTime: "2021-11-22T09:44:08Z"
    message: recommended size matches current size
    reason: ReadyForNewScale
    status: "True"
    type: AbleToScale
  - lastTransitionTime: "2021-11-22T09:45:38Z"
    message: 'the HPA was able to successfully calculate a replica count from external
      metric httpproxy_requests_active(&LabelSelector{MatchLabels:map[string]string{my-label:
      metric-1,},MatchExpressions:[]LabelSelectorRequirement{},})'
    reason: ValidMetricFound
    status: "True"
    type: ScalingActive
  - lastTransitionTime: "2021-11-22T09:45:38Z"
    message: the desired replica count is less than the minimum replica count
    reason: TooFewReplicas
    status: "True"
    type: ScalingLimited
  currentMetrics:
  - external:
      current:
        value: "0"
      metric:
        name: httpproxy_requests_active
        selector:
          matchLabels:
            my-label: metric-1
    type: External
  currentReplicas: 2
  desiredReplicas: 2
  lastScaleTime: "2021-11-22T09:44:08Z"

@stefanprodan
Member

Hmm, I can't reproduce this on Kubernetes v1.21. Can you please copy the spec as it is on the server, paste it into Git, and see if the drift stops?

@Legion2
Author

Legion2 commented Nov 23, 2021

The difference between the source and the server spec is spec.metrics[0].external.target.type. In the source it is AverageValue and on the server it is Value.
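For clarity, this is the relevant fragment from the manifests above, in Git versus what the server reports (only the target type differs):

# In Git (source)
target:
  type: AverageValue
  value: "5"

# On the server
target:
  type: Value
  value: "5"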

@Legion2
Author

Legion2 commented Nov 23, 2021

I think there is an error in my HPA: if the target type is AverageValue, there must be an averageValue key, but there is only a value key. Maybe Kubernetes defaults to the Value type because it only detects the value key. I will fix the HPA config and see if that resolves the problem. A minimal sketch of the corrected metric, assuming the intent is to scale on the average value per pod (in the autoscaling/v2beta2 API an AverageValue target uses the averageValue field instead of value):
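metrics:
  - type: External
    external:
      metric:
        name: httpproxy_requests_active
        selector:
          matchLabels:
            my-label: "metric-1"
      target:
        type: AverageValue
        # averageValue replaces value when type is AverageValue
        averageValue: "5"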

@Legion2
Author

Legion2 commented Nov 23, 2021

The problem is not fixed by using the spec from the server.

@Legion2
Author

Legion2 commented Nov 30, 2021

@stefanprodan This problem is not resolved with flux 0.24.0. Can you please reopen the issue?

@Legion2
Author

Legion2 commented Dec 3, 2021

Maybe this is related to kubernetes/kubernetes#74099
