unable to determine resource for scale target reference #19940

Closed
mfojtik opened this issue Jun 8, 2018 · 8 comments

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/pod

Comments

@mfojtik commented Jun 8, 2018

In ca-central-1 I can see a lot of errors like this in the controller manager:

1 horizontal.go:189] unable to determine resource for scale target reference: no matches for kind "DeploymentConfig" in group "extensions"
1 horizontal.go:189] unable to determine resource for scale target reference: no matches for kind "ReplicationController" in group "apps"

We do carry a patch here, I believe: https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go#L402

We carry this so we can handle the OpenShift oapi DeploymentConfig. What is not clear to me is why the passed GroupKind is "extensions.DeploymentConfig", and also why we see "apps.ReplicationController" (ReplicationController is part of the core group).

We are getting the GV from here: https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go#L388 (do we have some HPAs created with a wrong APIVersion?)
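
For illustration, an HPA of roughly this shape (all names below are made up) would produce the first error above: the controller splits scaleTargetRef.apiVersion into a group and version, pairs the group with scaleTargetRef.kind, and the REST mapping lookup then fails because no resource serves that kind in that group.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                  # hypothetical name
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: extensions/v1beta1   # group "extensions" does not serve DeploymentConfig
    kind: DeploymentConfig
    name: example-dc                 # hypothetical target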

/cc @DirectXMan12
/cc @deads2k
/cc @liggitt

@mfojtik commented Jun 8, 2018

OK, I checked and we indeed have some HPAs with a wrong scaleTargetRef:

...
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: ReplicationController

@DirectXMan12 shouldn't we validate this when the HPA is created and refuse to create an HPA that points to a non-existent resource? (Well, that resource might start to exist later...)
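
For reference, ReplicationController lives in the core API group, so the corrected reference should look roughly like this (the target name is a placeholder, since it is not shown above):

spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: v1                # core group, not "apps"
    kind: ReplicationController
    name: example-rc              # placeholder name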

@liggitt commented Jun 8, 2018

See #18517 (comment), which was supposed to migrate these references.

@DirectXMan12 commented

I know about extensions.DC (the web console was doing interesting stuff), but I've never seen apps.RC, and I don't think we handle that, because it was not a known case (the web console creates everything as extensions.XYZ, so all of those should be handled).

@DirectXMan12 commented
Any idea what created apps.RC? Could just be a user error.

@openshift-bot commented
Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 1, 2018
@openshift-merge-robot commented
Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 31, 2018
@openshift-bot commented
Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot commented
@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
