
Extended test flake due to user not having permissions #8491

Closed
smarterclayton opened this issue Apr 13, 2016 · 3 comments
Labels
kind/test-flake Categorizes issue or PR as related to test flakes. priority/P2

Comments

@smarterclayton (Contributor)

https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_conformance/148/consoleFull

STEP: Building a namespace api object
Apr 12 20:16:53.169: INFO: configPath is now "/tmp/extended-test-local-quota-wjr2q-gsocr-user.kubeconfig"
Apr 12 20:16:53.169: INFO: The user is now "extended-test-local-quota-wjr2q-gsocr-user"
Apr 12 20:16:53.169: INFO: Creating project "extended-test-local-quota-wjr2q-gsocr"
STEP: Waiting for a default service account to be provisioned in namespace
Apr 12 20:16:53.428: INFO: Waiting up to 2m0s for service account default to be provisioned in ns extended-test-local-quota-wjr2q-gsocr
Apr 12 20:16:53.429: INFO: Service account default in ns extended-test-local-quota-wjr2q-gsocr with secrets found. (1.286372ms)
STEP: make sure volume directory (/mnt/openshift-xfs-vol-dir) is on an XFS filesystem
STEP: lookup test projects fsGroup ID
Apr 12 20:16:53.431: INFO: Running 'oc get --namespace=extended-test-local-quota-wjr2q-gsocr --config=/tmp/extended-test-local-quota-wjr2q-gsocr-user.kubeconfig project extended-test-local-quota-wjr2q-gsocr --template='{{ index .metadata.annotations "openshift.io/sa.scc.supplemental-groups" }}''
Apr 12 20:16:53.515: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc get --namespace=extended-test-local-quota-wjr2q-gsocr --config=/tmp/extended-test-local-quota-wjr2q-gsocr-user.kubeconfig project extended-test-local-quota-wjr2q-gsocr --template='{{ index .metadata.annotations "openshift.io/sa.scc.supplemental-groups" }}'] []   Error from server: User "extended-test-local-quota-wjr2q-gsocr-user" cannot get projects in project "extended-test-local-quota-wjr2q-gsocr"
 Error from server: User "extended-test-local-quota-wjr2q-gsocr-user" cannot get projects in project "extended-test-local-quota-wjr2q-gsocr"
 [] <nil> 0xc208a9cec0 exit status 1 <nil> true [0xc2088b9850 0xc2088b98e8 0xc2088b98e8] [0xc2088b9850 0xc2088b98e8] [0xc2088b9858 0xc2088b98e0] [0x8fd470 0x8fd590] 0xc208a9ac00}:
Error from server: User "extended-test-local-quota-wjr2q-gsocr-user" cannot get projects in project "extended-test-local-quota-wjr2q-gsocr"
STEP: Collecting events from namespace "extended-test-local-quota-wjr2q-gsocr".
Apr 12 20:16:53.529: INFO: POD                      NODE           PHASE    GRACE  CONDITIONS
Apr 12 20:16:53.529: INFO: docker-registry-1-uche2  172.18.14.146  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-04-12 20:07:25 -0400 EDT  }]
Apr 12 20:16:53.529: INFO: router-1-deploy          172.18.14.146  Failed          []
Apr 12 20:16:53.529: INFO: router-2-6850m           172.18.14.146  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-04-12 20:08:08 -0400 EDT  }]
Apr 12 20:16:53.529: INFO: redis-master-sj62r       172.18.14.146  Running  30s    [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-04-12 20:16:50 -0400 EDT  }]
Apr 12 20:16:53.529: INFO: liveness-exec            172.18.14.146  Running         [{Ready True 0001-01-01 00:00:00 +0000 UTC 2016-04-12 20:15:41 -0400 EDT  }]
Apr 12 20:16:53.529: INFO: 
Apr 12 20:16:53.532: INFO: 
Logging node info for node 172.18.14.146

@deads2k this looks like a policy cache race, or possibly the permission assignment flake @ncdc spotted.

@smarterclayton smarterclayton added the kind/test-flake Categorizes issue or PR as related to test flakes. label Apr 13, 2016
@dgoodwin (Contributor)

@smarterclayton I assume this surfaced in #8483; does anyone know if it has surfaced before? Is it limited to this specific extended test, or does it appear in other places?

Am I correct in assuming there is no way to reproduce it reliably?

The failure is happening here: https://github.com/openshift/origin/blob/master/test/extended/localquota/local_fsgroup_quota.go#L25 As far as I can tell the test is correct, but I'm curious about the other issues alluded to above (the policy cache race and the permission assignment flake). Could anyone point me to info on these?

From openshift.log:

I0412 20:16:53.150465   17747 anyauthpassword.go:40] Got userIdentityMapping: &user.DefaultInfo{Name:"extended-test-local-quota-wjr2q-gsocr-user", UID:"06318d66-010d-11e6-a3fa-0e73eddc23d3", Groups:[]string(nil)}
I0412 20:16:53.150490   17747 basicauth.go:45] Login with provider "anypassword" succeeded for login "extended-test-local-quota-wjr2q-gsocr-user": &user.DefaultInfo{Name:"extended-test-local-quota-wjr2q-gsocr-user", UID:"06318d66-010d-11e6-a3fa-0e73eddc23d3", Groups:[]string(nil)}
I0412 20:16:53.150508   17747 authenticator.go:38] OAuth authentication succeeded: &user.DefaultInfo{Name:"extended-test-local-quota-wjr2q-gsocr-user", UID:"06318d66-010d-11e6-a3fa-0e73eddc23d3", Groups:[]string(nil)}
I0412 20:16:53.181038   17747 namespace_controller.go:162] Finished syncing namespace "extended-test-local-quota-wjr2q-gsocr" (633ns)
I0412 20:16:53.217385   17747 namespace_controller.go:162] Finished syncing namespace "extended-test-local-quota-wjr2q-gsocr" (627ns)
I0412 20:16:53.289128   17747 create_dockercfg_secrets.go:161] View of ServiceAccount extended-test-local-quota-wjr2q-gsocr/builder is not up to date, skipping dockercfg creation
I0412 20:16:53.350318   17747 create_dockercfg_secrets.go:161] View of ServiceAccount extended-test-local-quota-wjr2q-gsocr/deployer is not up to date, skipping dockercfg creation
I0412 20:16:53.384467   17747 manager.go:1400] Container "a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c e2e-tests-kubectl-2blzh/redis-master-sj62r" exited after 277.964817ms
I0412 20:16:53.406285   17747 create_dockercfg_secrets.go:161] View of ServiceAccount extended-test-local-quota-wjr2q-gsocr/default is not up to date, skipping dockercfg creation
I0412 20:16:53.427642   17747 trace.go:57] Trace "Create /oapi/v1/projectrequests" (started 2016-04-12 20:16:53.171485594 -0400 EDT):
[15.231µs] [15.231µs] About to convert to expected version
[52.785µs] [37.554µs] Conversion done
[61.256µs] [8.471µs] About to store object in database
[256.069048ms] [256.007792ms] Object stored in database
[256.074949ms] [5.901µs] Self-link added
[256.127753ms] [52.804µs] END

If the timestamps can be relied on, we failed before this section:

I0412 20:16:53.539693   17747 proxy.go:178] [43d814e16ef5b6e8] Beginning proxy /api/v1/proxy/nodes/172.18.14.146:10250/runningpods...
I0412 20:16:53.539916   17747 server.go:1100] GET /runningpods: (18.005µs) 301 [[extended.test/v1.2.0 (linux/amd64) kubernetes/eb3e7a8] 172.18.14.146:54398]
I0412 20:16:53.540110   17747 proxy.go:180] [43d814e16ef5b6e8] Proxy /api/v1/proxy/nodes/172.18.14.146:10250/runningpods finished 425.652µs.
I0412 20:16:53.542549   17747 proxy.go:178] [16c4bd14e0669a05] Beginning proxy /api/v1/proxy/nodes/172.18.14.146:10250/runningpods/...
I0412 20:16:53.542876   17747 node_auth.go:142] Node request attributes: namespace=, user=system:openshift-node-admin, groups=[system:node-admins system:authenticated], attrs=authorizer.DefaultAuthorizationAttributes{Verb:"proxy", APIVersion:"v1", APIGroup:"", Resource:"nodes", ResourceName:"172.18.14.146", RequestAttributes:interface {}(nil), NonResourceURL:false, URL:"/runningpods/"}
I0412 20:16:53.544227   17747 authorizer.go:74] allowed=true, reason=allowed by cluster rule
I0412 20:16:53.547001   17747 server.go:1100] GET /runningpods/: (4.315465ms) 200 [[extended.test/v1.2.0 (linux/amd64) kubernetes/eb3e7a8] 172.18.14.146:54398]
I0412 20:16:53.547167   17747 proxy.go:180] [16c4bd14e0669a05] Proxy /api/v1/proxy/nodes/172.18.14.146:10250/runningpods/ finished 4.621385ms.
I0412 20:16:53.550944   17747 proxy.go:178] [6e5fb0c7feddbfb0] Beginning proxy /api/v1/proxy/nodes/172.18.14.146:10250/metrics...
I0412 20:16:53.551249   17747 node_auth.go:142] Node request attributes: namespace=, user=system:openshift-node-admin, groups=[system:node-admins system:authenticated], attrs=authorizer.DefaultAuthorizationAttributes{Verb:"get", APIVersion:"v1", APIGroup:"", Resource:"nodes/metrics", ResourceName:"172.18.14.146", RequestAttributes:interface {}(nil), NonResourceURL:false, URL:"/metrics"}
I0412 20:16:53.552640   17747 authorizer.go:74] allowed=true, reason=allowed by cluster rule
I0412 20:16:53.753628   17747 kubelet.go:2443] SyncLoop (SYNC): 1 pods; router-1-deploy_default(b1b5b45d-010b-11e6-a3fa-0e73eddc23d3)
I0412 20:16:53.846903   17747 docker.go:363] Docker Container: /happy_mietner is not managed by kubelet.
I0412 20:16:53.847104   17747 generic.go:138] GenericPLEG: fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/61f1cf7398d7015526ab0a9e5269050a916220ffce386b5367ff3b385b80b75f: running -> exited
I0412 20:16:53.847124   17747 generic.go:138] GenericPLEG: fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c: running -> exited
I0412 20:16:53.853946   17747 manager.go:324] Container inspect result: {ID:61f1cf7398d7015526ab0a9e5269050a916220ffce386b5367ff3b385b80b75f Created:2016-04-13 00:16:49.638497489 +0000 UTC Path:/entrypoint.sh Args:[redis-server] Config:0xc825a7e1c0 State:{Running:false Paused:false Restarting:false OOMKilled:false Pid:0 ExitCode:0 Error: StartedAt:2016-04-13 00:16:50.437314304 +0000 UTC FinishedAt:2016-04-13 00:16:53.011164634 +0000 UTC} Image:0ff3a40a6246082d03676dd481f42ba65100be7ab823385cda64e6b2bfbe9dd5 Node:<nil> NetworkSettings:0xc82551fc00 SysInitPath: ResolvConfPath:/var/lib/docker/containers/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c/resolv.conf HostnamePath:/var/lib/docker/containers/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c/hostname HostsPath:/mnt/openshift-xfs-vol-dir/pods/fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/etc-hosts LogPath:/var/lib/docker/containers/61f1cf7398d7015526ab0a9e5269050a916220ffce386b5367ff3b385b80b75f/61f1cf7398d7015526ab0a9e5269050a916220ffce386b5367ff3b385b80b75f-json.log Name:/k8s_redis-master.4ede9f18_redis-master-sj62r_e2e-tests-kubectl-2blzh_fa0a6f90-010c-11e6-a3fa-0e73eddc23d3_348bd344 Driver:devicemapper Mounts:[{Name:660b0fc062519517250936a0b3a5e2495317da2e51c05f4d4f690623ac4c560f Source:/var/lib/docker/volumes/660b0fc062519517250936a0b3a5e2495317da2e51c05f4d4f690623ac4c560f/_data Destination:/data Driver:local Mode: RW:true} {Name: Source:/mnt/openshift-xfs-vol-dir/pods/fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/volumes/kubernetes.io~secret/default-token-2c4n6 Destination:/var/run/secrets/kubernetes.io/serviceaccount Driver: Mode:ro,Z RW:false} {Name: Source:/mnt/openshift-xfs-vol-dir/pods/fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/etc-hosts Destination:/etc/hosts Driver: Mode: RW:true} {Name: Source:/mnt/openshift-xfs-vol-dir/pods/fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/containers/redis-master/348bd344 Destination:/dev/termination-log Driver: Mode: RW:true}] Volumes:map[] VolumesRW:map[] 
HostConfig:0xc82c4e4000 ExecIDs:[] RestartCount:0 AppArmorProfile:}
I0412 20:16:53.855230   17747 manager.go:324] Container inspect result: {ID:a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c Created:2016-04-13 00:16:32.894507845 +0000 UTC Path:/pod Args:[] Config:0xc825a7e700 State:{Running:false Paused:false Restarting:false OOMKilled:false Pid:0 ExitCode:0 Error: StartedAt:2016-04-13 00:16:33.833016794 +0000 UTC FinishedAt:2016-04-13 00:16:53.138682718 +0000 UTC} Image:0ae3e12637ff597995efe913b5827724dd0d12e7806e706eb30dc4a23adfcb25 Node:<nil> NetworkSettings:0xc82551ff00 SysInitPath: ResolvConfPath:/var/lib/docker/containers/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c/resolv.conf HostnamePath:/var/lib/docker/containers/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c/hostname HostsPath:/var/lib/docker/containers/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c/hosts LogPath:/var/lib/docker/containers/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c/a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c-json.log Name:/k8s_POD.4263037f_redis-master-sj62r_e2e-tests-kubectl-2blzh_fa0a6f90-010c-11e6-a3fa-0e73eddc23d3_e089f97c Driver:devicemapper Mounts:[] Volumes:map[] VolumesRW:map[] HostConfig:0xc82c590b00 ExecIDs:[] RestartCount:0 AppArmorProfile:}
I0412 20:16:53.855351   17747 generic.go:299] PLEG: Write status for redis-master-sj62r/e2e-tests-kubectl-2blzh: &{ID:fa0a6f90-010c-11e6-a3fa-0e73eddc23d3 Name:redis-master-sj62r Namespace:e2e-tests-kubectl-2blzh IP: ContainerStatuses:[0xc8254d8d20 0xc8254d9340]} (err: <nil>)
I0412 20:16:53.855407   17747 kubelet.go:2430] SyncLoop (PLEG): "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)", event: &pleg.PodLifecycleEvent{ID:"fa0a6f90-010c-11e6-a3fa-0e73eddc23d3", Type:"ContainerDied", Data:"61f1cf7398d7015526ab0a9e5269050a916220ffce386b5367ff3b385b80b75f"}
I0412 20:16:53.858494   17747 kubelet.go:2430] SyncLoop (PLEG): "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)", event: &pleg.PodLifecycleEvent{ID:"fa0a6f90-010c-11e6-a3fa-0e73eddc23d3", Type:"ContainerDied", Data:"a2a4b0a54c1a2a7e299cc86878a95234d97b270f87aa936a90fa897d9b6d245c"}
I0412 20:16:53.858606   17747 kubelet.go:3258] Generating status for "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)"
I0412 20:16:53.870151   17747 manager.go:386] Status for pod "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)" updated successfully: {status:{Phase:Running Conditions:[{Type:Ready Status:False LastProbeTime:{Time:{sec:0 nsec:0 loc:0x50f2920}} LastTransitionTime:{Time:{sec:63596103413 nsec:0 loc:0x5b87f20}} Reason:ContainersNotReady Message:containers with unready status: [redis-master]}] Message: Reason: HostIP:172.18.14.146 PodIP: StartTime:0xc82224d6e0 ContainerStatuses:[{Name:redis-master State:{Waiting:<nil> Running:<nil> Terminated:0xc82e7109a0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:redis ImageID:docker://0ff3a40a6246082d03676dd481f42ba65100be7ab823385cda64e6b2bfbe9dd5 ContainerID:docker://61f1cf7398d7015526ab0a9e5269050a916220ffce386b5367ff3b385b80b75f}]} version:4 podName:redis-master-sj62r podNamespace:e2e-tests-kubectl-2blzh}
I0412 20:16:53.872604   17747 server.go:1100] GET /metrics: (321.50787ms) 200 [[extended.test/v1.2.0 (linux/amd64) kubernetes/eb3e7a8] 172.18.14.146:54398]
I0412 20:16:53.873149   17747 config.go:269] Setting pods for source api
I0412 20:16:53.874066   17747 kubelet.go:2417] SyncLoop (RECONCILE, "api"): "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)"
I0412 20:16:53.874581   17747 controller.go:346] Pod redis-master-sj62r updated.
I0412 20:16:53.874667   17747 controller.go:277] No daemon sets found for pod redis-master-sj62r, daemon set controller will avoid syncing
I0412 20:16:53.875232   17747 replication_controller.go:238] No controllers found for pod redis-master-sj62r, replication manager will avoid syncing
I0412 20:16:53.875871   17747 controller.go:154] No jobs found for pod redis-master-sj62r, job controller will avoid syncing
I0412 20:16:53.877881   17747 proxy.go:180] [6e5fb0c7feddbfb0] Proxy /api/v1/proxy/nodes/172.18.14.146:10250/metrics finished 326.938694ms.
I0412 20:16:53.942730   17747 resource_quota_controller.go:148] Resource quota controller queued all resource quota for full calculation of usage
I0412 20:16:53.942839   17747 resource_quota_controller.go:148] Resource quota controller queued all resource quota for full calculation of usage
I0412 20:16:53.976813   17747 manager.go:400] Pod "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)" fully terminated and removed from etcd
I0412 20:16:53.977382   17747 controller.go:346] Pod redis-master-sj62r updated.
I0412 20:16:53.977429   17747 controller.go:277] No daemon sets found for pod redis-master-sj62r, daemon set controller will avoid syncing
I0412 20:16:53.977446   17747 controller.go:382] Pod redis-master-sj62r deleted.
I0412 20:16:53.977469   17747 controller.go:277] No daemon sets found for pod redis-master-sj62r, daemon set controller will avoid syncing
I0412 20:16:53.977512   17747 config.go:269] Setting pods for source api
I0412 20:16:53.977905   17747 replication_controller.go:238] No controllers found for pod redis-master-sj62r, replication manager will avoid syncing
I0412 20:16:53.977923   17747 replication_controller.go:366] Pod e2e-tests-kubectl-2blzh/redis-master-sj62r deleted through k8s.io/kubernetes/pkg/controller/replication.(*ReplicationManager).(k8s.io/kubernetes/pkg/controller/replication.deletePod)-fm, timestamp 2016-04-12 20:16:53 -0400 EDT, labels map[app:redis role:master].
I0412 20:16:53.977967   17747 replication_controller.go:238] No controllers found for pod redis-master-sj62r, replication manager will avoid syncing
I0412 20:16:53.978380   17747 config.go:269] Setting pods for source api
I0412 20:16:53.979096   17747 kubelet.go:2411] SyncLoop (UPDATE, "api"): "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)"
I0412 20:16:53.979134   17747 kubelet.go:2414] SyncLoop (REMOVE, "api"): "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)"
I0412 20:16:53.979169   17747 kubelet.go:2558] Failed to delete pod "redis-master-sj62r_e2e-tests-kubectl-2blzh(fa0a6f90-010c-11e6-a3fa-0e73eddc23d3)", err: pod not found
I0412 20:16:53.979433   17747 controller.go:154] No jobs found for pod redis-master-sj62r, job controller will avoid syncing
I0412 20:16:53.979451   17747 controller.go:154] No jobs found for pod redis-master-sj62r, job controller will avoid syncing
I0412 20:16:54.031635   17747 namespace_controller_utils.go:317] namespace controller - deleteAllContent - namespace: extended-test-local-quota-wjr2q-gsocr, gvrs: [{autoscaling v1 horizontalpodautoscalers} {batch v1 jobs} {extensions v1beta1 daemonsets} {extensions v1beta1 deployments} {extensions v1beta1 horizontalpodautoscalers} {extensions v1beta1 ingresses} {extensions v1beta1 jobs} {extensions v1beta1 replicasets} {extensions v1beta1 replicationcontrollers} { v1 bindings} { v1 configmaps} { v1 endpoints} { v1 events} { v1 limitranges} { v1 persistentvolumeclaims} { v1 pods} { v1 podtemplates} { v1 replicationcontrollers} { v1 resourcequotas} { v1 secrets} { v1 serviceaccounts} { v1 services}]
E0412 20:16:54.137193   17747 create_dockercfg_secrets.go:116] secrets "builder-token-esnbq" is forbidden: Unable to create new content in namespace extended-test-local-quota-wjr2q-gsocr because it is being terminated.
E0412 20:16:54.139962   17747 tokens_controller.go:171] secrets "builder-token-k8spb" is forbidden: Unable to create new content in namespace extended-test-local-quota-wjr2q-gsocr because it is being terminated.
E0412 20:16:54.158497   17747 create_dockercfg_secrets.go:116] secrets "default-token-3sgml" is forbidden: Unable to create new content in namespace extended-test-local-quota-wjr2q-gsocr because it is being terminated.
E0412 20:16:54.168168   17747 create_dockercfg_secrets.go:116] secrets "deployer-token-m1g7j" is forbidden: Unable to create new content in namespace extended-test-local-quota-wjr2q-gsocr because it is being terminated.
I0412 20:16:54.172242   17747 tokens_controller.go:189] Deleting secret extended-test-local-quota-wjr2q-gsocr/default-token-bablx because service account default was deleted
E0412 20:16:54.174866   17747 tokens_controller.go:191] Error deleting secret extended-test-local-quota-wjr2q-gsocr/default-token-bablx: secrets "default-token-bablx" not found
I0412 20:16:54.174881   17747 tokens_controller.go:189] Deleting secret extended-test-local-quota-wjr2q-gsocr/default-token-zys48 because service account default was deleted
E0412 20:16:54.176086   17747 tokens_controller.go:191] Error deleting secret extended-test-local-quota-wjr2q-gsocr/default-token-zys48: secrets "default-token-zys48" not found
I0412 20:16:54.176117   17747 tokens_controller.go:189] Deleting secret extended-test-local-quota-wjr2q-gsocr/deployer-token-rp4yr because service account deployer was deleted
E0412 20:16:54.177310   17747 tokens_controller.go:191] Error deleting secret extended-test-local-quota-wjr2q-gsocr/deployer-token-rp4yr: secrets "deployer-token-rp4yr" not found
I0412 20:16:54.177324   17747 tokens_controller.go:189] Deleting secret extended-test-local-quota-wjr2q-gsocr/deployer-token-rxmyj because service account deployer was deleted
E0412 20:16:54.178526   17747 tokens_controller.go:191] Error deleting secret extended-test-local-quota-wjr2q-gsocr/deployer-token-rxmyj: secrets "deployer-token-rxmyj" not found
I0412 20:16:54.181050   17747 namespace_controller_utils.go:328] namespace controller - deleteAllContent - namespace: extended-test-local-quota-wjr2q-gsocr, estimate: 0
I0412 20:16:54.193595   17747 namespace_controller.go:162] Finished syncing namespace "extended-test-local-quota-wjr2q-gsocr" (864ns)
I0412 20:16:54.196325   17747 namespace_controller.go:162] Finished syncing namespace "extended-test-local-quota-wjr2q-gsocr" (441ns)
E0412 20:16:54.267163   17747 tokens_controller.go:259] serviceaccounts "builder" not found
I0412 20:16:54.570052   17747 reflector.go:366] pkg/kubelet/config/apiserver.go:43: Watch close - *api.Pod total 626 items received
I0412 20:16:54.744114   17747 kubelet.go:2443] SyncLoop (SYNC): 1 pods; router-1-deploy_default(b1b5b45d-010b-11e6-a3fa-0e73eddc23d3)
I0412 20:16:54.744152   17747 kubelet.go:2465] SyncLoop (housekeeping)
I0412 20:16:54.746954   17747 volumes.go:234] Making a volume.Cleaner for volume kubernetes.io~empty-dir/registry-storage of pod af2d09b6-010b-11e6-a3fa-0e73eddc23d3
I0412 20:16:54.746988   17747 volumes.go:234] Making a volume.Cleaner for volume kubernetes.io~secret/registry-token-x92gp of pod af2d09b6-010b-11e6-a3fa-0e73eddc23d3
I0412 20:16:54.747037   17747 volumes.go:234] Making a volume.Cleaner for volume kubernetes.io~secret/deployer-token-y4rzw of pod b1b5b45d-010b-11e6-a3fa-0e73eddc23d3
I0412 20:16:54.747113   17747 volumes.go:234] Making a volume.Cleaner for volume kubernetes.io~secret/server-certificate of pod bb6faa6c-010b-11e6-a3fa-0e73eddc23d3
I0412 20:16:54.747130   17747 volumes.go:234] Making a volume.Cleaner for volume kubernetes.io~secret/router-token-j76va of pod bb6faa6c-010b-11e6-a3fa-0e73eddc23d3
I0412 20:16:54.747174   17747 volumes.go:234] Making a volume.Cleaner for volume kubernetes.io~secret/default-token-gw4fh of pod da6fa059-010c-11e6-a3fa-0e73eddc23d3
I0412 20:16:54.747217   17747 volumes.go:234] Making a volume.Cleaner for volume kubernetes.io~secret/default-token-2c4n6 of pod fa0a6f90-010c-11e6-a3fa-0e73eddc23d3
W0412 20:16:54.747243   17747 kubelet.go:1995] Orphaned volume "fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/default-token-2c4n6" found, tearing down volume
I0412 20:16:54.747724   17747 secret.go:223] Tearing down volume default-token-2c4n6 for pod fa0a6f90-010c-11e6-a3fa-0e73eddc23d3 at /mnt/openshift-xfs-vol-dir/pods/fa0a6f90-010c-11e6-a3fa-0e73eddc23d3/volumes/kubernetes.io~secret/default-token-2c4n6
I0412 20:16:54.757986   17747 kubelet.go:1921] Orphaned pod "fa0a6f90-010c-11e6-a3fa-0e73eddc23d3" found, removing

@smarterclayton (Contributor, Author)

I think this may just be a general flaw with our e2e tests: they need to use something similar to https://github.com/openshift/origin/blob/master/test/util/policy.go#L20 in all e2e tests, to ensure permissions converge before use.

@smarterclayton smarterclayton changed the title FSGroup test flake getting groups Extended test flake due to user not having permissions Apr 18, 2016
@smarterclayton smarterclayton assigned deads2k and unassigned dgoodwin Apr 20, 2016
@smarterclayton (Contributor, Author)

Nm, put a fix up
