Describe the bug
Some k8s workload types are marked as replicaset when their actual value is deployment.
How to reproduce?
It's hard to reproduce this in a real environment, but here is an example that shows why it happens.
I added some debug logging and manual sleeps in collector/metadata/kubernetes/pod_watch.go and replicaset_watch.go.
Then I manually stopped the container of the pod named bookdemo-d796bbdbf-jm8cv and checked what happens in k8s_watcher.
The result below shows that when the ReplicaSet and pod updates happen concurrently, the pod may fail to obtain the deployment information corresponding to its ReplicaSet.
```
Debug: POD UPDATE, Name: bookdemo-d796bbdbf-jm8cv
Debug: POD Delete in cache, Name: bookdemo-d796bbdbf-jm8cv
Debug: POD Add in cache, Name: bookdemo-d796bbdbf-jm8cv
Debug: POD Add Manual sleep for 2s if pod.Name contains bookdemo
Debug: RS Update, Name: bookdemo-d796bbdbf
Debug: RS Delete in cache, Name: bookdemo-d796bbdbf
Debug: RS Add Manual sleep for 5s if rs.Name contains bookdemo
Debug: POD Add Failed to find controller for rs bookdemo-d796bbdbf
Debug: POD UPDATE, Name: bookdemo-d796bbdbf-jm8cv
Debug: POD Delete in cache, Name: bookdemo-d796bbdbf-jm8cv
Debug: POD Add in cache, Name: bookdemo-d796bbdbf-jm8cv
Debug: POD Add Manual sleep for 2s if pod.Name contains bookdemo
Debug: POD Add Failed to find controller for rs bookdemo-d796bbdbf
Debug: RS Add in cache, Name: bookdemo-d796bbdbf
Debug: RS Update, Name: bookdemo-d796bbdbf
Debug: RS Delete, Name: bookdemo-d796bbdbf
Debug: RS Add Manual sleep for 5s if rs.Name contains bookdemo
Debug: RS Add, Name: bookdemo-d796bbdbf
```
And this is what I got in the '/metrics' API (screenshot omitted).
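For context, here is a minimal Go sketch of the window those logs expose. All names here (rsOwners, onReplicaSetUpdate, onPodAdd) are illustrative assumptions, not kindling's actual identifiers: the RS watcher updates its cache by deleting the old entry and re-adding it, and a pod Add handler that runs in between cannot find the pod's owner.

```go
package main

import (
	"fmt"
	"time"
)

// rsOwners maps a ReplicaSet name to its owning Deployment.
// Hypothetical stand-in for the RS watcher's cache; the map is
// deliberately left unsynchronized to illustrate the race.
var rsOwners = map[string]string{"bookdemo-d796bbdbf": "bookdemo"}

// onReplicaSetUpdate mimics the delete-then-add cache update,
// with the manual sleep that widens the race window.
func onReplicaSetUpdate(rs, deploy string) {
	delete(rsOwners, rs)        // "RS Delete in cache"
	time.Sleep(5 * time.Second) // "RS Add Manual sleep for 5s"
	rsOwners[rs] = deploy       // "RS Add in cache"
}

// onPodAdd mimics the pod watcher resolving the pod's controller.
func onPodAdd(rs string) {
	if deploy, ok := rsOwners[rs]; ok {
		fmt.Println("workload_kind=deployment workload_name=" + deploy)
		return
	}
	// This branch produces the "Failed to find controller" line above.
	fmt.Println("failed to find controller for rs " + rs)
}

func main() {
	go onReplicaSetUpdate("bookdemo-d796bbdbf", "bookdemo")
	time.Sleep(2 * time.Second) // the pod event lands inside the window
	onPodAdd("bookdemo-d796bbdbf")
}
```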
What did you expect to see?
Whatever happens, a pod managed by a deployment should be marked as deployment in the workload_kind field, and the workload_name field should contain the deployment's name.
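For reference, a minimal sketch of that expected resolution. The types and names are assumptions, not kindling's API: ownerRef mirrors only the Kind/Name fields of Kubernetes' metav1.OwnerReference, and the rsOwner callback stands in for a lookup in the RS watcher's cache.

```go
package main

import (
	"fmt"
	"strings"
)

// ownerRef mirrors the Kind/Name fields of metav1.OwnerReference.
type ownerRef struct {
	Kind string
	Name string
}

// workloadOf resolves a pod's workload one level past its direct
// controller: a pod owned by a ReplicaSet that is itself owned by a
// Deployment should report the Deployment.
func workloadOf(podOwner ownerRef, rsOwner func(string) (ownerRef, bool)) (kind, name string) {
	if podOwner.Kind == "ReplicaSet" {
		if dep, ok := rsOwner(podOwner.Name); ok && dep.Kind == "Deployment" {
			return "deployment", dep.Name
		}
	}
	// Falling through here is exactly the buggy outcome: the pod gets
	// labeled with its direct owner (replicaset) instead of deployment.
	return strings.ToLower(podOwner.Kind), podOwner.Name
}

func main() {
	cache := map[string]ownerRef{
		"bookdemo-d796bbdbf": {Kind: "Deployment", Name: "bookdemo"},
	}
	lookup := func(rs string) (ownerRef, bool) { o, ok := cache[rs]; return o, ok }
	fmt.Println(workloadOf(ownerRef{Kind: "ReplicaSet", Name: "bookdemo-d796bbdbf"}, lookup))
	// Output: deployment bookdemo
}
```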
* fix: add mux while searching for replicaSet's Owner
Signed-off-by: niejiangang <niejiangang@harmonycloud.cn>
* test: add a testcase named OnAddPodWhileReplicaSetUpdating
link #229
Signed-off-by: niejiangang <niejiangang@harmonycloud.cn>
* test: fix a simple mistake in the test case
Signed-off-by: niejiangang <niejiangang@harmonycloud.cn>
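The first commit above guards the owner search with a lock. Below is a minimal sketch of that idea, again with assumed names (rsCache, update, ownerOf) rather than the exact kindling code: holding the lock across the whole delete-then-add update means a concurrent reader sees either the old entry or the new one, never the empty gap in between.

```go
package main

import (
	"fmt"
	"sync"
)

// rsCache is an illustrative stand-in for the ReplicaSet watcher's
// cache; the real fix lives in collector/metadata/kubernetes.
type rsCache struct {
	mu     sync.RWMutex
	owners map[string]string // ReplicaSet name -> Deployment name
}

// update holds the lock across the whole delete-then-add step, so a
// concurrent owner search never observes the empty window.
func (c *rsCache) update(rs, deploy string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.owners, rs)
	c.owners[rs] = deploy
}

// ownerOf is the guarded owner search from the pod watcher's side.
func (c *rsCache) ownerOf(rs string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	deploy, ok := c.owners[rs]
	return deploy, ok
}

func main() {
	c := &rsCache{owners: map[string]string{}}
	c.update("bookdemo-d796bbdbf", "bookdemo")
	if deploy, ok := c.ownerOf("bookdemo-d796bbdbf"); ok {
		fmt.Println("workload_name=" + deploy)
	}
}
```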