
CPU-Manager-For-Kubernetes fails to launch when using kubeconfig multus #14

Closed
lmdaly opened this issue Aug 3, 2017 · 2 comments
lmdaly commented Aug 3, 2017

I am trying to use CPU-Manager-For-Kubernetes (CMK) with Multus kubeconfig & TPR; however, the annotations are not being propagated to the pods created by CMK, and Multus errors out due to the missing annotations. Below is the output of CMK cluster init and Multus. CMK works when running with Multus delegates, and when deployed manually with annotations in all pods.

The CMK init-install-discover pods were not able to come up.
The following is the output while deploying CMK with the flannel network object.

$ kubectl get pods
NAME                                          READY     STATUS     RESTARTS   AGE
kcm-cluster-init-pod                          1/1       Running    0          10m
kcm-init-install-discover-pod-192.168.14.38   0/2       Init:0/1   0          10m

$ kubectl describe pods kcm-init-install-discover-pod-192.168.14.38
Name:           kcm-init-install-discover-pod-192.168.14.38
Namespace:      default
Node:           192.168.14.38/192.168.14.38
Start Time:     Fri, 21 Jul 2017 10:04:57 -0400
Labels:         <none>
Status:         Pending
IP:
Controllers:    <none>
Init Containers:
  init:
    Container ID:
    Image:              96.118.23.36:8089/kcm:v0.6.1
    Image ID:
    Port:
    Command:
      /bin/bash
      -c
    Args:
      /kcm/kcm.py init --num-dp-cores=2 --num-cp-cores=1
    State:              Waiting
      Reason:           PodInitializing
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /etc/kcm from kcm-conf-dir (rw)
      /host/proc from host-proc (ro)
      /opt/bin from kcm-install-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlbk8 (ro)
    Environment Variables:
      KCM_PROC_FS:      /host/proc
Containers:
  install:
    Container ID:
    Image:              96.118.23.36:8089/kcm:v0.6.1
    Image ID:
    Port:
    Command:
      /bin/bash
      -c
    Args:
      /kcm/kcm.py install
    State:              Waiting
      Reason:           PodInitializing
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /etc/kcm from kcm-conf-dir (rw)
      /host/proc from host-proc (ro)
      /opt/bin from kcm-install-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlbk8 (ro)
    Environment Variables:
      KCM_PROC_FS:      /host/proc
      NODE_NAME:         (v1:spec.nodeName)
  discover:
    Container ID:
    Image:              96.118.23.36:8089/kcm:v0.6.1
    Image ID:
    Port:
    Command:
      /bin/bash
      -c
    Args:
      /kcm/kcm.py discover
    State:              Waiting
      Reason:           PodInitializing
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /etc/kcm from kcm-conf-dir (rw)
      /host/proc from host-proc (ro)
      /opt/bin from kcm-install-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlbk8 (ro)
    Environment Variables:
      KCM_PROC_FS:      /host/proc
      NODE_NAME:         (v1:spec.nodeName)
Conditions:
  Type          Status
  Initialized   False
  Ready         False
  PodScheduled  True
Volumes:
  host-proc:
    Type:       HostPath (bare host directory volume)
    Path:       /proc
  kcm-conf-dir:
    Type:       HostPath (bare host directory volume)
    Path:       /etc/kcm
  kcm-install-dir:
    Type:       HostPath (bare host directory volume)
    Path:       /opt/bin
  default-token-dlbk8:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-dlbk8
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason          Message
  ---------     --------        -----   ----                    -------------   --------        ------          -------
  1h            1s              1832    {kubelet 192.168.14.38}                 Warning         FailedSync      Error syncing pod, skipping: failed to "SetupNetwork" for "kcm-init-install-discover-pod-192.168.14.38_default" with SetupNetworkError: "Failed to setup network for pod \"kcm-init-install-discover-pod-192.168.14.38_default(9451904f-6e1d-11e7-aa74-ecb1d7851f72)\" using network plugins \"cni\": Multus: Err in getting k8s network from pod: parsePodNetworkObject: pod annotation not having \"network\" as key, refer Multus README.md for the usage guide; Skipping pod"

$

The network pod annotation is specified in the kcm-cluster-init-pod pod spec. However, it does not appear to be inherited by kcm-init-install-discover-pod.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: kcm-cluster-init-pod
  annotations:
    "scheduler.alpha.kubernetes.io/tolerations": '[{"key":"kcm", "value":"true"}]'
    "networks": '[{ "name": "flannel-conf" }]'
  name: kcm-cluster-init-pod
spec:
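For illustration, the fix would amount to copying the network-related annotations from the cluster-init pod onto the pod specs it generates for each node. The sketch below is hypothetical, not CMK's actual code; the function name `propagate_annotations` and the key list `NETWORK_ANNOTATION_KEYS` are illustrative, assuming pods are built as plain dicts before being submitted to the API server.

```python
# Hypothetical sketch of annotation propagation: copy selected
# annotations (e.g. the Multus "networks" key) from the parent
# cluster-init pod spec into a child pod spec it creates.
# Names here are illustrative, not CMK's actual API.

NETWORK_ANNOTATION_KEYS = ("networks",)

def propagate_annotations(parent_pod, child_pod, keys=NETWORK_ANNOTATION_KEYS):
    """Copy selected metadata annotations from parent_pod to child_pod.

    Existing annotations on the child are left untouched.
    """
    parent_annotations = parent_pod.get("metadata", {}).get("annotations", {})
    child_metadata = child_pod.setdefault("metadata", {})
    child_annotations = child_metadata.setdefault("annotations", {})
    for key in keys:
        if key in parent_annotations and key not in child_annotations:
            child_annotations[key] = parent_annotations[key]
    return child_pod

parent = {
    "metadata": {
        "name": "kcm-cluster-init-pod",
        "annotations": {"networks": '[{ "name": "flannel-conf" }]'},
    }
}
child = {"metadata": {"name": "kcm-init-install-discover-pod"}}
propagate_annotations(parent, child)
print(child["metadata"]["annotations"]["networks"])
```

With the annotation copied over, Multus would find the `networks` key it requires when setting up networking for the generated pod.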
@rkamudhan (Member) commented:
@lmdaly Thanks for finding this bug / missing feature in Multus CNI for the DaemonSet used by CMK. I have worked on this bug and will upstream the fix today.

@rkamudhan (Member) commented:
The bug fix is merged; closing the issue.
