
support ARM and AMD for calico #1282

Merged · 1 commit · Nov 17, 2020

Conversation

jayanthvn
Contributor

What type of PR is this?
enhancement

Which issue does this PR fix:
#1218

What does this PR do / Why do we need it:
Update the calico-node and calico-typha docker images to support both ARM and AMD instances
Update the autoscaler image - https://github.com/kubernetes-sigs/cluster-proportional-autoscaler/releases/tag/1.8.3

If an issue # is not available please add repro steps and logs from IPAMD/CNI showing the issue:

Testing done on this change:
Yes

AMD instance

kubectl describe node | grep Arch
 Architecture:               amd64

kgpsys | grep calico
calico-node-njjl2                                     1/1     Running   0          97m    192.168.7.155   ip-192-168-7-155.us-west-2.compute.internal   <none>           <none>
calico-typha-97c66d8b9-kdrdw                          1/1     Running   0          97m    192.168.7.155   ip-192-168-7-155.us-west-2.compute.internal   <none>           <none>
calico-typha-horizontal-autoscaler-6df548d5d5-5s9pp   1/1     Running   0          97m    192.168.29.9    ip-192-168-7-155.us-west-2.compute.internal   <none>           <none>

kubectl describe pod calico-node-njjl2 -n kube-system | grep Image
    Image:          calico/node:v3.15.1
    Image ID:       docker-pullable://calico/node@sha256:b386769a293d180cb6ee208c8594030128a0810b286a93ae897a231ef247afa8

kubectl describe pod calico-typha-97c66d8b9-kdrdw -n kube-system | grep Image
    Image:          calico/typha:v3.15.1
    Image ID:       docker-pullable://calico/typha@sha256:50830f75be50bb5a835c8705bce8745513374ce1cf1714af9b9321412f9b516f

kubectl describe pod calico-typha-horizontal-autoscaler-6df548d5d5-5s9pp -n kube-system | grep Image
    Image:         k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3
    Image ID:      docker-pullable://k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7

ARM instance

kubectl describe node | grep Arch
 Architecture:               arm64

kgpsys | grep calico
calico-node-26qkn                                     1/1     Running   0          6h9m   192.168.46.224   ip-192-168-46-224.us-west-2.compute.internal   <none>           <none>
calico-typha-6fb6797dc7-mpjtp                         1/1     Running   0          6h9m   192.168.46.224   ip-192-168-46-224.us-west-2.compute.internal   <none>           <none>
calico-typha-horizontal-autoscaler-6f6bd5b5df-ffhk5   1/1     Running   0          6h9m   192.168.42.169   ip-192-168-46-224.us-west-2.compute.internal   <none>           <none>

kubectl describe pod calico-node-26qkn -n kube-system | grep Image
    Image:          calico/node:v3.15.1
    Image ID:       docker-pullable://calico/node@sha256:b386769a293d180cb6ee208c8594030128a0810b286a93ae897a231ef247afa8

 kubectl describe pod calico-typha-6fb6797dc7-mpjtp -n kube-system | grep Image
    Image:          calico/typha:v3.15.1
    Image ID:       docker-pullable://calico/typha@sha256:50830f75be50bb5a835c8705bce8745513374ce1cf1714af9b9321412f9b516f

kubectl describe pod calico-typha-horizontal-autoscaler-6f6bd5b5df-ffhk5 -n kube-system | grep Image
    Image:         k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3
    Image ID:      docker-pullable://k8s.gcr.io/cpa/cluster-proportional-autoscaler@sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7
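Note the Image IDs in the two runs above: each component resolves to the same digest on the amd64 and arm64 nodes, which is what a correctly published multi-arch tag should do. A quick sanity check over the digests copied from the output above:

```python
# Digests observed on the AMD (amd64) and ARM (arm64) nodes, copied from
# the `kubectl describe pod ... | grep Image` output above.
amd64_digests = {
    "calico/node": "sha256:b386769a293d180cb6ee208c8594030128a0810b286a93ae897a231ef247afa8",
    "calico/typha": "sha256:50830f75be50bb5a835c8705bce8745513374ce1cf1714af9b9321412f9b516f",
    "cluster-proportional-autoscaler": "sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7",
}
arm64_digests = {
    "calico/node": "sha256:b386769a293d180cb6ee208c8594030128a0810b286a93ae897a231ef247afa8",
    "calico/typha": "sha256:50830f75be50bb5a835c8705bce8745513374ce1cf1714af9b9321412f9b516f",
    "cluster-proportional-autoscaler": "sha256:67640771ad9fc56f109d5b01e020f0c858e7c890bb0eb15ba0ebd325df3285e7",
}

# A multi-arch tag points at one manifest list, so the repo digest reported
# by the kubelet should be identical on every architecture.
assert amd64_digests == arm64_digests
print("digests match across architectures")
```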

Automation added to e2e:

No
Will this break upgrades or downgrades? Has updating a running cluster been tested?:
No

Does this change require updates to the CNI daemonset config files to work?:

No

Does this PR introduce any user-facing change?:

ARM support for calico

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@jayanthvn
Contributor Author

/cc @caseydavenport can you please take a look?

@jayanthvn jayanthvn requested review from haouc and achevuru November 6, 2020 00:44
@@ -32,7 +32,7 @@ spec:
       # container programs network policy and routes on each
       # host.
       - name: calico-node
-        image: quay.io/calico/node:v3.16.2
+        image: calico/node:v3.16.2
Contributor

You will probably want to explicitly call out the registry here. e.g., docker.io/calico/node:v3.16.2

Contributor Author

Yes, will do that.

@gfvirga commented on Apr 23, 2021

This is awesome! I spent an hour trying to figure out why it didn't work on Graviton. Then I saw that Docker Hub listed both arm64 and x86_64 architectures for the same tag while quay.io didn't, and decided to try it. I have a cluster with both architectures and wanted the daemonset to run on all nodes.

I got it working with `helm upgrade --install --set calico.node.image="calico/node" --namespace kube-system aws-calico eks/aws-calico`.

I am going to put a PR for the helm charts!
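The behavior described here can be verified with `docker manifest inspect <image>:<tag>`, which returns the tag's manifest list with one entry per supported platform. A minimal sketch of reading the architectures out of such a list (the JSON below is an abbreviated, illustrative sample shaped like that command's output, not real registry data):

```python
import json

# Abbreviated, illustrative manifest-list JSON shaped like the output of
# `docker manifest inspect calico/node:v3.15.1` (hypothetical sample data).
manifest_list = json.loads("""
{
  "schemaVersion": 2,
  "manifests": [
    {"platform": {"architecture": "amd64", "os": "linux"}},
    {"platform": {"architecture": "arm64", "os": "linux"}}
  ]
}
""")

# A multi-arch tag lists every supported architecture in its manifest list.
archs = sorted(m["platform"]["architecture"] for m in manifest_list["manifests"])
print(archs)  # → ['amd64', 'arm64']
```

A single-arch tag (as on quay.io here) would list only one platform, which is why the daemonset failed to schedule working pods on Graviton nodes.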


Just ran into this exact same issue. :(

@@ -707,7 +707,7 @@ spec:
       nodeSelector:
         beta.kubernetes.io/os: linux
       containers:
-      - image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1
+      - image: k8s.gcr.io/cpa/cluster-proportional-autoscaler:1.8.3
Contributor

I haven't tried this version myself, but it should probably be fine.

@caseydavenport
Contributor

A couple minor comments from me but this LGTM.

@jayanthvn
Contributor Author

> A couple minor comments from me but this LGTM.

Thanks for your time reviewing the PR; I have made the change you suggested and will merge the PR.
