Handle scheduled events immediately in IMDS mode, the same as queue processor mode #661

Merged 2 commits on Jul 21, 2022

Conversation

@snay2 (Contributor) commented on Jul 19, 2022

Issue #, if available: #361

Description of changes:
Update IMDS mode to handle scheduled events immediately, the same way queue processor mode currently works.

We could probably remove the date parsing altogether and get rid of EndTime (lines 90-101), since the SQS version does not set it at all and it does not appear to be used. But in the interest of making a small change, I'm leaving that code for now.
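To illustrate the shape of the change, here is a minimal Go sketch, not the actual diff; drainEvent and newDrainEvent are hypothetical names, not identifiers from the NTH source:

package main

import (
	"fmt"
	"time"
)

// drainEvent is a hypothetical stand-in for NTH's internal event type.
type drainEvent struct {
	EventID   string
	StartTime time.Time
}

// newDrainEvent builds a drain event from an IMDS scheduled event whose
// maintenance window opens at notBefore.
func newDrainEvent(eventID string, notBefore time.Time) drainEvent {
	// Previously, IMDS mode set StartTime from the parsed NotBefore, so the
	// drain waited until the scheduled window opened. Queue processor mode
	// starts right away, and this change makes IMDS mode do the same:
	return drainEvent{EventID: eventID, StartTime: time.Now()}
}

func main() {
	notBefore, _ := time.Parse(time.RFC3339, "2022-07-19T22:25:00Z")
	e := newDrainEvent("instance-event-example", notBefore)
	fmt.Printf("draining at %s, not waiting until %s\n",
		e.StartTime.Format(time.RFC3339), notBefore.Format(time.RFC3339))
}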

Testing:

Inside the NTH repo folder on my local machine, I ran the following commands:

# Create the kind cluster with two worker nodes
kind create cluster --config test/k8s-local-cluster-test/kind-three-node-cluster.yaml

# Install AEMM with a scheduled event starting a few hours in the future
helm template amazon-ec2-metadata-mock eks/amazon-ec2-metadata-mock \
  --set aemm.events.code="instance-reboot" \
  --set aemm.events.notAfter="2022-07-19T23:00:00Z" \
  --set aemm.events.notBefore="2022-07-19T22:25:00Z" \
  --set aemm.events.notBeforeDeadline="2022-07-19T22:30:00Z" \
  --set aemm.events.state="active" \
| kubectl apply -f -

# Verify that AEMM is serving the scheduled event with the correct times per our configuration
kubectl port-forward pod/<AEMM_pod_name> 1338:1338
open http://localhost:1338/latest/meta-data/events/maintenance/scheduled
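# The response should have roughly this shape (field names follow the EC2
# instance metadata scheduled-events format; the Description and EventId
# values below are illustrative, and the times echo the configuration above):
#
# [
#   {
#     "Code": "instance-reboot",
#     "Description": "The instance is scheduled for a reboot",
#     "EventId": "instance-event-0123456789abcdef0",
#     "NotAfter": "19 Jul 2022 23:00:00 GMT",
#     "NotBefore": "19 Jul 2022 22:25:00 GMT",
#     "NotBeforeDeadline": "19 Jul 2022 22:30:00 GMT",
#     "State": "active"
#   }
# ]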

# Install the latest release of NTH, instruct it to use AEMM, and have it listen for scheduled events but not Spot ITNs
helm template aws-node-termination-handler eks/aws-node-termination-handler \
  --namespace kube-system \
  --set instanceMetadataURL="http://amazon-ec2-metadata-mock-service.default.svc.cluster.local:1338" \
  --set enableSpotInterruptionDraining="false" \
  --set enableScheduledEventDraining="true" \
| kubectl apply -f -

# Verify that NTH received the scheduled event, but that it's in the future
kubectl logs <NTH_pod_name> --namespace kube-system

# Wait a few minutes and verify that no nodes have been cordoned or drained
kubectl get nodes

# Build NTH with the changes in this PR
make build-binaries

# Find the repo name and tag of the image just built
docker images

# Import that image into the kind cluster
kind load docker-image bin-build:v1.16.4-10-g8297c93-linux-amd64

# Install the local build of NTH with the same options as before
helm template aws-node-termination-handler ./config/helm/aws-node-termination-handler \
  --namespace kube-system \
  --set image.repository="bin-build" \
  --set image.tag="v1.16.4-10-g8297c93-linux-amd64" \
  --set instanceMetadataURL="http://amazon-ec2-metadata-mock-service.default.svc.cluster.local:1338" \
  --set enableSpotInterruptionDraining="false" \
  --set enableScheduledEventDraining="true" \
| kubectl apply -f -

# Verify that NTH received the scheduled event, and that its StartTime is now
kubectl logs <NTH_pod_name> --namespace kube-system

# Wait a few minutes and verify that the worker nodes get cordoned and drained
# This will show SchedulingDisabled, and k9s will show pods getting restarted
kubectl get nodes
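# For reference, the node list at that point looks roughly like this
# (kind's default node names for a three-node cluster; the AGE and VERSION
# values are illustrative):
#
# NAME                 STATUS                     ROLES           AGE   VERSION
# kind-control-plane   Ready                      control-plane   30m   v1.24.0
# kind-worker          Ready,SchedulingDisabled   <none>          30m   v1.24.0
# kind-worker2         Ready,SchedulingDisabled   <none>          30m   v1.24.0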

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@snay2 requested a review from a team as a code owner on July 19, 2022 at 15:21
@snay2 (Contributor, Author) commented on Jul 19, 2022

Oops, forgot to update the unit tests. Stand by...

@pdk27 (Contributor) commented on Jul 19, 2022

LGTM 👍

@pdk27 (Contributor) left a review comment

🚀
