In our AWS EKS cluster, we deployed Karpenter to manage autoscaling, and it provisioned five nodes, a mix of spot and on-demand instances.
When deploying Datadog DaemonSets, we encountered errors indicating insufficient RAM and CPU on the existing nodes.
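To make the failure mode concrete, here is a minimal sketch (not Karpenter or kube-scheduler code; all node sizes and request values are hypothetical) of the fit check that rejects a DaemonSet pod when existing nodes have too little headroom:

```python
# Illustrative sketch: a pod only schedules onto a node if its requests fit
# into the node's remaining allocatable capacity. Numbers below are made up.

def fits(allocatable, requested, pod_requests):
    """Return True if pod_requests fit in the node's remaining capacity."""
    return all(
        requested.get(r, 0) + pod_requests.get(r, 0) <= allocatable.get(r, 0)
        for r in pod_requests
    )

# Hypothetical small node: ~2 vCPU / ~3.5 GiB allocatable, mostly consumed.
allocatable = {"cpu_m": 1930, "memory_mi": 3500}
requested = {"cpu_m": 1800, "memory_mi": 3200}

# Hypothetical Datadog agent requests: 200m CPU, 256Mi memory.
agent = {"cpu_m": 200, "memory_mi": 256}

print(fits(allocatable, requested, agent))  # False: not enough CPU headroom
```

Because the DaemonSet pod is pinned to each existing node, it stays Pending on nodes that fail this check instead of triggering a new, larger node the way a regular unschedulable pod would.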
My expectation was that Karpenter would automatically terminate lighter nodes and provision new ones to accommodate the DaemonSets.
Is my understanding of Karpenter incorrect, or is there a potential bug with how the DaemonSets are handled?
To work around this, I ran the Helm deployment and, while it was waiting, manually deleted the NodeClaims so that Karpenter would provision new nodes. This feels like a hack rather than a proper solution.
I would appreciate any insights or best practices for handling this scenario more cleanly.
smartaquarius10 changed the title from "Karpenter is not auto creating better machines with daemon sets" to "Karpenter is not auto creating better machines for daemon sets" on Feb 6, 2025.