# fix: add generated target for all node IPs (#1119)
## Description

Adds a new generator / target called `KubeNodes` that contains the internal IP addresses of the nodes in the cluster.

**NOTE:** ~~I have no idea (yet) where the `docs/reference/` file changes came from.~~ They appear to be missing on `main`.

## Related Issue

Relates to #970. The `Steps to Validate` include steps to verify #970 gets fixed.

## Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Other (security config, docs update, etc)

## Steps to Validate

<details>

### Set up and verify behavior of the target

Create a k3d cluster named `uds` (we use names later when adding nodes):

```bash
k3d cluster create uds
```

Deploy slim-dev:

```bash
uds run slim-dev
```

Create and deploy the monitoring layer:

```bash
uds run -f ./tasks/create.yaml single-layer-callable --set LAYER=monitoring
uds run -f ./tasks/deploy.yaml single-layer-callable --set LAYER=monitoring
```

Create and deploy the metrics-server layer:

```bash
uds run -f ./tasks/create.yaml single-layer-callable --set LAYER=metrics-server
uds run -f ./tasks/deploy.yaml single-layer-callable --set LAYER=metrics-server
```

Inspect the network policy for scraping of kube nodes:

```bash
kubectl describe networkpolicy allow-prometheus-stack-egress-metrics-scraping-of-kube-nodes -n monitoring
```

The `spec:` section is the relevant part, and should contain the IPs of the nodes:

```
Spec:
  PodSelector:     app.kubernetes.io/name=prometheus
  Not affecting ingress traffic
  Allowing egress traffic:
    To Port: <any> (traffic allowed to all ports)
    To:
      IPBlock:
        CIDR: 172.28.0.2/32
        Except:
  Policy Types: Egress
```

Add a node:

```bash
k3d node create extra1 --cluster uds --wait --memory 500M
```

Verify the internal IP of the new node:

```bash
kubectl get nodes -o custom-columns="NAME:.metadata.name,INTERNAL-IP:.status.addresses[?(@.type=='InternalIP')].address"
```

Fetch the netpol again to verify the new IP is in the `spec:` block:

```bash
kubectl describe networkpolicy allow-prometheus-stack-egress-metrics-scraping-of-kube-nodes -n monitoring
```

It should now look something like this:

```
Spec:
  PodSelector:     app.kubernetes.io/name=prometheus
  Not affecting ingress traffic
  Allowing egress traffic:
    To Port: <any> (traffic allowed to all ports)
    To:
      IPBlock:
        CIDR: 172.28.0.2/32
        Except:
    To:
      IPBlock:
        CIDR: 172.28.0.4/32
        Except:
  Policy Types: Egress
```

### Verify Prometheus can read things

Connect directly to Prometheus:

```bash
kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090
```

Visit http://localhost:9090/ and execute this expression to see all node/CPU data:

```
node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
```

To see just the info from the `extra1` node:

```
node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{node=~"^k3d-extra.*"}
```

Add a new node:

```bash
k3d node create extra2 --cluster uds --wait --memory 500M
```

Verify the netpol updates:

```bash
kubectl describe networkpolicy allow-prometheus-stack-egress-metrics-scraping-of-kube-nodes -n monitoring
```

Re-execute the Prometheus query from above. It may take a few minutes for `extra2` to show up, though; I'm not sure why.

Delete a node and verify the spec updates again:

```bash
kubectl delete node k3d-extra1-0 && k3d node delete k3d-extra1-0
```

Re-reading the netpol should show the removal of that IP.

</details>

## Checklist before merging

- [x] Test, docs, adr added or updated as needed
- [x] [Contributor Guide](https://github.com/defenseunicorns/uds-template-capability/blob/main/CONTRIBUTING.md) followed

---------

Signed-off-by: catsby <clint@defenseunicorns.com>
Co-authored-by: Micah Nagel <micah.nagel@defenseunicorns.com>
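The `spec:` output in the validation steps can be read as one `/32` `ipBlock` egress peer per node InternalIP. A minimal sketch of that mapping (`peersForNodeIps` is a hypothetical helper name for illustration, not the operator's actual code):

```typescript
// Each node InternalIP becomes a /32 ipBlock egress peer, matching the CIDR
// entries shown in the `kubectl describe networkpolicy` output above.
// `peersForNodeIps` is illustrative only; the real generator lives in kubeNodes.ts.
type Peer = { ipBlock: { cidr: string } };

function peersForNodeIps(ips: string[]): Peer[] {
  return ips.map(ip => ({ ipBlock: { cidr: `${ip}/32` } }));
}

// The two-node example from the steps above:
console.log(peersForNodeIps(["172.28.0.2", "172.28.0.4"]));
```

This is why the netpol grows one `To: IPBlock:` entry per node as nodes are added, and shrinks when they are deleted.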
Showing 12 changed files with 465 additions and 3 deletions.
### `src/pepr/operator/controllers/network/generators/kubeNodes.spec.ts` (218 additions, 0 deletions)
```typescript
/**
 * Copyright 2024 Defense Unicorns
 * SPDX-License-Identifier: AGPL-3.0-or-later OR LicenseRef-Defense-Unicorns-Commercial
 */

import { beforeEach, beforeAll, describe, expect, it, jest } from "@jest/globals";

import {
  initAllNodesTarget,
  kubeNodes,
  updateKubeNodesFromCreateUpdate,
  updateKubeNodesFromDelete,
} from "./kubeNodes";
import { K8s, kind } from "pepr";
import { V1NetworkPolicyList } from "@kubernetes/client-node";
import { anywhere } from "./anywhere";

type KubernetesList<T> = {
  items: T[];
};

jest.mock("pepr", () => {
  const originalModule = jest.requireActual("pepr") as object;
  return {
    ...originalModule,
    K8s: jest.fn(),
    kind: {
      Node: "Node",
      NetworkPolicy: "NetworkPolicy",
    },
  };
});

describe("kubeNodes module", () => {
  const mockNodeList = {
    items: [
      {
        metadata: { name: "node1" },
        status: {
          addresses: [{ type: "InternalIP", address: "10.0.0.1" }],
          conditions: [{ type: "Ready", status: "True" }],
        },
      },
      {
        metadata: { name: "node2" },
        status: {
          addresses: [{ type: "InternalIP", address: "10.0.0.2" }],
          conditions: [{ type: "Ready", status: "True" }],
        },
      },
    ],
  };

  const mockNetworkPolicyList: V1NetworkPolicyList = {
    apiVersion: "networking.k8s.io/v1",
    kind: "NetworkPolicyList",
    items: [
      {
        apiVersion: "networking.k8s.io/v1",
        kind: "NetworkPolicy",
        metadata: {
          name: "example-policy",
          namespace: "default",
        },
        spec: {
          podSelector: {}, // required field
          policyTypes: ["Egress"], // or ["Ingress"], or both
          egress: [
            {
              to: [{ ipBlock: { cidr: "0.0.0.0/0" } }], // an IP we don't want
            },
          ],
        },
      },
    ],
  };

  const mockK8sGetNodes = jest.fn<() => Promise<KubernetesList<kind.Node>>>();
  const mockGetNetworkPolicies = jest.fn<() => Promise<KubernetesList<kind.NetworkPolicy>>>();
  const mockApply = jest.fn();

  beforeAll(() => {
    (K8s as jest.Mock).mockImplementation(() => ({
      Get: mockK8sGetNodes,
      WithLabel: jest.fn(() => ({
        Get: mockGetNetworkPolicies,
      })),
      Apply: mockApply,
    }));
  });

  beforeEach(() => {
    jest.clearAllMocks();
  });

  describe("initAllNodesTarget", () => {
    it("should initialize nodeSet with internal IPs from nodes", async () => {
      mockK8sGetNodes.mockResolvedValue(mockNodeList);
      await initAllNodesTarget();
      const cidrs = kubeNodes();
      // Should have two IPs from mockNodeList
      expect(cidrs).toHaveLength(2);
      expect(cidrs).toEqual(
        expect.arrayContaining([
          { ipBlock: { cidr: "10.0.0.1/32" } },
          { ipBlock: { cidr: "10.0.0.2/32" } },
        ]),
      );
    });
  });

  describe("nodeCIDRs", () => {
    it("should return anywhere if no nodes known", async () => {
      mockK8sGetNodes.mockResolvedValue({ items: [] });
      await initAllNodesTarget();
      const cidrs = kubeNodes();
      // expect it to match "anywhere"
      expect(cidrs).toEqual([anywhere]);
    });
  });

  describe("updateKubeNodesFromCreateUpdate", () => {
    it("should add a node IP if node is ready", async () => {
      mockK8sGetNodes.mockResolvedValueOnce({ items: [] });
      mockGetNetworkPolicies.mockResolvedValue(mockNetworkPolicyList);
      await initAllNodesTarget(); // start empty
      await updateKubeNodesFromCreateUpdate(mockNodeList.items[0]);
      let cidrs = kubeNodes();
      expect(cidrs).toHaveLength(1);
      expect(cidrs[0].ipBlock?.cidr).toBe("10.0.0.1/32");
      expect(mockApply).toHaveBeenCalled();

      await updateKubeNodesFromCreateUpdate(mockNodeList.items[1]);
      cidrs = kubeNodes();
      expect(cidrs).toHaveLength(2);
      expect(cidrs[1].ipBlock?.cidr).toBe("10.0.0.2/32");
      expect(mockApply).toHaveBeenCalled();
    });

    it("should not remove a node that's no longer ready", async () => {
      mockK8sGetNodes.mockResolvedValue(mockNodeList);
      await initAllNodesTarget();
      let cidrs = kubeNodes();
      // Should have two IPs from mockNodeList
      expect(cidrs).toHaveLength(2);
      expect(cidrs).toEqual(
        expect.arrayContaining([
          { ipBlock: { cidr: "10.0.0.1/32" } },
          { ipBlock: { cidr: "10.0.0.2/32" } },
        ]),
      );

      const notReadyNode = {
        metadata: { name: "node2" },
        status: {
          addresses: [{ type: "InternalIP", address: "10.0.0.1" }],
          conditions: [{ type: "Ready", status: "False" }],
        },
      };
      await updateKubeNodesFromCreateUpdate(notReadyNode);
      cidrs = kubeNodes();
      expect(cidrs).toHaveLength(2);
      expect(cidrs).toEqual(
        expect.arrayContaining([
          { ipBlock: { cidr: "10.0.0.1/32" } },
          { ipBlock: { cidr: "10.0.0.2/32" } },
        ]),
      );
    });

    it("should not apply netpol policy changes if a node is already included", async () => {
      // setup 1 node in the set and expect 1 application to a policy
      mockK8sGetNodes.mockResolvedValueOnce({ items: [] });
      mockGetNetworkPolicies.mockResolvedValue(mockNetworkPolicyList);
      await initAllNodesTarget(); // start empty
      // add a node even if it's not ready
      const initialNode = {
        metadata: { name: "node1" },
        status: {
          addresses: [{ type: "InternalIP", address: "10.0.0.9" }],
          conditions: [{ type: "Ready", status: "False" }],
        },
      };
      await updateKubeNodesFromCreateUpdate(initialNode);
      let cidrs = kubeNodes();
      expect(cidrs).toHaveLength(1);
      expect(cidrs[0].ipBlock?.cidr).toBe("10.0.0.9/32");
      expect(mockApply).toHaveBeenCalled();

      // clear out the apply from the setup
      mockApply.mockClear();
      // change initialNode to set the status to ready
      initialNode.status.conditions[0].status = "True";
      await updateKubeNodesFromCreateUpdate(initialNode);
      cidrs = kubeNodes();
      expect(cidrs).toHaveLength(1);
      expect(cidrs[0].ipBlock?.cidr).toBe("10.0.0.9/32");

      // the apply should not have been called
      expect(mockApply).not.toHaveBeenCalled();
    });
  });

  describe("updateKubeNodesFromDelete", () => {
    it("should remove the node IP from nodeSet", async () => {
      mockK8sGetNodes.mockResolvedValueOnce(mockNodeList);
      await initAllNodesTarget();
      const cidrsBeforeDelete = kubeNodes();
      expect(cidrsBeforeDelete).toHaveLength(2);

      await updateKubeNodesFromDelete(mockNodeList.items[0]);
      const cidrsAfterDelete = kubeNodes();
      expect(cidrsAfterDelete).toHaveLength(1);
      expect(cidrsAfterDelete[0].ipBlock?.cidr).toBe("10.0.0.2/32");
      expect(mockApply).toHaveBeenCalled();
    });
  });
});
```
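The assertions above pin down the module's observable behavior: a set of node InternalIPs that renders as `/32` `ipBlock` peers and falls back to `anywhere` when empty, with netpols re-applied only when the set actually changes. A self-contained sketch of that state, assuming a module-level set; this is a reconstruction for readability, not the actual `kubeNodes.ts`:

```typescript
// Reconstruction of the behavior the spec exercises; not the real kubeNodes.ts.
type Peer = { ipBlock?: { cidr: string } };

// Fallback peer when no node IPs are known (mirrors the `anywhere` target).
const anywhere: Peer = { ipBlock: { cidr: "0.0.0.0/0" } };

interface NodeLike {
  status?: { addresses?: { type: string; address: string }[] };
}

const nodeSet = new Set<string>();

function internalIp(node: NodeLike): string | undefined {
  return node.status?.addresses?.find(a => a.type === "InternalIP")?.address;
}

// Render the set as /32 ipBlock peers, or `anywhere` if empty.
function kubeNodes(): Peer[] {
  if (nodeSet.size === 0) return [anywhere];
  return Array.from(nodeSet, ip => ({ ipBlock: { cidr: `${ip}/32` } }));
}

// Returns true when the set changed, i.e. when netpols need re-applying.
// An already-known IP is a no-op, which is why the spec expects no Apply call
// when a node whose IP is already tracked is updated.
function updateKubeNodesFromCreateUpdate(node: NodeLike): boolean {
  const ip = internalIp(node);
  if (!ip || nodeSet.has(ip)) return false;
  nodeSet.add(ip);
  return true;
}

function updateKubeNodesFromDelete(node: NodeLike): boolean {
  const ip = internalIp(node);
  if (!ip || !nodeSet.has(ip)) return false;
  nodeSet.delete(ip);
  return true;
}
```

Note that the "not removed when no longer ready" cases fall out naturally here because updates never consult readiness; the real module may weigh readiness differently when deciding whether to add a node.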