
Performance Testing Guide for Kubearmor

Prashant Mishra edited this page Nov 6, 2023 · 6 revisions

Setting up the environment

  1. A Kubernetes cluster with the sock-shop demo deployed. (Note: we use a customized sock-shop deployment that runs two replicas of the front-end pod.)

  2. Apache Bench (`ab`) from the httpd Docker image, deployed to the cluster.

    We generally use an AKS cluster with two DS2_v2 nodes (7 GB RAM and 2 vCPUs each).

  3. Set `replicas: 2` for the front-end deployment in the sock-shop demo, as shown:

*(screenshot: front-end deployment manifest with `replicas: 2`)*

  4. Apply labels to both nodes, e.g. `kubectl label nodes <your-node-name> nodetype=node1`

  5. Use this YAML file to deploy httpd to the cluster:

NOTE: Make sure the httpd pod is scheduled on a node where the front-end pods are NOT running; the load generator must sit on a different node from the front-end service.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    env: prod
spec:
  containers:
  - name: httpd
    image: httpd
    imagePullPolicy: IfNotPresent
  nodeSelector:
    nodetype: node1
```

Running the benchmarks

This is the table we need:

| Scenario | Requests | Concurrent Requests | KubeArmor CPU (m) | KubeArmor Memory (Mi) | Throughput (req/s) | Average time per req. (ms) | # Failed requests | Micro-service CPU (m) | Micro-service Memory (Mi) |
|---|---|---|---|---|---|---|---|---|---|
| no kubearmor | 50000 | 5000 | - | - | 2205.502 | 0.4534 | 0 | 401.1 | 287.3333333 |

We need at least 10 runs of data for each scenario, plus the average across all runs.
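Once the per-run rows are collected, the average row can be computed with a short awk sketch. This is a minimal example; `results.csv` and its two columns (throughput, average time per request) are hypothetical stand-ins for whichever columns you record:

```shell
#!/bin/bash
# Hypothetical per-run results: throughput (req/s), avg time per req (ms)
cat > results.csv <<'EOF'
2205.502,0.4534
2190.100,0.4601
2220.300,0.4480
EOF

# Sum each column, divide by the row count to get the average row
awk -F, '{t+=$1; a+=$2; n++} END{printf "avg throughput=%.3f avg time=%.4f over %d runs\n", t/n, a/n, n}' results.csv
```

The same pattern extends to the remaining numeric columns of the table.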

First, get the ClusterIP of the front-end service using `kubectl get svc -n sock-shop`.

I have made two scripts that semi-automate the process.

  • The ApacheBench.sh script starts the benchmark and prints only the important figures (it does not save to a CSV file yet):
```bash
#!/bin/bash

apache() {
  # The Kubernetes pod running Apache Bench
  pod_name="httpd"

  # -r: don't abort on socket errors; -c: concurrency; -n: total requests
  # Replace {K8s Service IP} with the front-end service IP from `kubectl get svc`
  kubectl exec -it "$pod_name" -- bash -c "ab -r -c 5000 -n 50000 {K8s Service IP}" | tee ab_output.txt

  failed_requests=$(grep "Failed requests" ab_output.txt | awk '{print $3}')
  requests_per_second=$(grep "Requests per second" ab_output.txt | awk '{print $4}')
  time_per_request=$(grep "Time per request" ab_output.txt | awk 'NR==2{print $4}')

  echo "Requests per second: $requests_per_second"
  echo "Time per request: $time_per_request"
  echo "Failed requests: $failed_requests"
}

apache
```

(Note: `ab`'s output must reach `tee` via the pipe; redirecting stdout to a file before the pipe would leave `ab_output.txt` empty.)
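The grep/awk extraction above can be sanity-checked offline against a canned sample of `ab` output. The numbers below are made up, but the field positions match real ab output (note that ab prints "Time per request" twice and we want the second, per-concurrent-request line):

```shell
#!/bin/bash
# Hypothetical sample of ab output lines (values invented)
cat > ab_output.txt <<'EOF'
Requests per second:    2205.50 [#/sec] (mean)
Time per request:       2267.06 [ms] (mean)
Time per request:       0.453 [ms] (mean, across all concurrent requests)
Failed requests:        12
EOF

failed_requests=$(grep "Failed requests" ab_output.txt | awk '{print $3}')
requests_per_second=$(grep "Requests per second" ab_output.txt | awk '{print $4}')
time_per_request=$(grep "Time per request" ab_output.txt | awk 'NR==2{print $4}')

echo "Requests per second: $requests_per_second"
echo "Time per request: $time_per_request"
echo "Failed requests: $failed_requests"
```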
  • While the benchmark is running, concurrently run this script to record the average resource usage of both front-end pods on the node:
```bash
#!/bin/bash

output_file="mic.csv"

# Print "<pod>,<cpu>,<memory>" from `kubectl top pod`
get_pod_stats() {
  pod_name="$1"
  data=$(kubectl top pod -n sock-shop "$pod_name" | tail -n 1 | tr -s " " | cut -d " " --output-delimiter "," -f2,3)
  echo "$pod_name,$data"
}

# Unused for now
get_highest_cpu_row() {
  sort -t, -k1 -n -r "$output_file" | head -n 1
}

# Continuously append the averaged live usage of both pods to the CSV file
microservices_metrics() {
  while true; do
    data1=$(get_pod_stats "front-end-pod-1")
    data2=$(get_pod_stats "front-end-pod-2")

    # Strip the "m" (millicores) and "Mi" units so we can do arithmetic
    cpu1=$(echo "$data1" | cut -d ',' -f2 | sed 's/m//')
    memory1=$(echo "$data1" | cut -d ',' -f3 | sed 's/Mi//')
    cpu2=$(echo "$data2" | cut -d ',' -f2 | sed 's/m//')
    memory2=$(echo "$data2" | cut -d ',' -f3 | sed 's/Mi//')

    # Calculate the average CPU and memory usage
    average_cpu=$(( (cpu1 + cpu2) / 2 ))
    average_memory=$(( (memory1 + memory2) / 2 ))

    echo "$average_cpu,$average_memory" >> "$output_file"

    sleep 1
  done
}

microservices_metrics
```

Keep an eye on this output to catch the peak usage until the benchmark completes. Also, replace front-end-pod-1 and front-end-pod-2 with your actual pod names.
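Instead of eyeballing the live output for the spike, the peak sample can be pulled out of mic.csv afterwards with the same `sort` idea the script's unused helper hints at. A minimal sketch (the rows below are invented; CPU is column 1, memory column 2):

```shell
#!/bin/bash
# Hypothetical mic.csv contents: average_cpu,average_memory per second
cat > mic.csv <<'EOF'
120,250
401,287
350,280
EOF

# Numerically sort by the CPU column and keep the highest row
sort -t, -k1,1 -n mic.csv | tail -n 1
```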

These two scripts give you all the data for the table above EXCEPT the KubeArmor usage numbers. To capture those, run this bash command concurrently as well:

```bash
watch -n 1 -d 'echo "`date +%H:%M:%S`,`kubectl top pods -n kubearmor --sort-by=memory -l kubearmor-app=kubearmor | tail -n 1 | tr -s " " | cut -d " " --output-delimiter "," -f2,3`" | tee -a perf.csv'
```
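Afterwards, the peak KubeArmor memory sample can be extracted from perf.csv with awk. The rows below are invented, assuming the `time,cpu,memory` format written by the watch command above:

```shell
#!/bin/bash
# Hypothetical perf.csv contents: timestamp,cpu(m),memory(Mi)
cat > perf.csv <<'EOF'
12:00:01,85m,120Mi
12:00:02,140m,155Mi
12:00:03,110m,150Mi
EOF

# Keep the row whose memory column (stripped of the "Mi" unit) is largest
awk -F, '{v=$3; gsub(/Mi/,"",v); if (v+0 > max) {max=v+0; row=$0}} END{print row}' perf.csv
```

Swapping `$3` for `$2` (and `Mi` for `m`) gives the peak CPU row instead.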
