Kube Trace NFS observes NFS connections in a Kubernetes cluster by collecting telemetry from a node-level eBPF program built with the BCC toolkit. Inspired by the `nfsslower` tool and other BCC utilities, it focuses specifically on NFS operations such as reads, writes, opens, and getattrs.
Currently, the application collects node-level metrics, with pod-level metrics and the ranking of the most accessed files planned for upcoming versions. Collected data can be exported to monitoring tools like Prometheus and visualized on platforms such as Grafana. This comprehensive data provides valuable insights into how NFS traffic is distributed across the cluster.
Many cloud providers offer storage over the NFS protocol, which can be attached to Kubernetes clusters via the Container Storage Interface (CSI). However, the monitoring provided by storage vendors usually aggregates data across all NFS client connections, making it difficult to isolate specific connections and their operations (reads, writes, getattrs) against the NFS server. This project addresses that gap by providing detailed telemetry on NFS requests from clients to the server, supporting both node-level and pod-level analysis. Combined with Prometheus and Grafana, this data enables comprehensive analysis of NFS traffic, giving users insight into their cluster's NFS interactions.
- eBPF-based, efficient, low-overhead monitoring
- Byte throughput metrics for read/write operations
- Latency and occurrence rate of read, write, open, and getattr operations
- Planned metrics for IOPS and file-level access
- K (Kernel): kprobe, eBPF program
- U (User space): kube-trace-nfs, nfs-client, other pods
The NFS client establishes a connection with the NFS server to provide storage for pods, a process routed through kernel functions. `kube-trace-nfs` attaches an eBPF program to NFS kprobes to capture metrics about events occurring within NFS clients. These metrics are stored in eBPF maps and processed for event analysis. Events involving read, write, open, and getattr operations are forwarded to the user-space component of `kube-trace-nfs`. These values are then exported to Prometheus, from where the data can be consumed by visualization tools such as Grafana.
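The user-space processing step can be sketched as follows. This is an illustrative, stdlib-only Python sketch, not the project's actual code: the event shape and field names are assumptions about what the kernel-side program might emit for each traced NFS operation.

```python
from collections import defaultdict

# Hypothetical event shape: operation type, byte count, and latency,
# as the kernel-side eBPF program might report per traced NFS call.
events = [
    {"op": "read",    "bytes": 4096, "latency_us": 120},
    {"op": "write",   "bytes": 8192, "latency_us": 340},
    {"op": "getattr", "bytes": 0,    "latency_us": 15},
    {"op": "read",    "bytes": 4096, "latency_us": 95},
]

def aggregate(events):
    """Fold raw events into per-operation totals, as an exporter
    would before exposing them as Prometheus counters."""
    totals = defaultdict(lambda: {"count": 0, "bytes": 0, "latency_us": 0})
    for ev in events:
        agg = totals[ev["op"]]
        agg["count"] += 1
        agg["bytes"] += ev["bytes"]
        agg["latency_us"] += ev["latency_us"]
    return dict(totals)

metrics = aggregate(events)
print(metrics["read"])  # {'count': 2, 'bytes': 8192, 'latency_us': 215}
```

In the real tool these totals come from eBPF maps rather than a Python list, but the aggregation shape (per-operation counters for throughput and latency) is the same idea.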
Kube Trace NFS can be installed from GHCR using Helm:

```shell
VERSION=$(curl -sL https://api.github.com/repos/4rivappa/kube-trace-nfs/releases/latest | jq -r .name)
helm install kube-trace-nfs oci://ghcr.io/4rivappa/kube-trace-nfs --version "${VERSION#v}"
```
After the Helm install, the `nfs_read_bytes` and `nfs_write_bytes` metrics are available in Prometheus.
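Assuming these are exposed as counters, their throughput can be graphed with standard PromQL `rate()` queries, for example (illustrative; adjust the range window to taste):

```promql
rate(nfs_read_bytes[5m])
rate(nfs_write_bytes[5m])
```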