K8s-coredump-detector is an open source tool for managing the core dump feature in Kubernetes. It enables a stable, controllable core dump workflow when jobs inside K8s containers crash: it generates, stores, and distributes the core files produced by apps inside Pods, which eases investigation of crashed applications in a multi-tenancy K8s environment. Tenants can download these core files just as they would use any other native K8s function.
As we know, bugs are inevitable in software development. Some bugs can be solved by investigating app logs, but serious bugs such as a deep null pointer exception are very hard to debug without core files. However, Kubernetes has no mechanism to manage core files when a job inside a Pod crashes.
This feature mainly collects, stores, and distributes the core files generated by apps inside Pods. It supports any filesystem storage as the backend storage, and it embeds K8s' own authorization mechanisms, such as RBAC, to control tenant authorities. Tenants can download the core files they want just as they would use any other native K8s resource.
The backend storage is an independent filesystem storage that holds the core files. It can be a Ceph filesystem, an NFS share, or any other filesystem storage. For test purposes, you can also use a hostPath volume (https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) to store core files.
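For a quick local test, the backend storage can be a plain directory on a node exposed through a hostPath PersistentVolume. The sketch below is only an illustration of that idea; the directory path, PV name, and capacity are assumptions, not values defined by this project.

```shell
# Hypothetical hostPath setup for testing only -- the path and PV name
# below are illustrative, not part of this project's deployment files.
mkdir -p /var/coredump

kubectl create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: coredump-pv           # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/coredump       # node-local directory that will hold core files
EOF
```

hostPath ties the data to a single node, so it is only suitable for single-node test clusters, not production.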
To control core file generation behavior, a DaemonSet is necessary; it launches an Admin-Pod on each node. Those Pods make sure everything goes as expected when a job crashes in other work Pods. Each Admin-Pod executes the following steps:
a. Copy an executable file called "kcdt" to the host. It is a core dump handler and will be invoked whenever a process crashes on that host. It determines whether the crashed job came from a Pod; if so, it collects the related information and stores the core files in the backend storage.
b. Modify and maintain the core_pattern setting on each node, which exists as the file /proc/sys/kernel/core_pattern. This setting controls the kernel's behavior when a process crashes; in our case, the Admin-Pod modifies it so that the handler is invoked.
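As a concrete illustration, a core_pattern that begins with `|` tells the kernel to pipe the core image to the named program's stdin. The handler path and format specifiers below are assumptions sketching what the Admin-Pod might install, not the exact pattern this project writes.

```shell
# Inspect the current pattern (read-only; works on any Linux node).
cat /proc/sys/kernel/core_pattern

# A pipe-style pattern the Admin-Pod might install (illustrative only):
#   %p = PID of the crashed process, %e = executable name, %t = dump time.
pattern='|/kcdt %p %e %t'
echo "$pattern"
```

Writing such a pattern requires root on the host, which is one reason the Admin-Pods need privileged access.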
For detailed information, please see https://github.com/fenggw-fnst/coredump-node-detector
This component is the bridge between the backend storage and users. In essence it is an aggregation API layer, which is the official mechanism for implementing your own business logic. It consists of a Service, a self-defined API server running in a Pod, and etcd storage. It registers APIs and objects with the K8s cluster, and users download core files through those APIs and objects.
Group | Version | Kind | Subresource |
---|---|---|---|
coredump.fujitsu.com | v1alpha1 | CoredumpEndpoint | dump |
Before downloading core files, there must be a coredumpendpoint associated with the Pod where the core dump occurred. The coredumpendpoint can be created automatically by setting the label coredumpendpoint: auto on the Pod, or it can be created manually via a yaml file (see coredumpendpoint_template.yaml).
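For example, the auto-creation label can be attached to an existing Pod with kubectl (the Pod and namespace names here are illustrative, and this assumes the controller also reacts to labels added after Pod creation):

```shell
# Label the Pod so a coredumpendpoint is created for it automatically.
kubectl label pod test-pod coredumpendpoint=auto -n test-namespace
```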
For a typical case where a user wants to download all core files generated by the container test-container of a Pod, through the associated coredumpendpoint test-coredumpendpoint in namespace test-namespace, the user should do the following:
kubectl get --raw="/apis/coredump.fujitsu.com/v1alpha1/namespaces/test-namespace/coredumpendpoints/test-coredumpendpoint/dump?container=test-container" > coredump.tar.gz
mkdir -p coredump-files && tar -zxvf coredump.tar.gz -C ./coredump-files
If everything works as expected, the user can observe all core files in coredump-files.
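The extraction step can be rehearsed locally without a cluster. The sketch below fabricates a stand-in archive purely to show the tar flags and the resulting layout; the file name core.1234 is made up, and real archives come from the dump subresource.

```shell
# Build a stand-in archive (a real one would come from the dump subresource).
mkdir -p work/in && echo dummy > work/in/core.1234
tar -zcf coredump.tar.gz -C work/in core.1234

# Extract as in the step above; note -C requires the directory to exist.
mkdir -p coredump-files
tar -zxvf coredump.tar.gz -C ./coredump-files
ls coredump-files
```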
The core file generation part produces core files when a job inside a container crashes. On each node that supports the core dump feature, an Admin-Pod is deployed by a DaemonSet to handle this job. Each Admin-Pod injects an executable file, called the handler, into the node; it also modifies core_pattern so that the kernel calls the handler, which stores the core files in the backend storage.
An aggregation API layer registers a self-defined API called coredump.fujitsu.com. This API is the bridge between the backend storage and users: users can download core files through it, and admins can control users' access to core files in the native way, e.g. with RBAC or ABAC.
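For instance, access to the dump subresource can be granted with a standard Role and RoleBinding. The role, binding, and user names below are illustrative; `coredumpendpoints/dump` follows K8s' usual resource/subresource syntax for RBAC rules.

```shell
# Grant read access to coredumpendpoints and their dump subresource
# (names here are illustrative, not defined by this project).
kubectl create -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: coredump-reader
  namespace: test-namespace
rules:
- apiGroups: ["coredump.fujitsu.com"]
  resources: ["coredumpendpoints", "coredumpendpoints/dump"]
  verbs: ["get", "list"]
EOF

# Bind the role to a tenant user (user name is illustrative).
kubectl create rolebinding coredump-reader-binding \
  --role=coredump-reader --user=tenant-a -n test-namespace
```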
The core_pattern setting would be modified to let our components handle core dump events.
The K8s cluster must boot with the allow-privileged option enabled.
Please see deploy.md
TBD
After deploying all the components successfully, you can verify that the feature works by running the test script.
This section gives examples of how to download core dump files. The coredumpendpoint_template.yaml file under the test folder will be used.
Suppose we want to download the core files dumped by a container called test-container in Pod default/test-pod, whose Pod uid is 1234-5678. We need two steps:
Firstly, make sure there is a coredumpendpoint associated with the Pod; suppose it is called test-coredumpendpoint.
If it has already been created automatically, skip to step two; otherwise we need to create it manually:
cat test/coredumpendpoint_template.yaml | sed "s/__NAMESPACE__/default/g" | sed "s/__CNAME__/test-coredumpendpoint/g" | sed "s/__PNAME__/test-pod/g" | sed "s/__UID__/1234-5678/g" | kubectl create -f -
Note: if the Pod still exists, it is not necessary to provide poduid; it can be retrieved via podname at creation time.
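When the Pod does still exist, its uid can be read directly with a jsonpath query, for example:

```shell
# Retrieve the Pod uid (the Pod must still exist in the cluster).
kubectl get pod test-pod -n default -o jsonpath='{.metadata.uid}'
```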
Secondly, download the core files:
kubectl get --raw="/apis/coredump.fujitsu.com/v1alpha1/namespaces/default/coredumpendpoints/test-coredumpendpoint/dump?container=test-container" > coredump.tar.gz