- Configure `isulad`

  Configure the `pod-sandbox-image` in `/etc/isulad/daemon.json`:

  ```json
  "pod-sandbox-image": "my-pause:1.0.0"
  ```

  Configure the endpoint of `isulad`:

  ```json
  "hosts": ["unix:///var/run/isulad.sock"]
  ```

  If `hosts` is not configured, the default endpoint is `unix:///var/run/isulad.sock`.
  iSulad supports both CRI V1alpha2 and CRI V1, and uses CRI V1alpha2 by default. If CRI V1 is required, enable it in `/etc/isulad/daemon.json`:

  ```json
  "enable-cri-v1": true,
  ```
  If iSulad is compiled from source code, the `-D ENABLE_CRI_API_V1=ON` option is required for cmake.
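  Taken together, the fragments above land in `/etc/isulad/daemon.json` roughly as in the minimal sketch below; it only shows the keys discussed here, so keep the other fields of your existing configuration unchanged.

  ```json
  {
      "pod-sandbox-image": "my-pause:1.0.0",
      "hosts": ["unix:///var/run/isulad.sock"],
      "enable-cri-v1": true
  }
  ```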
- Restart `isulad`:

  ```sh
  $ sudo systemctl restart isulad
  ```
- Start kubelet based on the configuration above (or the default values):

  ```sh
  $ /usr/bin/kubelet --container-runtime-endpoint=unix:///var/run/isulad.sock --image-service-endpoint=unix:///var/run/isulad.sock --pod-infra-container-image=my-pause:1.0.0 --container-runtime=remote ...
  ```
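  To confirm that the `isulad` CRI endpoint is reachable, you can query it directly with `crictl`, which is also used later in this guide; a quick sketch:

  ```sh
  # Ask the runtime for its name and CRI version over the isulad socket;
  # a response here means the CRI endpoint is up and serving requests.
  $ sudo crictl -r unix:///var/run/isulad.sock version
  ```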
RuntimeClass is a feature for selecting the container runtime configuration used to run a Pod's containers. For more information, please refer to runtime-class. Currently, `isulad` only supports `kata-containers` and `runc`.
- Configure `isulad` in `/etc/isulad/daemon.json`:

  ```json
  "runtimes": {
      "kata-runtime": {
          "path": "/usr/bin/kata-runtime",
          "runtime-args": [
              "--kata-config",
              "/usr/share/defaults/kata-containers/configuration.toml"
          ]
      }
  }
  ```
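  Since `isulad` also supports `runc`, a second handler could be declared in the same `runtimes` map following the structure shown above. The sketch below is only an illustration: the handler name `runc-runtime` and the binary path `/usr/bin/runc` are assumptions about your installation.

  ```json
  "runtimes": {
      "kata-runtime": {
          "path": "/usr/bin/kata-runtime",
          "runtime-args": [
              "--kata-config",
              "/usr/share/defaults/kata-containers/configuration.toml"
          ]
      },
      "runc-runtime": {
          "path": "/usr/bin/runc"
      }
  }
  ```

  A RuntimeClass object would then reference that handler name in its `handler` field, just as `kata-runtime` is referenced below.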
- Extra configuration

  iSulad supports `overlay2` and `devicemapper` as storage drivers; the default is `overlay2`.

  In some scenarios, a storage driver backed by a block device is the better choice, for example when running `kata-containers`. The procedure for configuring `devicemapper` is as follows.

  First, create the ThinPool:

  ```sh
  $ sudo pvcreate /dev/sdb1   # /dev/sdb1 is used as an example
  $ sudo vgcreate isulad /dev/sdb1
  $ echo y | sudo lvcreate --wipesignatures y -n thinpool isulad -L 200G
  $ echo y | sudo lvcreate --wipesignatures y -n thinpoolmeta isulad -L 20G
  $ sudo lvconvert -y --zero n -c 512K --thinpool isulad/thinpool --poolmetadata isulad/thinpoolmeta
  $ sudo lvchange --metadataprofile isulad-thinpool isulad/thinpool
  ```
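  The last `lvchange` command references an LVM metadata profile named `isulad-thinpool`. If your system does not already have one, a minimal sketch of such a profile could be placed in `/etc/lvm/profile/isulad-thinpool.profile`; the auto-extend values below are assumptions and should be tuned for your environment.

  ```
  # /etc/lvm/profile/isulad-thinpool.profile (sketch; values are assumptions)
  # Auto-extend the thin pool when it reaches 80% usage, growing it by 20%.
  activation {
      thin_pool_autoextend_threshold=80
      thin_pool_autoextend_percent=20
  }
  ```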
  Then, add the `devicemapper` configuration to `/etc/isulad/daemon.json`:

  ```json
  "storage-driver": "devicemapper",
  "storage-opts": [
      "dm.thinpooldev=/dev/mapper/isulad-thinpool",
      "dm.fs=ext4",
      "dm.min_free_space=10%"
  ]
  ```
- Restart `isulad`:

  ```sh
  $ sudo systemctl restart isulad
  ```
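  Optionally, confirm that the thin pool created earlier is present and active; a quick check with standard LVM tooling:

  ```sh
  # List the logical volumes in the isulad volume group; the thinpool LV
  # should appear with the thin-pool ("t") attribute in the Attr column.
  $ sudo lvs isulad
  ```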
- Create `kata-runtime.yaml`. For example:

  ```yaml
  apiVersion: node.k8s.io/v1beta1
  kind: RuntimeClass
  metadata:
    name: kata-runtime
  handler: kata-runtime
  ```

  Execute `kubectl apply -f kata-runtime.yaml`.
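  You can then check that the RuntimeClass has been registered in the cluster; a quick verification:

  ```sh
  # RuntimeClass is cluster-scoped; this should list the object and its handler.
  $ kubectl get runtimeclass kata-runtime
  ```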
- Create the pod spec `kata-pod.yaml`. For example:

  ```yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: kata-pod-example
  spec:
    runtimeClassName: kata-runtime
    containers:
    - name: kata-pod
      image: busybox:latest
      command: ["/bin/sh"]
      args: ["-c", "sleep 1000"]
  ```
- Run the pod:

  ```sh
  $ kubectl create -f kata-pod.yaml
  $ kubectl get pod
  NAME               READY   STATUS    RESTARTS   AGE
  kata-pod-example   1/1     Running   4          2s
  ```
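  To confirm that the pod really runs under `kata-containers` rather than `runc`, one common spot check is to compare the kernel seen inside the pod with the host kernel; a sketch:

  ```sh
  # With kata-containers, the pod runs inside a lightweight VM, so the kernel
  # version reported by the first command normally differs from the host's.
  $ kubectl exec kata-pod-example -- uname -r
  $ uname -r
  ```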
iSulad implements the CRI interface to connect to CNI networks: it parses the CNI network configuration files and joins or leaves CNI networks. For simplicity, in this section we call the CRI interface directly to start a pod and verify the CNI network configuration.
- Configure `isulad` in `/etc/isulad/daemon.json`:

  ```json
  "network-plugin": "cni",
  "cni-bin-dir": "/opt/cni/bin",
  "cni-conf-dir": "/etc/cni/net.d",
  ```
- Prepare the CNI network plugins:

  Compile and generate the CNI plugin binaries, and copy them to `/opt/cni/bin`:

  ```sh
  $ git clone https://github.com/containernetworking/plugins.git
  $ cd plugins && ./build_linux.sh
  $ cd ./bin && ls
  bandwidth bridge dhcp firewall flannel ...
  ```
- Prepare the CNI network configuration:

  The configuration file suffix can be `.conflist` or `.conf`; the difference is whether the file contains multiple plugins. For example, create a `10-mynet.conflist` file under `/etc/cni/net.d/` with the following content (a single-plugin sketch follows this block):

  ```json
  {
      "cniVersion": "0.3.1",
      "name": "default",
      "plugins": [
          {
              "name": "default",
              "type": "ptp",
              "ipMasq": true,
              "ipam": {
                  "type": "host-local",
                  "subnet": "10.1.0.0/16",
                  "routes": [
                      {
                          "dst": "0.0.0.0/0"
                      }
                  ]
              }
          },
          {
              "type": "portmap",
              "capabilities": {
                  "portMappings": true
              }
          }
      ]
  }
  ```
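  For comparison, a single-plugin configuration uses the `.conf` suffix and holds one plugin object at the top level instead of a `plugins` array. A minimal sketch, where the network name, bridge name, and subnet are arbitrary examples:

  ```json
  {
      "cniVersion": "0.3.1",
      "name": "mynet-single",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
          "type": "host-local",
          "subnet": "10.22.0.0/16",
          "routes": [
              { "dst": "0.0.0.0/0" }
          ]
      }
  }
  ```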
- Configure `sandbox-config.json` (in the CRI `Protocol` enum used by `protocol`, `0` is TCP and `1` is UDP):

  ```json
  {
      "port_mappings": [
          {
              "protocol": 1,
              "container_port": 80,
              "host_port": 8080
          }
      ],
      "metadata": {
          "name": "test",
          "namespace": "default",
          "attempt": 1,
          "uid": "hdishd83djaidwnduwk28bcsb"
      },
      "labels": {
          "filter_label_key": "filter_label_val"
      },
      "linux": {}
  }
  ```
- Restart `isulad` and start the pod:

  ```sh
  $ sudo systemctl restart isulad
  $ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock runp sandbox-config.json
  ```
- View the pod network information:

  ```sh
  $ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock inspectp <pod-id>
  ```
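  When you are done verifying, the test pod sandbox can be stopped and removed through the same crictl endpoints; a sketch:

  ```sh
  # Stop and then remove the pod sandbox created above.
  $ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock stopp <pod-id>
  $ sudo crictl -i unix:///var/run/isulad.sock -r unix:///var/run/isulad.sock rmp <pod-id>
  ```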