From 2fdedd819f87da515dfb6e7894c3f725681e781d Mon Sep 17 00:00:00 2001 From: DougTW Date: Wed, 3 Mar 2021 15:10:15 -0800 Subject: [PATCH] docs: a bunch of grammatical and stylistic fixes. Correct a number of grammatical and stylistic problems with the documentation, including - spelling mistakes - incorrect comma usage - incorrect heading case - mistakes in sentence syntax In addition to these, also - add missing TM & B, and - standardize line length to < 78 characters Signed-off-by: DougTW --- README.md | 3 +- docs/contributing.md | 11 ++- docs/installation.md | 35 +++---- docs/introduction.md | 6 +- docs/node-agent.md | 46 ++++----- docs/quick-start.md | 30 +++--- docs/security.md | 3 +- docs/setup.md | 223 +++++++++++++++++++++++-------------------- docs/webhook.md | 39 ++++---- 9 files changed, 214 insertions(+), 182 deletions(-) diff --git a/README.md b/README.md index 05f39434a..e3db31007 100644 --- a/README.md +++ b/README.md @@ -1,8 +1,7 @@ -# CRI Resource Manager for Kubernetes +# CRI Resource Manager for Kubernetes\* Welcome! ### See our [Documentation][documentation] site for detailed documentation. [documentation]: https://intel.github.io/cri-resource-manager - diff --git a/docs/contributing.md b/docs/contributing.md index dbc6e42ac..6fc0cfaf5 100644 --- a/docs/contributing.md +++ b/docs/contributing.md @@ -1,9 +1,10 @@ # Contributing -Please use the GitHub infrastructure for contributing to +Please use the GitHub\* infrastructure for contributing to CRI Resource Manager. Use [pull requests](https://github.com/intel/cri-resource-manager/pulls) -to contribute code, bug fixes, or if you want to discuss your ideas in terms of code. -Open [issues](https://github.com/intel/cri-resource-manager/issues) to report bugs, -request new features, or if you want to discuss any other topics related to CRI Resource -Manager or orchestration resource management in general. +to contribute code, bug fixes, or if you want to discuss your ideas in terms of +code.
Open [issues](https://github.com/intel/cri-resource-manager/issues) to +report bugs, request new features, or if you want to discuss any other topics +related to CRI Resource Manager or orchestration resource management in +general. diff --git a/docs/installation.md b/docs/installation.md index ec232a8dd..0d198cb1b 100644 --- a/docs/installation.md +++ b/docs/installation.md @@ -1,16 +1,17 @@ # Installation -## Installing From Packages +## Installing from packages You can install CRI Resource Manager from `deb` or `rpm` packages for supported distros. - - [download](https://github.com/intel/cri-resource-manager/releases/latest) packages + - [download](https://github.com/intel/cri-resource-manager/releases/latest) + packages - install them: - for rpm packages: `sudo rpm -Uvh ` - for deb packages: `sudo dpkg -i ` -## Installing From Sources +## Installing from sources Although not recommended, you can install CRI Resource Manager from sources: @@ -21,7 +22,7 @@ You will need at least `git`, `golang 1.14` or newer, `GNU make`, `bash`, `find`, `sed`, `head`, `date`, and `install` to be able to build and install from sources. -## Building Packages for the Distro of Your Host +## Building packages for the distro of your host You can build packages for the `$distro` of your host by executing the following command: @@ -33,7 +34,7 @@ make packages If the `$version` of your `$distro` is supported, this will leave the resulting packages in `packages/$distro-$version`. Building packages this way requires `docker`, but it does not require you to install -the full set of build dependecies of CRI Resource Manager to your host. +the full set of build dependencies of CRI Resource Manager to your host. If you want to build packages without docker, you can use either `make rpm` or `make deb`, depending on which supported distro you are @@ -49,34 +50,34 @@ ls dockerfiles/cross-build If you see a `Dockerfile.$distro-$version` matching your host then your distro is supported. 
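The package-type split that the installation docs above rely on (rpm packages for CentOS/Fedora/SUSE, deb packages for Ubuntu/Debian) can be sketched as a small helper. This is a hedged illustration only: the `pkg_type` function name and the exact `/etc/os-release` ID list are assumptions, not part of the project.

```shell
# Hedged sketch: map an /etc/os-release ID to the native package type the
# build would produce (rpm for CentOS/Fedora/SUSE, deb for Ubuntu/Debian).
# The function name and the exact ID list are illustrative assumptions.
pkg_type() {
    case "$1" in
        centos|fedora|suse|opensuse*) echo rpm ;;
        ubuntu|debian)                echo deb ;;
        *)                            echo unknown ;;
    esac
}

# Example usage on the build host, feeding `make rpm` or `make deb`:
# . /etc/os-release && make "$(pkg_type "$ID")"
```

The mapping mirrors the documented split between `make rpm` and `make deb` hosts; anything else falls through to `unknown` rather than guessing.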
-## Building Packages for Another Distro +## Building packages for another distro You can cross-build packages of the native `$type` for a particular -`$version` of a `$distro` by running the follwing command: +`$version` of a `$distro` by running the following command: ``` make cross-$type.$distro-$version ``` -Similarly to `make packages` this will build packages using a `docker` +Similarly to `make packages`, this will build packages using a `Docker\*` container. However, instead of building for your host, it will build them for the specified distro. For instance, `make cross-deb.ubuntu-18.04` will -build `deb` packages for `Ubuntu 18.04`, and `make cross-rpm.centos-8` will -build `rpm` packages for `CentOS 8` +build `deb` packages for `Ubuntu\* 18.04` and `make cross-rpm.centos-8` will +build `rpm` packages for `CentOS\* 8`. -## Post-Install Configuration +## Post-install configuration -The provided packages install `systemd` service files and sample configuration. -The easiest way to get up and running is to rename the sample configuration and -start CRI Resource Manager using systemd. You can do this using the following -commands: +The provided packages install `systemd` service files and a sample +configuration. The easiest way to get up and running is to rename the sample +configuration and start CRI Resource Manager using systemd. You can do this +using the following commands: ``` mv /etc/cri-resmgr/fallback.cfg.sample /etc/cri-resmgr/fallback.cfg systemctl start cri-resource-manager ``` -If you want, you can set up automatic starting of CRI Resource Manager +If you want, you can set CRI Resource Manager to automatically start when your system boots with this command: ``` @@ -88,7 +89,7 @@ passed to CRI Resource Manager upon startup.
You can change these by editing this file and then restarting CRI Resource Manager, like this: ``` -# On debian-based systems edit the defaults like this: +# On Debian\*-based systems edit the defaults like this: ${EDITOR:-vi} /etc/default/cri-resource-manager # On rpm-based systems edit the defaults like this: ${EDITOR:-vi} /etc/sysconfig/cri-resource-manager diff --git a/docs/introduction.md b/docs/introduction.md index 60dc0cfcf..a79e2ed53 100644 --- a/docs/introduction.md +++ b/docs/introduction.md @@ -6,15 +6,15 @@ dockershim+docker), relaying requests and responses back and forth. The main purpose of the proxy is to apply hardware-aware resource allocation policies to the containers running in the system. -Policies are applied by either modifying a request before forwarding it, or +Policies are applied by either modifying a request before forwarding it or by performing extra actions related to the request during its processing and proxying. There are several policies available, each with a different set of goals in mind and implementing different hardware allocation strategies. The details of whether and how a CRI request is altered or if extra actions are -performed depend on which policy is active in CRI Resource Manager, and how +performed depend on which policy is active in CRI Resource Manager and how that policy is configured. The current goal for the CRI Resource Manager is to prototype and experiment -with new Kubernetes container placement policies. The existing policies are +with new Kubernetes\* container placement policies. The existing policies are written with this in mind and the intended setup is for the Resource Manager to only act as a proxy for the Kubernetes Node Agent, kubelet. 
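The Debian-versus-rpm defaults-file split described in the patched installation docs above can be captured in a tiny helper. This is a hedged sketch: `defaults_file` and its `debian`/`rpm` argument convention are hypothetical names for illustration, not part of the project.

```shell
# Hedged sketch: return the file that holds the startup options for
# cri-resource-manager, following the Debian/rpm split in the docs.
# The function name and its argument convention are assumptions.
defaults_file() {
    case "$1" in
        debian) echo /etc/default/cri-resource-manager ;;
        *)      echo /etc/sysconfig/cri-resource-manager ;;
    esac
}

# After editing the selected file, restart the service as the docs say:
# systemctl restart cri-resource-manager
```

The non-Debian branch deliberately falls through to the `/etc/sysconfig` path, matching the docs' treatment of rpm-based systems as the other case.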
diff --git a/docs/node-agent.md b/docs/node-agent.md index 39d70c1fc..a13732848 100644 --- a/docs/node-agent.md +++ b/docs/node-agent.md @@ -1,34 +1,36 @@ # Node Agent CRI Resource Manager can be configured dynamically using the CRI Resource -Manager Node Agent and Kubernetes ConfigMaps. The agent can be build using -the [provided Dockerfile](/cmd/cri-resmgr-agent/Dockerfile). It can be deployed -as a `DaemonSet` in the cluster using the [provided deployment file](/cmd/cri-resmgr-agent/agent-deployment.yaml). +Manager Node Agent and Kubernetes\* ConfigMaps. The agent can be built using +the [provided Dockerfile](/cmd/cri-resmgr-agent/Dockerfile). It can be +deployed as a `DaemonSet` in the cluster using the +[provided deployment file](/cmd/cri-resmgr-agent/agent-deployment.yaml). -To run the agent manually or as a `systemd` service, set the environment variable -`NODE_NAME` to the name of the cluster node the agent is running on. If necessary -pass it the credentials for accessing the cluster using the the `-kubeconfig ` -command line option. +To run the agent manually or as a `systemd` service, set the environment +variable `NODE_NAME` to the name of the cluster node the agent is running +on. If necessary, pass it the credentials for accessing the cluster using +the `-kubeconfig ` command line option. -The agent monitors two ConfigMaps for the node, a primary node-specific one, and -a secondary group-specific or default one, depending on whether the node belongs -to a configuration group. The node-specific ConfigMap always takes precedence over -the others. +The agent monitors two ConfigMaps for the node, a primary node-specific one +and a secondary group-specific or default one, depending on whether the node +belongs to a configuration group. The node-specific ConfigMap always takes +precedence over the others. The names of these ConfigMaps are 1. `cri-resmgr-config.node.$NODE_NAME`: primary, node-specific configuration -2.
`cri-resmgr-config.group.$GROUP_NAME`: secondary group-specific node configuration -3. `cri-resmgr-config.default`: secondary: secondary default node configuration +2. `cri-resmgr-config.group.$GROUP_NAME`: secondary group-specific node + configuration +3. `cri-resmgr-config.default`: secondary default node + configuration You can assign a node to a configuration group by setting the `cri-resource-manager.intel.com/group` label on the node to the name of -the configuration group. You can remove a node from its group by deleting the node -group label. - -There is a [sample ConfigMap spec](/sample-configs/cri-resmgr-configmap.example.yaml) -that contains anode-specific, a group-specific, and a default ConfigMap examples. -See [any available policy-specific documentation](policy/index.rst) for more information on the -policy configurations. - - +the configuration group. You can remove a node from its group by deleting +the node group label. + +There is a +[sample ConfigMap spec](/sample-configs/cri-resmgr-configmap.example.yaml) +that contains a node-specific, a group-specific, and a default ConfigMap +example. See [any available policy-specific documentation](policy/index.rst) +for more information on the policy configurations. diff --git a/docs/quick-start.md b/docs/quick-start.md index 7eee80b61..b0c61c187 100644 --- a/docs/quick-start.md +++ b/docs/quick-start.md @@ -1,7 +1,7 @@ # Quick-start -The following describes the Minimal steps to get started with CRI Resource -Manager. +The following describes the minimum number of steps to get started with CRI +Resource Manager. ## Pre-requisites @@ -12,9 +12,9 @@ Manager. First, install and setup cri-resource-manager.
-### Install Package +### Install package -#### CentOS, Fedora and SUSE +#### CentOS\*, Fedora\*, and SUSE\* ``` CRIRM_VERSION=`curl -s "https://api.github.com/repos/intel/cri-resource-manager/releases/latest" | \ @@ -23,7 +23,7 @@ source /etc/os-release sudo rpm -Uvh https://github.com/intel/cri-resource-manager/releases/download/v${CRIRM_VERSION}/cri-resource-manager-${CRIRM_VERSION}-0.x86_64.${ID}-${VERSION_ID}.rpm ``` -#### Ubuntu and Debian +#### Ubuntu\* and Debian\* ``` CRIRM_VERSION=`curl -s "https://api.github.com/repos/intel/cri-resource-manager/releases/latest" | \ jq .tag_name | tr -d '"v'` @@ -32,7 +32,7 @@ pkg=cri-resource-manager_${CRIRM_VERSION}_amd64.${ID}-${VERSION_ID}.deb; curl -L ``` -### Setup and Verify +### Setup and verify Create configuration and start cri-resource-manager ``` @@ -50,13 +50,13 @@ systemctl status cri-resource-manager Next, you need to configure kubelet to use cri-resource-manager as its container runtime endpoint. -### Existing Cluster +### Existing cluster When integrating into an existing cluster you need to change kubelet to use cri-resmgr instead of the existing container runtime (expecting containerd here). -#### CentOS, Fedora and SUSE +#### CentOS, Fedora, and SUSE ``` sudo sed '/KUBELET_EXTRA_ARGS/ s!$! --container-runtime-endpoint=/var/run/cri-resmgr/cri-resmgr.sock!' -i /etc/sysconfig/kubelet sudo systemctl restart kubelet @@ -70,8 +70,9 @@ sudo systemctl restart kubelet ### New cluster -When in the process of setting up a new cluster you simply point kubelet to use -the cri-resmgr cri sockets on cluster node setup time. E.g. with kubeadm: +When in the process of setting up a new cluster you simply point the kubelet +to use the cri-resmgr cri sockets on cluster node setup time. Here's an +example with kubeadm: ``` kubeadm join --cri-socket /var/run/cri-resmgr/cri-resmgr.sock \ ...
@@ -82,8 +83,11 @@ kubeadm join --cri-socket /var/run/cri-resmgr/cri-resmgr.sock \ Congratulations, you now have cri-resource-manager running on your system and applying policies to container resource allocations. Next, you could see: -- [Installation](installation.md) for more installation options and detailed installation instructions +- [Installation](installation.md) for more installation options and + detailed installation instructions - [Setup](setup.md) for details on setup and usage -- [Node Agent](node-agent.md) for seting up cri-resmgr-agent for dynamic configuration and more +- [Node Agent](node-agent.md) for setting up cri-resmgr-agent for dynamic + configuration and more - [Webhook](webhook.md) for setting up our resource-annotating webhook -- [Kata support](setup.md#kata-containers) for setting up CRI-RM with Kata containers +- [Support for Kata Containers\*](setup.md#kata-containers) for setting up + CRI-RM with Kata Containers diff --git a/docs/security.md b/docs/security.md index da1465e23..74e4a096f 100644 --- a/docs/security.md +++ b/docs/security.md @@ -1,3 +1,4 @@ # Reporting a Potential Security Vulnerability -Please visit [intel.com/security](https://intel.com/security) for reporting security issues. +Please visit [intel.com/security](https://intel.com/security) to report +security issues. diff --git a/docs/setup.md b/docs/setup.md index d5baceccb..002f71790 100644 --- a/docs/setup.md +++ b/docs/setup.md @@ -1,8 +1,8 @@ # Setup and Usage If you want to give CRI Resource Manager a try, here is the list of things -you need to do, assuming you already have a Kubernetes cluster up and running, -using either `containerd` or `cri-o` as the runtime. +you need to do, assuming you already have a Kubernetes\* cluster up and +running, using either `containerd` or `cri-o` as the runtime. 0. [Install](installation.md) CRI Resource Manager. 1. Set up kubelet to use CRI Resource Manager as the runtime.
@@ -15,9 +15,9 @@ For kubelet you do this by altering its command line options like this: --container-runtime-endpoint=unix:///var/run/cri-resmgr/cri-resmgr.sock ``` -For CRI Resource Manager, you need to provide a configuration file, and also a -socket path if you don't use `containerd` or you run it with a different socket -path. +For CRI Resource Manager, you need to provide a configuration file, and also +a socket path if you don't use `containerd` or you run it with a different +socket path. ``` # for containerd with default socket path @@ -26,27 +26,29 @@ path. cri-resmgr --force-config --runtime-socket unix:///var/run/crio/crio.sock ``` -The choice of policy to use along with any potential parameters specific to that -policy are taken from the configuration file. You can take a look at the -[sample configurations](/sample-configs) for some minimal/trivial examples. For instance, -you can use [sample-configs/topology-aware-policy.cfg](/sample-configs/topology-aware-policy.cfg) -as `` to activate the topology aware policy with memory tiering support. +The choice of policy to use, along with any potential parameters specific to +that policy, is taken from the configuration file. You can take a look at the +[sample configurations](/sample-configs) for some minimal/trivial examples. +For instance, you can use +[sample-configs/topology-aware-policy.cfg](/sample-configs/topology-aware-policy.cfg) +as `` to activate the topology aware policy with memory +tiering support. -**NOTE**: Currently the available policies are work in progress. +**NOTE**: Currently, the available policies are a work in progress. -### Setting Up kubelet To Use CRI Resource Manager as the Runtime +### Setting up kubelet to use CRI Resource Manager as the runtime -To let CRI Resource Manager act as a proxy between kubelet and the CRI runtime -you need to configure kubelet to connect to CRI Resource Manager instead of -the runtime.
You do this by passing extra command line options to kubelet like -this: +To let CRI Resource Manager act as a proxy between kubelet and the CRI +runtime, you need to configure kubelet to connect to CRI Resource Manager +instead of the runtime. You do this by passing extra command line options to +kubelet as shown below: ``` kubelet --container-runtime=remote \ --container-runtime-endpoint=unix:///var/run/cri-resmgr/cri-resmgr.sock ``` -## Setting Up CRI Resource Manager +## Setting up CRI Resource Manager Setting up CRI Resource Manager involves pointing it to your runtime and providing it with a configuration. Pointing to the runtime is done using @@ -64,33 +66,34 @@ latter is a bit more involved but it allows you to - manage policy configuration for your cluster as a single source, and - dynamically update that configuration -### Using a Local Configuration From a File +### Using a local configuration from a file -This is the easiest way to run CRI Resource Manager for development or testing. -You can do it with the following command: +This is the easiest way to run CRI Resource Manager for development or +testing. You can do it with the following command: ``` cri-resmgr --force-config --runtime-socket ``` -When started this way CRI Resource Manager reads its configuration from the +When started this way, CRI Resource Manager reads its configuration from the given file. It does not fetch external configuration from the node agent and also disables the config interface for receiving configuration updates. ### Using CRI Resource Manager Agent and a ConfigMap -This setup requires an extra component, the [CRI Resource Manager Node Agent][agent], +This setup requires an extra component, the +[CRI Resource Manager Node Agent][agent], to monitor and fetch configuration from the ConfigMap and pass it on to CRI -Resource Manager. By default CRI Resource Manager will automatically try to +Resource Manager. 
By default, CRI Resource Manager automatically tries to use the agent to acquire configuration, unless you override this by forcing a static local configuration using the `--force-config ` option. When using the agent, it is also possible to provide an initial fallback for -configuration using the `--fallback-config `. This file will be -use before the very first configuration is successfully acquired from the +configuration using the `--fallback-config `. This file is +used before the very first configuration is successfully acquired from the agent. Whenever a new configuration is acquired from the agent and successfully -taken into use, this configuration is stored in the cache and will become +taken into use, this configuration is stored in the cache and becomes the default configuration to take into use the next time CRI Resource Manager is restarted (unless that time the --force-config option is used). While CRI Resource Manager is shut down, any cached configuration can be @@ -99,16 +102,16 @@ cleared from the cache using the --reset-config command line option. See the [Node Agent][agent] about how to set up and configure the agent. -### Changing the Active Policy +### Changing the active policy -Currently CRI Resource Manager will disable changing the active policy using -the [agent][agent]. That is, once the active policy is recorded in the cache, any -configuration received through the agent that requests a different policy -will be rejected. This limitation will be removed in a future version of +Currently, CRI Resource Manager disables changing the active policy using +the [agent][agent]. That is, once the active policy is recorded in the cache, +any configuration received through the agent that requests a different policy +is rejected. This limitation will be removed in a future version of CRI Resource Manager. -However, by default CRI Resource Manager will allow changing policies during -its startup phase. 
If you want to disable this you can pass the command line +However, by default CRI Resource Manager allows you to change policies during +its startup phase. If you want to disable this, you can pass the command line option `--disable-policy-switch` to CRI Resource Manager. If you run CRI Resource Manager with disabled policy switching, you can still @@ -122,42 +125,43 @@ option `--reset-policy`. The whole sequence of switching policies this way is - start cri-resmgr (`systemctl start cri-resource-manager`) -### Container Adjustments +### Container adjustments -When the [agent][agent] is in use, it is also possible to `adjust` container `resource -assignments` externally, using dedicated `Adjustment` `Custom Resources` in -the `adjustments.criresmgr.intel.com` group. You can use the -[provided schema](/pkg/apis/resmgr/v1alpha1/adjustment-schema.yaml) to define -the `Adjustment` resource. Then you can copy and modify the -[sample adjustment CR](/sample-configs/external-adjustment.yaml) as a starting -point to test some overrides. +When the [agent][agent] is in use, it is also possible to `adjust` container +`resource assignments` externally, using dedicated `Adjustment` +`Custom Resources` in the `adjustments.criresmgr.intel.com` group. You can +use the [provided schema](/pkg/apis/resmgr/v1alpha1/adjustment-schema.yaml) +to define the `Adjustment` resource. Then you can copy and modify the +[sample adjustment CR](/sample-configs/external-adjustment.yaml) as a +starting point to test some overrides. -An `Adjustment` consists of a +An `Adjustment` consists of the following: - `scope`: - - the nodes and containers to which the adjustment applies to + - the nodes and containers to which the adjustment applies - adjustment data: - updated native/compute resources (`cpu`/`memory` `requests` and `limits`) - updated `RDT` and/or `Block I/O` class - updated top tier (practically now DRAM) memory limit -All adjustment data is optional. 
An adjustment can choose to set any or all of -them as necessary. The current handling of adjustment update updates the resource -assignments of containers, marks all existing containers as having pending changes -in all controller domains, then triggers a rebalancing in the active policy. This -will cause all containers to be updated. +All adjustment data is optional. An adjustment can choose to set any or all +of them as necessary. Handling an adjustment update currently updates the +resource assignments of containers, marks all existing containers as having +pending changes in all controller domains, and then triggers a rebalancing in +the active policy. This causes all containers to be updated. -The scope defines which containers on what nodes the adjustment applies to. Nodes -are currently matched/picked by name, but a trailing wildcard (`*`) is allowed and -matches all nodes with the given prefix in their names. +The scope defines to which containers on what nodes the adjustment applies. +Nodes are currently matched/picked by name, but a trailing wildcard (`*`) is +allowed and matches all nodes with the given prefix in their names. -Containers are matched by expressions. These are exactly the same as the expressions -for defining [affinity scopes](policy/container-affinity.md). A single adjustment can -specify multipe node/container match pairs. An adjustment will apply to all containers -in its scope. If an adjustment/update results in conflicts for some container, that is -at least one container is in the scope of multiple adjustments, the adjustment is -rejected and the whole update ignored. +Containers are matched by expressions. These are exactly the same as the +expressions for defining [affinity scopes](policy/container-affinity.md). A +single adjustment can specify multiple node/container match pairs. An +adjustment applies to all containers in its scope.
If an adjustment/update +results in conflicts for some container, that is, at least one container is +in the scope of multiple adjustments, the adjustment is rejected and the +whole update is ignored. -#### Commands for Declaring, Creating, Deleting, and Examining Adjustments +#### Commands for declaring, creating, deleting, and examining adjustments You can declare the custom resource for adjustments with this command: @@ -171,9 +175,9 @@ You can then add adjustments with a command like this: kubectl apply -f sample-configs/external-adjustment.yaml ``` -You can list existing adjustments with the following command. Use the right +You can list existing adjustments with the following command. Use the correct `-n namespace` option according to the namespace you use for the agent, for -the configuration and in your adjustment specifications. +the configuration, and in your adjustment specifications. ``` kubectl get adjustments.criresmgr.intel.com -n kube-system @@ -186,7 +190,7 @@ kubectl describe adjustments external-adjustment -n kube-system kubectl get adjustments.criresmgr.intel.com/ -n kube-system -oyaml ``` -Or you can examine the contents of all adjustments like this: +Or you can examine the contents of all adjustments using this command: ``` kubectl get adjustments.criresmgr.intel.com -n kube-system -oyaml @@ -199,9 +203,10 @@ kubectl delete -f sample-configs/external-adjustment.yaml kubectl delete adjustments.criresmgr.intel.com/ -n kube-system ``` -The status of adjustment updates is propagated back to the `Adjustment` `Custom Resources`, -more specifically into their `Status` fields. With the help of `jq`, you can easily -examine the status of external adjustments using a command like this: +The status of adjustment updates is propagated back to the `Adjustment` +`Custom Resources`, more specifically into their `Status` fields.
With the +help of `jq`, you can easily examine the status of external adjustments +using a command like this: ``` kli@r640-1:~> kubectl get -n kube-system adjustments.criresmgr.intel.com -ojson | jq '.items[].status' @@ -221,11 +226,12 @@ kli@r640-1:~> kubectl get -n kube-system adjustments.criresmgr.intel.com -ojson } ``` -The above response is what you get for adjustments that applied without conflicts or -errors. You can see here that only node *r640-1* is in the scope of both of your -existing adjustments and those applied without errors. +The above response is what you get for adjustments applied without conflicts +or errors. You can see here that only node *r640-1* is in the scope of both +of your existing adjustments and those applied without errors. -If your adjustments resulted in errors, the output will look something like this: +If your adjustments resulted in errors, the output will look something like +this: ``` klitkey1@r640-1:~> kubectl get -n kube-system adjustments.criresmgr.intel.com -ojson | jq '.items[].status' @@ -249,13 +255,14 @@ klitkey1@r640-1:~> kubectl get -n kube-system adjustments.criresmgr.intel.com -o } ``` -Above you can see that on node *r640-1* the container with `ID` -*b71a93523e58cb4ba0310aa225b2e2a329cef895ca4b96fcd9d12b375337ea35*, or *my-container* of -*my-pod-r640-1*, had a conflict. Moreover you can see that the reason of the conflict is -that the container is in the scope of both *adjustment-1* and *adjustment-2*. +In the sample above, you can see that on node *r640-1* the container with +`ID` *b71a93523e58cb4ba0310aa225b2e2a329cef895ca4b96fcd9d12b375337ea35*, or +*my-container* of *my-pod-r640-1*, had a conflict. Moreover, you can see that +the reason for the conflict is that the container is in the scope of both +*adjustment-1* and *adjustment-2*. -You can now fix those adjustments to resolve/remove the conflict then reapply the -adjustments, and then verify that the conflicts are gone.
+You can now fix those adjustments to resolve/remove the conflict, then +reapply the adjustments, and finally verify that the conflicts are gone. ``` kli@r640-1:~> $EDITOR adjustment-1.yaml adjustment-2.yaml @@ -278,27 +285,34 @@ kli@r640-1:~> kubectl get -n kube-system adjustments.criresmgr.intel.com -ojson ``` -## Using CRI Resource Manager as a Message Dumper +## Using CRI Resource Manager as a message dumper -You can use CRI Resource Manager to simply inspect all proxied CRI requests and -responses without applying any policy. Run CRI Resource Manager with the +You can use CRI Resource Manager to simply inspect all proxied CRI requests +and responses without applying any policy. Run CRI Resource Manager with the provided [sample configuration](/sample-configs/cri-full-message-dump.cfg) for doing this. ## Kata Containers -[Kata Containers](https://katacontainers.io/) is an open source container runtime, -building lightweight virtual machines that seamlessly plug into the containers ecosystem. +[Kata Containers](https://katacontainers.io/) is an open source container +runtime, building lightweight virtual machines that seamlessly plug into the +containers ecosystem. -In order to enable Kata container in a Kubernetes-CRI-RM stack, both Kubernetes -and the Container Runtime need to be aware of the new runtime environment: +In order to enable Kata Containers in a Kubernetes-CRI-RM stack, both +Kubernetes and the Container Runtime need to be aware of the new runtime +environment: - * The Container Runtime can only be CRI-O or containerd, and will need to + * The Container Runtime can only be CRI-O or containerd, and needs to have the runtimes enabled in their configuration files. 
* Kubernetes will also have to be made aware of the CRI-O/containerd runtimes via a "RuntimeClass" [resource](https://kubernetes.io/docs/concepts/containers/runtime-class/) + * Kubernetes must be made aware of the CRI-O/containerd runtimes via a + "RuntimeClass" + [resource](https://kubernetes.io/docs/concepts/containers/runtime-class/) -After these prerequisites are satisfied, the configuration file for the target Kata Container, will have to have the flag "SandboxCgroupOnly" set to true. As of Kata 2.0 this is the only way Kata containers can work with the Kubernetes cgroup naming conventions. +After these prerequisites are satisfied, the configuration file for the +target Kata Container must have the flag "SandboxCgroupOnly" set to true. +As of Kata 2.0, this is the only way Kata Containers can work with the +Kubernetes cgroup naming conventions. ```toml ... @@ -310,11 +324,12 @@ After these prerequisites are satisfied, the configuration file for the target # See: https://godoc.org/github.com/kata-containers/runtime/virtcontainers#ContainerType sandbox_cgroup_only=true ... - ``` + ``` ### Reference -If you have a pre-existing Kubernetes cluster, for an easy deployement follow this [document](https://github.com/kata-containers/packaging/blob/master/kata-deploy/README.md#kubernetes-quick-start). +If you have a pre-existing Kubernetes cluster, for an easy deployment, +follow this [document](https://github.com/kata-containers/packaging/blob/master/kata-deploy/README.md#kubernetes-quick-start). Starting from scratch: @@ -326,20 +341,22 @@ Starting from scratch: * [Cgroup and Kata containers](https://github.com/kata-containers/kata-containers/blob/stable-2.0.0/docs/design/host-cgroups.md) -## Using Docker as the Runtime +## Using Docker\* as the runtime -If you must use `docker` as the runtime then the proxying setup is slightly more -complex. Docker does not natively support the CRI API.
Normally kubelet runs an
-internal protocol translator, `dockershim` to translate between CRI and the
-native docker API. To let CRI Resource Manager effectively proxy between kubelet
-and `docker` it needs to actually proxy between kubelet and `dockershim`. For this to
-be possible, you need to run two instances of kubelet:
+If you must use `docker` as the runtime, then the proxying setup is slightly
+more complex. Docker does not natively support the CRI API. Normally kubelet
+runs an internal protocol translator, `dockershim`, to translate between CRI
+and the native docker API. To let CRI Resource Manager effectively proxy
+between kubelet and `docker`, it needs to actually proxy between kubelet and
+`dockershim`. For this to be possible, you need to run two instances of
+kubelet:

- 1. real instance talking to CRI Resource Manager/CRI
- 2. dockershim instance, acting as a CRI-docker protocol translator
+ 1. The real instance, talking to CRI Resource Manager/CRI
+ 2. The dockershim instance, acting as a CRI-docker protocol translator

-The real kubelet instance you run as you would normally with any other real CRI
-runtime, but you specify the dockershim socket for the CRI Image Service:
+Run the real kubelet instance as you would normally with any other real CRI
+runtime, but specify the dockershim socket for the CRI Image Service, as
+shown below:

```
kubelet --container-runtime=remote \
@@ -347,15 +364,15 @@ runtime, but you specify the dockershim socket for the CRI Image Service:
  --image-service-endpoint=unix:///var/run/dockershim.sock
```

-The dockershim instance you run like this, picking the cgroupfs driver according
-to your real kubelet instance's configuration:
+Run the dockershim instance as shown below, picking the cgroup driver
+according to the configuration of the real kubelet instance:

```
kubelet --experimental-dockershim --port 11250 --cgroup-driver {systemd|cgroupfs}
```

-## Logging and Debugging
+## Logging and debugging

You can control logging 
with the klog command line options or by setting the corresponding environment variables. You can get the name of the environment @@ -368,7 +385,7 @@ Additionally, the `LOGGER_DEBUG` environment variable controls debug logs. These are globally disabled by default. You can turn on full debugging by setting `LOGGER_DEBUG='*'`. -When using environment variables be careful what configuration you pass to +When using environment variables, be careful which configuration you pass to CRI Resource Manager using a file or ConfigMap. The environment is treated as default configuration but a file or a ConfigMap has higher precedence. If something is configured in both, the environment will only be in effect @@ -379,8 +396,8 @@ again. For debug logs, the settings from the configuration are applied in addition to any settings in the environment. That said, if you turn something on in -the environment but off in the configuration, it will be eventually turned -off. +the environment but off in the configuration, it will be turned off +eventually. [agent]: node-agent.md diff --git a/docs/webhook.md b/docs/webhook.md index 46bba82bf..3a84ee4f7 100644 --- a/docs/webhook.md +++ b/docs/webhook.md @@ -1,23 +1,29 @@ # Webhook -By default CRI Resource Manager does not see the original container *resource -requirements* specified in the *Pod Spec*. It tries to calculate these for `cpu` -and `memory` *compute resource*s using the related parameters present in the -CRI container creation request. The resulting estimates are normally accurate -for `cpu`, and also for `memory` `limits`. However, it is not possible to use -these parameters to estimate `memory` `request`s or any *extended resource*s. +By default, CRI Resource Manager does not see the original container +*resource requirements* specified in the *Pod Spec*. It tries to calculate +these for `cpu` and `memory` *compute resource*s using the related parameters +present in the CRI container creation request. 
The resulting estimates are
+normally accurate for `cpu`, and also for `memory` `limits`. However, it is
+not possible to use these parameters to estimate `memory` `request`s or any
+*extended resource*s.

-If you want to make sure that CRI Resource Manager uses the origin *Pod Spec*
-*resource requirement*s, you need to duplicate these as *annotations* on the Pod.
-This is necessary if you plan using or writing a policy which needs *extended
-resource*s.
+If you want to make sure that CRI Resource Manager uses the original
+*Pod Spec* *resource requirement*s, you need to duplicate these as
+*annotations* on the Pod. This is necessary if you plan to use or write a
+policy which needs *extended resource*s.

-This process can be fully automated using the [CRI Resource Manager Annotating
-Webhook](/cmd/cri-resmgr-webhook). Once you built the docker image for it using
-the [provided Dockerfile](/cmd/cri-resmgr-webhook/Dockerfile) and published it,
+This process can be fully automated using the
+[CRI Resource Manager Annotating Webhook](/cmd/cri-resmgr-webhook). Once you
+have built the Docker\* image for it using the
+[provided Dockerfile](/cmd/cri-resmgr-webhook/Dockerfile) and published it,
 you can set up the webhook as follows:

-- Fill in the `IMAGE_PLACEHOLDER` in [webhook-deployment.yaml](/cmd/cri-resmgr-webhook/webhook-deployment.yaml) to match the image.
-- Create a `cri-resmgr-webhook-secret` that carries a key and a certificate to `cri-resmgr-webhook`. You can create a key, a self-signed certificate and the secret that holds them with commands:
+- Fill in the `IMAGE_PLACEHOLDER` in
+  [webhook-deployment.yaml](/cmd/cri-resmgr-webhook/webhook-deployment.yaml)
+  to match the image.
+- Create a `cri-resmgr-webhook-secret` that carries a key and a certificate
+  to `cri-resmgr-webhook`. 
You can create a key, a self-signed certificate + and the secret that holds them with the following commands: ```bash SVC=cri-resmgr-webhook NS=cri-resmgr openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \ @@ -39,9 +45,10 @@ you can set up the webhook as follows: kubectl create namespace $NS kubectl create -f cmd/cri-resmgr-webhook/webhook-secret.yaml ``` -- Fill in the `CA_BUNDLE_PLACEHOLDER` in [mutating-webhook-config.yaml](/cmd/cri-resmgr-webhook/mutating-webhook-config.yaml). +- Fill in the `CA_BUNDLE_PLACEHOLDER` in + [mutating-webhook-config.yaml](/cmd/cri-resmgr-webhook/mutating-webhook-config.yaml). If you created the key and the certificate with the commands above, - you can do this with command: + you can do this with the following command: ```bash sed -e "s/CA_BUNDLE_PLACEHOLDER/$(base64 -w0 < cmd/cri-resmgr-webhook/server-crt.pem)/" \ -i cmd/cri-resmgr-webhook/mutating-webhook-config.yaml
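
As a sanity check on the `CA_BUNDLE_PLACEHOLDER` substitution step of this
patch, the same `base64`/`sed` mechanics can be exercised against a dummy
certificate file. This is only a sketch using hypothetical `/tmp` paths, not
the repository's real `server-crt.pem` or config file:

```shell
#!/bin/sh
# Sketch: exercise the CA_BUNDLE_PLACEHOLDER substitution with dummy data
# instead of the real webhook certificate (paths here are hypothetical).
printf 'dummy-cert-data' > /tmp/server-crt.pem
printf 'caBundle: CA_BUNDLE_PLACEHOLDER\n' > /tmp/mutating-webhook-config.yaml

# base64 -w0 emits one unwrapped line, which is what the YAML caBundle
# field expects; sed -i then substitutes it in place of the placeholder.
sed -e "s/CA_BUNDLE_PLACEHOLDER/$(base64 -w0 < /tmp/server-crt.pem)/" \
    -i /tmp/mutating-webhook-config.yaml

cat /tmp/mutating-webhook-config.yaml
# caBundle: ZHVtbXktY2VydC1kYXRh
```

Note that `base64 -w0` and suffix-less `sed -i` are GNU forms, matching the
commands the documentation itself uses; BSD/macOS variants differ.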