Revamp compute resources documentation #4770

Merged 1 commit on May 3, 2022
236 changes: 236 additions & 0 deletions docs/compute-resources.md
@@ -0,0 +1,236 @@
<!--
---
linkTitle: "LimitRange"
weight: 300
---
-->

# Compute Resources in Tekton

## Background: Resource Requirements in Kubernetes

Kubernetes allows users to specify CPU, memory, and ephemeral storage constraints
for [containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
Resource requests determine the resources reserved for a pod when it is scheduled,
and affect the likelihood of pod eviction. Resource limits constrain the maximum amount of
a resource a container can use. A container that exceeds its memory limit is killed,
and a container that exceeds its CPU limit is throttled.

A pod's [effective resource requests and limits](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#resources)
are the higher of:
- the sum of all app containers request/limit for a resource
- the effective init container request/limit for a resource

This formula exists because Kubernetes runs init containers sequentially and app containers
in parallel. (Kubernetes makes no distinction between app containers and sidecar containers;
the sidecar in the following example counts toward the app container totals.)

For example, consider a pod with the following containers:

| Container | CPU request | CPU limit |
| ------------------- | ----------- | --------- |
| init container 1 | 1 | 2 |
| init container 2 | 2 | 3 |
| app container 1 | 1 | 2 |
| app container 2 | 2 | 3 |
| sidecar container 1 | 3 | no limit |

The sum of all app container CPU requests is 6 (including the sidecar container), which is
greater than the maximum init container CPU request (2). Therefore, the pod's effective CPU
request will be 6.

Since the sidecar container has no CPU limit, this is treated as the highest CPU limit.
Therefore, the pod will have no effective CPU limit.
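
As a sketch of the table above, the pod spec might look like the following (the pod name, container names, and images are placeholders, not taken from a real workload):

```
apiVersion: v1
kind: Pod
metadata:
  name: effective-resources-example
spec:
  initContainers:
  - name: init-1
    image: busybox
    resources:
      requests:
        cpu: 1
      limits:
        cpu: 2
  - name: init-2
    image: busybox
    resources:
      requests:
        cpu: 2
      limits:
        cpu: 3
  containers:
  - name: app-1
    image: busybox
    resources:
      requests:
        cpu: 1
      limits:
        cpu: 2
  - name: app-2
    image: busybox
    resources:
      requests:
        cpu: 2
      limits:
        cpu: 3
  - name: sidecar-1
    image: busybox
    resources:
      requests:
        cpu: 3
    # no CPU limit is set, so the pod has no effective CPU limit
```

Kubernetes reserves 6 CPUs for this pod (the sum of the app and sidecar requests), even though the init containers never run concurrently with the app containers.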

## Task Resource Requirements

Tekton allows users to specify resource requirements of [`Steps`](./tasks.md#defining-steps),
which run sequentially. However, the pod's effective resource requirements are still the
sum of its containers' resource requirements. This means that resource requirements for
`Step` containers must be set as if the `Steps` run in parallel.

Tekton adjusts `Step` resource requirements to comply with [LimitRanges](#limitrange-support).
[ResourceQuotas](#resourcequota-support) are not currently supported.
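
For illustration, a minimal sketch of a `Task` with `Step` resource requirements (the task name, step name, and image are placeholders; the `resources` field on a `Step` follows the usual Kubernetes container schema):

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: resource-requirements-example
spec:
  steps:
  - name: build
    image: ubuntu
    script: echo "building..."
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 1
        memory: 512Mi
```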

## LimitRange Support

Kubernetes allows users to configure [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/),
which constrain compute resources of pods, containers, or PVCs running in the same namespace.

LimitRanges can:
- Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default requests/limits for compute resources in a namespace and automatically inject them into Containers at runtime.

Tekton applies the resource requirements specified by users directly to the containers
in a `Task's` pod, unless there is a LimitRange present in the namespace.
(Tekton doesn't allow users to configure init containers for a `Task`.)
Tekton supports LimitRange minimum, maximum, and default resource requirements for containers,
but does not support LimitRange ratios between requests and limits ([#4230](https://github.com/tektoncd/pipeline/issues/4230)).
LimitRange types other than "Container" are not supported.

### Requests

If a `Step` does not have requests defined, the resulting container's requests are the larger of:
- the LimitRange minimum resource requests
- the LimitRange default resource requests, divided among the app containers

If a `Step` has requests defined, the resulting container's requests are the larger of:
- the `Step's` requests
- the LimitRange minimum resource requests

### Limits

If a `Step` does not have limits defined, the resulting container's limits are the smaller of:
- the LimitRange maximum resource limits
- the LimitRange default resource limits

If a `Step` has limits defined, the resulting container's limits are the smaller of:
- the `Step's` limits
- the LimitRange maximum resource limits

### Examples

Consider the following LimitRange:

```
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-example
spec:
  limits:
  - default: # The default limits
      cpu: 2
    defaultRequest: # The default requests
      cpu: 1
    max: # The maximum limits
      cpu: 3
    min: # The minimum requests
      cpu: 300m
    type: Container
```

A `Task` with 2 `Steps` and no resources specified would result in a pod with the following containers:

| Container | CPU request | CPU limit |
| ------------ | ----------- | --------- |
| container 1 | 500m | 2 |
| container 2 | 500m | 2 |

Here, the default CPU request (1) was divided among the app containers (500m each), and this value was used
since it is greater than the minimum request specified by the LimitRange (300m).
The CPU limit is 2 for each container, as this is the default limit specified in the LimitRange.

Now, consider a `Task` with the following `Steps`:

| Step | CPU request | CPU limit |
| ------ | ----------- | --------- |
| step 1 | 200m | 2 |
| step 2 | 1 | 4 |
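
Expressed as a `Task` manifest, these `Steps` might be written as follows (the task name, step names, and images are placeholders):

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: limitrange-example-task
spec:
  steps:
  - name: step-1
    image: ubuntu
    script: echo "step 1"
    resources:
      requests:
        cpu: 200m
      limits:
        cpu: 2
  - name: step-2
    image: ubuntu
    script: echo "step 2"
    resources:
      requests:
        cpu: 1
      limits:
        cpu: 4
```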

The resulting pod would have the following containers:

| Container | CPU request | CPU limit |
| ------------ | ----------- | --------- |
| container 1 | 300m | 2 |
| container 2 | 1 | 3 |

Here, the first `Step's` request was less than the LimitRange minimum, so the output request is the minimum (300m).
The second `Step's` request is unchanged. The first `Step's` limit is less than the maximum, so it is unchanged,
while the second `Step's` limit is greater than the maximum, so the maximum (3) is used.

### Support for multiple LimitRanges

Tekton supports running `TaskRuns` in namespaces with multiple LimitRanges.
For a given resource, the minimum used is the largest of the LimitRanges' minimum values,
and the maximum used is the smallest of the LimitRanges' maximum values.
The default value used is the smallest of the LimitRanges' default values.
If the resulting default value is less than the resulting minimum value, the minimum value is used as the default.

It's possible for multiple LimitRanges to be defined which are not compatible with each other, preventing pods from being scheduled.

#### Example

Consider a namespace with the following LimitRanges defined:

```
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-1
spec:
  limits:
  - default: # The default limits
      cpu: 2
    defaultRequest: # The default requests
      cpu: 750m
    max: # The maximum limits
      cpu: 3
    min: # The minimum requests
      cpu: 500m
    type: Container
```

```
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-2
spec:
  limits:
  - default: # The default limits
      cpu: 1.5
    defaultRequest: # The default requests
      cpu: 1
    max: # The maximum limits
      cpu: 2.5
    min: # The minimum requests
      cpu: 300m
    type: Container
```

A namespace with limitrange-1 and limitrange-2 would be treated as if it contained only the following LimitRange:

```
apiVersion: v1
kind: LimitRange
metadata:
  name: aggregate-limitrange
spec:
  limits:
  - default: # The default limits
      cpu: 1.5
    defaultRequest: # The default requests
      cpu: 750m
    max: # The maximum limits
      cpu: 2.5
    min: # The minimum requests
      cpu: 300m
    type: Container
```

Here, the smallest of the "max" values becomes the aggregate "max" value, and likewise for "default" and "defaultRequest".
The largest of the "min" values becomes the aggregate "min" value.

## ResourceQuota Support

Kubernetes allows users to define [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/),
which restrict the maximum resource requests and limits of all pods running in a namespace.
`TaskRuns` can't currently be created in a namespace with ResourceQuotas
([#2933](https://github.com/tektoncd/pipeline/issues/2933)).

## References

- [LimitRange in k8s docs](https://kubernetes.io/docs/concepts/policy/limit-range/)
- [Configure default memory requests and limits for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)
- [Configure default CPU requests and limits for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)
- [Configure Minimum and Maximum CPU constraints for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)
- [Configure Minimum and Maximum Memory constraints for a Namespace](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)
- [Managing Resources for Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)
- [Kubernetes best practices: Resource requests and limits](https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits)
- [Restrict resource consumption with limit ranges](https://docs.openshift.com/container-platform/4.8/nodes/clusters/nodes-cluster-limit-ranges.html)
105 changes: 0 additions & 105 deletions docs/limitrange.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/pipelineruns.md
@@ -506,7 +506,7 @@ time from the invoked `Task`, Tekton will request the compute values for CPU, me
storage for each `Step` based on the [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/)
object(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.

For more information, see the [`LimitRange` support in Pipeline](./limitrange.md).
For more information, see the [`LimitRange` support in Pipeline](./compute-resources.md#limitrange-support).

### Configuring a failure timeout

2 changes: 1 addition & 1 deletion docs/taskruns.md
@@ -469,7 +469,7 @@ time from the invoked `Task`, Tekton will requests the compute values for CPU, m
storage for each `Step` based on the [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/)
object(s), if present. Any `Request` or `Limit` specified by the user (on `Task` for example) will be left unchanged.

For more information, see the [`LimitRange` support in Pipeline](./limitrange.md).
For more information, see the [`LimitRange` support in Pipeline](./compute-resources.md#limitrange-support).

### Configuring the failure timeout

5 changes: 4 additions & 1 deletion docs/tasks.md
@@ -165,7 +165,10 @@ The following requirements apply to each container image referenced in a `steps`
- Each container image runs to completion or until the first failure occurs.
- The CPU, memory, and ephemeral storage resource requests set on `Step`s
will be adjusted to comply with any [`LimitRange`](https://kubernetes.io/docs/concepts/policy/limit-range/)s
present in the `Namespace`. For more detail, see [LimitRange support in Pipeline](./limitrange.md).
present in the `Namespace`. In addition, Kubernetes determines a pod's effective resource
requests and limits by summing the requests and limits for all its containers, even
though Tekton runs `Steps` sequentially.
For more detail, see [Compute Resources in Tekton](./compute-resources.md).

Below is an example of setting the resource requests and limits for a step:
