Commit

update
archlitchi committed May 24, 2024
1 parent ec4ff8c commit 3381d44
Showing 8 changed files with 112 additions and 91 deletions.
Original file line number Diff line number Diff line change
@@ -2,58 +2,4 @@
title: Offline Installation
---

This document describes how to use the `hack/remote-up-karmada.sh` script to install Karmada on
your clusters from the codebase.

## Select a way to expose karmada-apiserver

The `hack/remote-up-karmada.sh` will install `karmada-apiserver` and provide two ways to expose the server:

### 1. expose by `HostNetwork` type

By default, the `hack/remote-up-karmada.sh` will expose `karmada-apiserver` by `HostNetwork`.

No extra operations are needed with this type.

### 2. expose by service with `LoadBalancer` type

If you don't want to use `HostNetwork`, you can ask `hack/remote-up-karmada.sh` to expose `karmada-apiserver`
via a service of type `LoadBalancer`, which *requires your cluster to have a load balancer deployed*.
All you need to do is set an environment variable:
```bash
export LOAD_BALANCER=true
```

## Install
From the `root` directory of the `karmada` repo, install Karmada with the command:
```bash
hack/remote-up-karmada.sh <kubeconfig> <context_name>
```
- `kubeconfig` is the kubeconfig file of the cluster you want to install Karmada into
- `context_name` is the name of the context in `kubeconfig`

For example:
```bash
hack/remote-up-karmada.sh $HOME/.kube/config mycluster
```

If everything goes well, at the end of the script output you will see messages similar to the following:
```
------------------------------------------------------------------------------------------------------
█████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
█████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.
Kubeconfig for karmada in file: /root/.kube/karmada.config, so you can run:
export KUBECONFIG="/root/.kube/karmada.config"
Or use kubectl with --kubeconfig=/root/.kube/karmada.config
Please use 'kubectl config use-context karmada-apiserver' to switch the cluster of karmada control plane
And use 'kubectl config use-context your-host' for debugging karmada installation
```
TODO
52 changes: 20 additions & 32 deletions versioned_docs/version-v1.3.0/installation/online-installation.md
@@ -2,52 +2,40 @@
title: Online Installation from Helm (Recommended)
---

You can install `kubectl-karmada` plug-in in any of the following ways:
The recommended way to deploy HAMi is with Helm.

- Download from the release.
- Install using Krew.
- Build from source code.
## Add HAMi repo

## Prerequisites
You can add the HAMi chart repository with the following command:

### kubectl
`kubectl` is the Kubernetes command-line tool that lets you control Kubernetes clusters.
For installation instructions, see [installing kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).

## Download from the release

Karmada has provided the `kubectl-karmada` plug-in for download since v0.9.0. You can choose the plug-in version that fits your operating system from the [karmada release](https://github.com/karmada-io/karmada/releases) page.

Take v1.2.1 on linux-amd64 as an example:

```bash
wget https://github.com/karmada-io/karmada/releases/download/v1.2.1/kubectl-karmada-linux-amd64.tgz

tar -zxf kubectl-karmada-linux-amd64.tgz
```

```bash
helm repo add hami-charts https://project-hami.github.io/HAMi/
```

Next, move the `kubectl-karmada` executable file to a directory on your `PATH`; see [Installing kubectl plugins](https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/#installing-kubectl-plugins).
## Get your Kubernetes version

## Install using Krew
The Kubernetes version is needed for a proper installation. You can get it with the following command:

Krew is the plugin manager for the `kubectl` command-line tool.
```bash
kubectl version
```
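Only the server version matters here. As a minimal sketch of pulling the bare tag out of the output, assuming a typical `Server Version: v1.16.8` line (the sample string below is a hypothetical stand-in for your cluster's real output):

```shell
# Hypothetical sample of one line of `kubectl version` output;
# substitute the real output from your cluster.
sample='Server Version: v1.16.8'
# Strip the prefix to get the bare tag to pass as scheduler.kubeScheduler.imageTag.
server_version="${sample#Server Version: }"
echo "$server_version"
```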

[Install and set up](https://krew.sigs.k8s.io/docs/user-guide/setup/install/) Krew on your machine.
## Installation

Then install `kubectl-karmada` plug-in:
During installation, set the Kubernetes scheduler image version to match your Kubernetes server version. For instance, if your cluster's server version is 1.16.8, use the following command for deployment:

```bash
kubectl krew install karmada
```

```bash
helm install hami hami-charts/hami --set scheduler.kubeScheduler.imageTag=v1.16.8 -n kube-system
```

You can refer to [Quickstart of Krew](https://krew.sigs.k8s.io/docs/user-guide/quickstart/) for more information.
You can customize your installation by adjusting the [configs](../userguide/configure.md).
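For example, extra values can be passed at install time with additional `--set` flags. This is a hedged sketch: `devicePlugin.deviceSplitCount` is assumed here as an illustrative chart value, so check the config reference above for the authoritative list. The `echo` makes it a dry run, since a real install needs a live cluster; drop it to actually install.

```shell
# Dry-run sketch: print the helm command rather than executing it.
echo helm install hami hami-charts/hami \
  --set scheduler.kubeScheduler.imageTag=v1.16.8 \
  --set devicePlugin.deviceSplitCount=10 \
  -n kube-system
```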

## Build from source code
## Verify your installation

Clone the karmada repo and run the `make` command from the repository root:
You can verify your installation using the following command:

```bash
make kubectl-karmada
```

```bash
kubectl get pods -n kube-system
```

Next, move the `kubectl-karmada` executable file from the `_output` folder in the project root directory to a directory on your `PATH`.
If both the `hami-device-plugin` and `hami-scheduler` pods are in the Running state, your installation was successful.
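As a rough sketch, that check can be scripted. The sample output below is hypothetical; in practice you would pipe the real `kubectl get pods -n kube-system` output through the same filter:

```shell
# Hypothetical sample of `kubectl get pods -n kube-system` output,
# trimmed to the HAMi pods; substitute the real command's output.
sample='hami-device-plugin-x7k2q      1/1   Running   0   2m
hami-scheduler-6d5f9cbb-q8w   1/1   Running   0   2m'
# Exit non-zero if any pod's STATUS column (field 3) is not Running.
echo "$sample" | awk '$3 != "Running" { bad = 1 } END { exit bad }' \
  && echo "HAMi pods healthy"
```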
@@ -34,9 +34,11 @@ title: Enable cambricon MLU sharing

* Install the chart using Helm; see the 'enabling vGPU support in kubernetes' section [here](https://github.com/Project-HAMi/HAMi#enabling-vgpu-support-in-kubernetes)

* Tag MLU node with the following command
* Activate sMLU mode for each MLU on that node
```
kubectl label node {mlu-node} mlu=on
cnmon set -c 0 -smlu on
cnmon set -c 1 -smlu on
...
```
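With many cards, repeating the command per index gets tedious. A hedged sketch of a loop (it assumes `cnmon` is on the PATH, and `NUM_MLUS` is a placeholder for the card count on the node; the `echo` makes this a dry run, so remove it to actually enable sMLU mode):

```shell
NUM_MLUS=4   # placeholder: set to the number of MLUs on the node
for i in $(seq 0 $((NUM_MLUS - 1))); do
  # Dry run: prints the command for each card index; drop `echo` to apply.
  echo cnmon set -c "$i" -smlu on
done
```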

## Running MLU jobs
@@ -0,0 +1,36 @@
---
title: Allocate device core and memory resource
---

## Allocate device core and memory to container

To allocate a portion of a device's core and memory resources, you only need to assign `cambricon.com/mlu370.smlu.vmemory` and `cambricon.com/mlu370.smlu.vcore` along with the number of Cambricon MLUs requested in the container via `cambricon.com/vmlu`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: binpack-1
  labels:
    app: binpack-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: binpack-1
  template:
    metadata:
      labels:
        app: binpack-1
    spec:
      containers:
        - name: c-1
          image: ubuntu:18.04
          command: ["sleep"]
          args: ["100000"]
          resources:
            limits:
              cambricon.com/vmlu: "1"
              cambricon.com/mlu370.smlu.vmemory: "20"
              cambricon.com/mlu370.smlu.vcore: "10"
```
@@ -0,0 +1,34 @@
---
title: Allocate exclusive device
---

## Allocate exclusive device

To allocate a whole Cambricon device, you only need to assign `cambricon.com/vmlu`, without the other fields.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: binpack-1
  labels:
    app: binpack-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: binpack-1
  template:
    metadata:
      labels:
        app: binpack-1
    spec:
      containers:
        - name: c-1
          image: ubuntu:18.04
          command: ["sleep"]
          args: ["100000"]
          resources:
            limits:
              cambricon.com/vmlu: "1" # allocates a whole MLU
```
14 changes: 14 additions & 0 deletions versioned_docs/version-v1.3.0/userguide/Device-supported.md
@@ -0,0 +1,14 @@
---
title: Device supported by HAMi
---

The devices supported by HAMi are shown in the table below:

| Product | Manufacturer | Type |MemoryIsolation | CoreIsolation | MultiCard support |
|-------------|------------|-------------|-----------|---------------|-------------------|
| GPU | NVIDIA | All ||||
| MLU | Cambricon | 370, 590 ||||
| DCU | Hygon | Z100, Z100L ||||
| Ascend | Huawei | 910B ||||
| GPU | iluvatar | All ||||
| DPU | Teco | Checking | In progress | In progress ||
Empty file.
5 changes: 3 additions & 2 deletions versioned_sidebars/version-v1.3.0-sidebars.json
@@ -36,7 +36,7 @@
"label": "User Guide",
"items": [
"version-v1.3.0/userguide/configure",
"version-v1.3.0/userguide/support-devices",
"version-v1.3.0/userguide/Device-supported",
{
"type": "category",
"label": "Monitoring",
@@ -78,7 +78,8 @@
"type": "category",
"label": "Examples",
"items": [
"version-v1.3.0/userguide/monitoring/globalview"
"version-v1.3.0/userguide/Cambricon-device/examples/allocate-core-and-memory",
"version-v1.3.0/userguide/Cambricon-device/examples/allocate-exclusive"
]
}
]
