
update docs #407

Merged: 2 commits, Dec 2, 2023
12 changes: 0 additions & 12 deletions FAQ.md

This file was deleted.

4 changes: 2 additions & 2 deletions README.md
@@ -16,7 +16,7 @@ database.

### install nebula operator

See [install/uninstall nebula operator](doc/user/install_guide.md) .
See [install/uninstall nebula operator](doc/user/operator_guide.md) .

### Create and destroy a nebula cluster

@@ -140,7 +140,7 @@ nebula-storaged-1 1/1 Running 0 10m
nebula-storaged-2 1/1 Running 0 10m
```

In addition, you can [Install Nebula Cluster with helm](doc/user/nebula_cluster_helm_guide.md).
In addition, you can [Install Nebula Cluster with helm](doc/user/nebula_cluster_guide.md).

### Upgrade a nebula cluster

2 changes: 1 addition & 1 deletion config/samples/local-pv-storage.yaml
@@ -88,7 +88,7 @@ spec:
serviceAccountName: local-pv-provisioner-sa
containers:
- name: local-pv-provisioner
image: reg.vesoft-inc.com/cloud-dev/local-pv-provisioner
image: vesoft/local-pv-provisioner
imagePullPolicy: IfNotPresent
command:
- local-pv-provisioner
2 changes: 1 addition & 1 deletion doc/user/add-ons.md
@@ -1,4 +1,4 @@
# Installing Add-ons
## Installing Add-ons

**Caution:**
This section links to third party projects that provide functionality required by nebula-operator. The nebula-operator
2 changes: 1 addition & 1 deletion doc/user/br_guide.md
@@ -1,4 +1,4 @@
# Backup & Restore
## Backup & Restore

### Requirements

4 changes: 2 additions & 2 deletions doc/user/client_service.md
@@ -1,4 +1,4 @@
# nebula client service
## nebula client service

For every nebula cluster created, the nebula operator will create a graphd service in the same namespace with the name `<cluster-name>-graphd-svc`.

@@ -15,7 +15,7 @@ The client service is of type `ClusterIP` and accessible only from within the Ku
For example, access the service from a pod in the cluster:

```shell script
$ kubectl run --rm -ti --image vesoft/nebula-console:v3.5.0 --restart=Never -- /bin/sh
$ kubectl run --rm -ti --image vesoft/nebula-console:v3.6.0 --restart=Never -- /bin/sh
/ # nebula-console -u user -p password --address=nebula-graphd-svc --port=9669
2021/04/12 08:16:30 [INFO] connection pool is initialized successfully

6 changes: 2 additions & 4 deletions doc/user/custom_config.md
@@ -1,4 +1,4 @@
# Configure custom flags
## Configure custom flags

### Apply custom flags

@@ -27,7 +27,7 @@ spec:
memory: "1Gi"
replicas: 1
image: vesoft/nebula-graphd
version: v3.5.0
version: v3.6.0
storageClaim:
resources:
requests:
@@ -73,5 +73,3 @@ This a dynamic runtime flags table, the pod rolling update will not be triggered
| `rebuild_index_part_rate_limit` | The rate limit in bytes when leader synchronizes rebuilding index | `4194304` |
| `prioritize_intra_zone_reading` | Prioritize to send read queries to storaged in the same zone | `false` |
| `stick_to_intra_zone_on_failure` | Stick to intra zone routing if unable to find the storaged hosting the requested part in the same zone. | `false` |
| `sync_meta_when_use_space` | Whether to sync session to meta when use space | `false` |
| `validate_session_timestamp` | whether validate the timestamp when update session | `true` |
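
As a hedged illustration of how a flag from this table would be applied, the snippet below reuses the component-level `config` map used by nebula-operator (a `config` map is shown for metad in the intra_zone example later in this PR); the assumption that storaged accepts the same map is mine, and the value is only an example:

```yaml
spec:
  storaged:
    config:
      # Dynamic runtime flag taken from the table above; updating it should not
      # trigger a rolling update of the storaged Pods.
      rebuild_index_part_rate_limit: "8388608"
```
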
4 changes: 2 additions & 2 deletions doc/user/intra_zone.md
@@ -38,7 +38,7 @@ spec:
storageClassName: local-path
replicas: 1
image: reg.vesoft-inc.com/nebula-graphd-ent
version: v3.5.0
version: v3.6.0
metad:
config:
# A list of zones (e.g., AZ) split by comma
@@ -53,7 +53,7 @@ spec:
memory: "1Gi"
replicas: 1
image: reg.vesoft-inc.com/nebula-metad-ent
version: v3.5.0
version: v3.6.0
dataVolumeClaim:
resources:
requests:
40 changes: 40 additions & 0 deletions doc/user/local_pv_failover.md
@@ -0,0 +1,40 @@
## local PV failover

Compared to network block storage, local storage is physically attached to the virtual machine instance. This tight
coupling yields higher I/O operations per second (IOPS) and very low read-write latency. While local storage improves
performance, it comes with trade-offs in service availability, data durability, and flexibility. Unlike network block
storage, local storage is not replicated automatically, so all data on it may be lost if the virtual machine stops or
is terminated for any reason. NebulaGraph's storage service provides its own data redundancy, keeping multiple replicas
(typically three) of each partition. If a node fails, the partitions on that node re-elect a leader, and read-write
traffic automatically shifts to the healthy nodes. With network block storage, a Pod on a failed machine can be
rescheduled to a new machine and remount the original storage volume. With local storage, however, node affinity
constraints keep the Pod in a Pending state until the storage volume is unbound, which reduces the overall availability of the cluster.
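
The node affinity that causes this Pending behavior comes from the local PersistentVolume itself. Here is a minimal sketch using the standard Kubernetes API (not part of this PR; the name, path, and node below are hypothetical, and the storage class name is assumed to match other examples in this PR):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # hypothetical name, for illustration only
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path      # assumed class name, matching other examples in this PR
  local:
    path: /mnt/disks/ssd0           # hypothetical disk path on the node
  nodeAffinity:                     # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1            # Pods bound to this PV can only run on node-1
```

If node-1 becomes unavailable, any Pod bound to this volume cannot be scheduled elsewhere until the claim is released, which is exactly the situation the failover flow below resolves.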

### Solution
- The nebula cluster status check has two parts: the status of the nebula services and the status of the Kubernetes nodes.
  Each nebula service periodically reports a heartbeat to the Meta service, so the status of every service in the cluster
  can be queried through the interface exposed by the Meta service.
- When a Storaged host is found to be in the "OFFLINE" state, that host is recorded in the status field of the
  NebulaCluster resource object for later failover.
- To avoid false positives, a tolerated OFFLINE duration can be configured; the failover process is executed only after
  this period is exceeded.

The controller performs the following operations while executing automatic failover:
- Attempt to restart the offline Storaged Pod.
- Verify whether the offline Storaged host has returned to its normal state; if it has, the remaining steps are skipped.
- Submit a balance data job to remove the offline Storaged host.
- Delete the offline Storaged Pod and its associated PVC so that the Pod can be scheduled onto another node.
- Submit a balance data job to balance partitions onto the newly created Pod.


Here is the NebulaCluster configuration that enables automatic failover in local PV scenarios:
```yaml
spec:
# Enable failover
enableAutoFailover: true
# Duration for automatic failover after the storaged host is offline
# default 5m
failoverPeriod: "2m"
```


8 changes: 4 additions & 4 deletions doc/user/log_guide.md
@@ -1,4 +1,4 @@
### Log rotation
## Log rotation

We use a sidecar container to clean NebulaGraph logs and run log archiving tasks every hour.

@@ -53,7 +53,7 @@ spec:
service:
externalTrafficPolicy: Local
type: NodePort
version: v3.5.0
version: v3.6.0
imagePullPolicy: Always
metad:
config:
@@ -71,7 +71,7 @@ spec:
requests:
cpu: 500m
memory: 500Mi
version: v3.5.0
version: v3.6.0
reference:
name: statefulsets.apps
version: v1
@@ -94,6 +94,6 @@ spec:
requests:
cpu: 500m
memory: 500Mi
version: v3.5.0
version: v3.6.0
unsatisfiableAction: ScheduleAnyway
```
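
Related to the log rotation described at the top of this file, here is a hedged sketch of a log-rotation block; the `logRotate` field name, its placement, and its sub-fields are assumptions recalled from the operator documentation rather than something shown in this diff, so verify them against the full guide:

```yaml
spec:
  logRotate:                 # assumed placement and field name
    # number of rotated files to keep per log before the oldest is deleted (assumed field)
    rotate: 5
    # rotate a log file once it grows beyond this size (assumed field)
    size: "100M"
```
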
2 changes: 1 addition & 1 deletion doc/user/nebula_autoscaler.md
@@ -1,4 +1,4 @@
# nebula-autoscaler
## nebula-autoscaler

The nebula-autoscaler is fully compatible with the Kubernetes HorizontalPodAutoscaler (HPA), so you can use it in the same way as an HPA.
Currently, the nebula-autoscaler only supports automatic scaling of Graphd.
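
Because the nebula-autoscaler follows the HPA mechanism, its scaling policy reads like a standard HPA spec. The manifest below is a plain Kubernetes `autoscaling/v2` HorizontalPodAutoscaler shown only to illustrate that mechanism; the `scaleTargetRef` is hypothetical and is not how a NebulaCluster's Graphd is targeted in practice, so refer to the rest of this guide for the actual autoscaler resource definition:

```yaml
# Plain HPA manifest, for illustration of the scaling mechanism only.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: graphd-hpa-example          # hypothetical name
spec:
  scaleTargetRef:                   # hypothetical target, for illustration only
    apiVersion: apps/v1
    kind: StatefulSet
    name: nebula-graphd
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```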