From 00032be0a304fb35a3015f3ab46deeb4e40cbf97 Mon Sep 17 00:00:00 2001
From: cooper-lzy <78672629+cooper-lzy@users.noreply.github.com>
Date: Thu, 17 Nov 2022 17:27:46 +0800
Subject: [PATCH] add hdfs error for dag (#1761)

---
 .../nebula-explorer/deploy-connect/ex-ug-deploy.md  |  6 ++++++
 docs-2.0/nebula-explorer/faq.md                     | 13 +++++++++++++
 .../nebula-explorer/workflow/1.prepare-resources.md |  4 +++-
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md b/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md
index 8e7de25876c..7d4da6160b2 100644
--- a/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md
+++ b/docs-2.0/nebula-explorer/deploy-connect/ex-ug-deploy.md
@@ -29,6 +29,12 @@ Before deploying Explorer, you must check the following information:
 
     License is only available in the Enterprise Edition. To obtain the license, apply for [NebulaGraph Explorer Free Trial](https://nebula-graph.io/visualization-tools-free-trial).
 
+- The HDFS services are deployed if graph computing is required. The namenode uses port 8020 by default, and the datanode uses port 50010 by default.
+
+  !!! caution
+
+      If the HDFS port is unavailable, a connection timeout message may be displayed.
+
 ## RPM-based deployment
 
 ### Installation
diff --git a/docs-2.0/nebula-explorer/faq.md b/docs-2.0/nebula-explorer/faq.md
index c8279589bb3..a78fd63fc2e 100644
--- a/docs-2.0/nebula-explorer/faq.md
+++ b/docs-2.0/nebula-explorer/faq.md
@@ -68,6 +68,19 @@ Check according to the following procedure:
 
 3. Restart the Dag Controller for the settings to take effect.
 
+## How to resolve the error `no available namenodes: dial tcp xx.xx.xx.xx:8020: connect: connection timed out`?
+
+Check whether the HDFS namenode port 8020 is open.
+
+## How to resolve the error `org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout`?
+
+Check whether the HDFS datanode port 50010 is open.
+
+If the port is not open, an error similar to the following may be reported:
+
+- `Check failed: false close hdfs-file failed`
+- `org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /analytics/xx/tasks/analytics_xxx/xxx.csv could only be replicated to 0 nodes instead of minReplication`
+
 ## How to resolve the error `broadcast.hpp:193] Check failed: (size_t)recv_bytes >= sizeof(chunk_tail_t) recv message too small: 0`?
 
 The amount of data to be processed is too small, but the number of compute nodes and processes is too large. Smaller `clusterSize` and `processes` need to be set when submitting jobs.
diff --git a/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md b/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md
index f324e5e9916..275d772d1cc 100644
--- a/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md
+++ b/docs-2.0/nebula-explorer/workflow/1.prepare-resources.md
@@ -16,10 +16,12 @@ You must prepare your environment for running a workflow, including NebulaGraph
 
 3. Configure the following resources:
 
+    ![workflow_configuration](https://docs-cdn.nebula-graph.com.cn/figures/workflow_configuration_221117_en.png)
+
     |Type|Description|
     |:--|:--|
     |NebulaGraph Configuration| The access address of the graph service that executes a graph query or to which the graph computing result is written. The default address is the address that you use to log into Explorer and can not be changed. You can set timeout periods for three services. |
-    |HDFS Configuration| The HDFS address that stores the result of the graph query or graph computing (`fs.defaultFS`). <br>Click **Add** to add a new address, you can set the HDFS name, HDFS path, and HDFS username (optional). The configuration takes effect only after the HDFS client is installed on the machine where the Analytics is installed. |
+    |HDFS Configuration| The HDFS address that stores the result of the graph query or graph computing. Click **Add** to add a new address. You can set the HDFS name, HDFS path (`fs.defaultFS`), and HDFS username. For example, you can set the save path to `hdfs://192.168.8.100:9000/test`. The configuration takes effect only after the HDFS client is installed on the machine where the Analytics is installed. |
     |NebulaGraph Analytics Configuration| The NebulaGraph Analytics address that performs the graph computing. Click **Add** to add a new address.|
 
 4. Click **Confirm**.
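
For the two timeout FAQs added above, a quick way to confirm the diagnosis is to test whether the default HDFS ports are reachable from the machine that runs the Dag Controller and Analytics. Below is a minimal sketch in Python 3; the host `192.168.8.100`, the port list, and the 5-second timeout are assumptions for illustration, so replace them with your own values:

```python
import socket

# Placeholder HDFS host; replace with your namenode/datanode address.
HDFS_HOST = "192.168.8.100"

# Default ports referenced in the FAQ entries:
# 8020 (namenode), 50010 (datanode).
PORTS = {"namenode": 8020, "datanode": 50010}

for role, port in PORTS.items():
    try:
        # A plain TCP connection is enough to reproduce the
        # "connection timed out" condition described above.
        with socket.create_connection((HDFS_HOST, port), timeout=5):
            print(f"{role} port {port} is reachable")
    except OSError as err:
        print(f"{role} port {port} is NOT reachable: {err}")
```

If either check fails, open the corresponding port in the firewall or security group and retry the graph computing job.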