diff --git a/docs/_includes/themes/zeppelin/_navigation.html b/docs/_includes/themes/zeppelin/_navigation.html
index dc2ba4c458a..f222b8c4829 100644
--- a/docs/_includes/themes/zeppelin/_navigation.html
+++ b/docs/_includes/themes/zeppelin/_navigation.html
@@ -107,7 +107,7 @@
Zeppelin on Spark Cluster Mode (Standalone)
Zeppelin on Spark Cluster Mode (YARN)
Zeppelin on Spark Cluster Mode (Mesos)
- Zeppelin on CDH
+ Zeppelin on CDH
Contribute
Writing Zeppelin Interpreter
diff --git a/docs/assets/themes/zeppelin/img/docs-img/zeppelin_with_cdh.png b/docs/assets/themes/zeppelin/img/docs-img/zeppelin_with_cdh.png
new file mode 100644
index 00000000000..415c572e09f
Binary files /dev/null and b/docs/assets/themes/zeppelin/img/docs-img/zeppelin_with_cdh.png differ
diff --git a/docs/index.md b/docs/index.md
index 28fa59cf691..4c6f3d1e963 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -172,7 +172,7 @@ Join to our [Mailing list](https://zeppelin.apache.org/community.html) and repor
* [Zeppelin on Spark Cluster Mode (Standalone via Docker)](./install/spark_cluster_mode.html#spark-standalone-mode)
* [Zeppelin on Spark Cluster Mode (YARN via Docker)](./install/spark_cluster_mode.html#spark-on-yarn-mode)
* [Zeppelin on Spark Cluster Mode (Mesos via Docker)](./install/spark_cluster_mode.html#spark-on-mesos-mode)
- * [Zeppelin on CDH (via Docker)](./install/spark_cluster_mode.html#zeppelin-on-cdh)
+ * [Zeppelin on CDH (via Docker)](./install/cdh.html#apache-zeppelin-on-cdh)
* Contribute
* [Writing Zeppelin Interpreter](./development/writingzeppelininterpreter.html)
* [Writing Zeppelin Application (Experimental)](./development/writingzeppelinapplication.html)
diff --git a/docs/install/cdh.md b/docs/install/cdh.md
new file mode 100644
index 00000000000..4c2fdbf1eb9
--- /dev/null
+++ b/docs/install/cdh.md
@@ -0,0 +1,95 @@
+---
+layout: page
+title: "Apache Zeppelin on CDH"
+description: "This document will guide you through building and configuring a CDH environment with Apache Zeppelin using Docker scripts."
+group: install
+---
+
+{% include JB/setup %}
+
+# Apache Zeppelin on CDH
+
+Cloudera officially provides a Docker container, so you can easily build a CDH Docker environment by following this [link](http://www.cloudera.com/documentation/enterprise/latest/topics/quickstart_docker_container.html).
+
+### 1. Importing the Cloudera QuickStart Docker Image
+
+```
+docker pull cloudera/quickstart:latest
+```
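+
+Once the pull completes, you can confirm the image is available locally (a quick sanity check, assuming a running Docker daemon):
+
+```
+docker images cloudera/quickstart
+```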
+
+
+### 2. Run the Docker container
+
+```
+docker run -it \
+ -p 80:80 \
+ -p 4040:4040 \
+ -p 8020:8020 \
+ -p 8022:8022 \
+ -p 8030:8030 \
+ -p 8032:8032 \
+ -p 8033:8033 \
+ -p 8040:8040 \
+ -p 8042:8042 \
+ -p 8088:8088 \
+ -p 8480:8480 \
+ -p 8485:8485 \
+ -p 8888:8888 \
+ -p 9083:9083 \
+ -p 10020:10020 \
+ -p 10033:10033 \
+ -p 18088:18088 \
+ -p 19888:19888 \
+ -p 25000:25000 \
+ -p 25010:25010 \
+ -p 25020:25020 \
+ -p 50010:50010 \
+ -p 50020:50020 \
+ -p 50070:50070 \
+ -p 50075:50075 \
+ -h quickstart.cloudera --privileged=true \
+ agitated_payne_backup /usr/bin/docker-quickstart;
+```
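+
+Note that `agitated_payne_backup` above is an image name saved from the author's own environment; you will not have it locally. With the stock image pulled in step 1, an equivalent invocation would look like the following (only the YARN and HDFS UI ports are shown here for brevity; in practice keep the full port list from above):
+
+```
+docker run -it \
+    -p 8088:8088 \
+    -p 50070:50070 \
+    -h quickstart.cloudera --privileged=true \
+    cloudera/quickstart:latest /usr/bin/docker-quickstart
+```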
+
+### 3. Verify CDH is running
+
+You can see each application's web UI: HDFS on `http://<hostname>:50070/` and YARN on `http://<hostname>:8088/cluster`.
+
+
+### 4. Configure Spark interpreter in Zeppelin
+Set the following configuration in `conf/zeppelin-env.sh`:
+
+```
+export MASTER=yarn-client
+export HADOOP_CONF_DIR=[your_hadoop_conf_path]
+export SPARK_HOME=[your_spark_home_path]
+```
+
+`HADOOP_CONF_DIR` (the Hadoop configuration path) is defined in `/scripts/docker/spark-cluster-managers/cdh/hdfs_conf`.
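+
+For example, inside the QuickStart container the values might look like the following (the `SPARK_HOME` path is the usual CDH default and is an assumption; verify it in your image):
+
+```
+export MASTER=yarn-client
+export HADOOP_CONF_DIR=/scripts/docker/spark-cluster-managers/cdh/hdfs_conf
+export SPARK_HOME=/usr/lib/spark
+```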
+
+Don't forget to set the Spark `master` property to `yarn-client` on the Zeppelin **Interpreters** settings page, as shown below.
+
+
+
+### 5. Run Zeppelin with Spark interpreter
+After running a single paragraph with the Spark interpreter in Zeppelin,
+
+
+
+
+
+you can browse `http://<hostname>:8088/cluster/apps` to check whether the Zeppelin application is running.
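+
+You can also check from inside the container with the YARN CLI (assuming the cluster services are up); the Zeppelin job should appear in the list of running applications:
+
+```
+yarn application -list -appStates RUNNING
+```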
+
+
diff --git a/docs/install/spark_cluster_mode.md b/docs/install/spark_cluster_mode.md
index c734ea66dea..d4c864a91c4 100644
--- a/docs/install/spark_cluster_mode.md
+++ b/docs/install/spark_cluster_mode.md
@@ -202,65 +202,3 @@ After running a single paragraph with Spark interpreter in Zeppelin, browse `htt
-
-## Zeppelin on CDH
-Cloudera officially provide docker container so we can easily build CDH docker environment following the [link](https://www.cloudera.com/documentation/enterprise/5-6-x/topics/quickstart_docker_container.html).
-
-
-### 1. Run docker
-
-```
-docker run -it \
- -p 80:80 \
- -p 4040:4040 \
- -p 8020:8020 \
- -p 8022:8022 \
- -p 8030:8030 \
- -p 8032:8032 \
- -p 8033:8033 \
- -p 8040:8040 \
- -p 8042:8042 \
- -p 8088:8088 \
- -p 8480:8480 \
- -p 8485:8485 \
- -p 8888:8888 \
- -p 9083:9083 \
- -p 10020:10020 \
- -p 10033:10033 \
- -p 18088:18088 \
- -p 19888:19888 \
- -p 25000:25000 \
- -p 25010:25010 \
- -p 25020:25020 \
- -p 50010:50010 \
- -p 50020:50020 \
- -p 50070:50070 \
- -p 50075:50075 \
- -h quickstart.cloudera --privileged=true \
- agitated_payne_backup /usr/bin/docker-quickstart;
-```
-
-### 2. Verify running CDH.
-
-You can see each application web UI for HDFS on `http://:50070/`, YARN on `http://:8088/cluster`.
-
-
-### 3. Configure Spark interpreter in Zeppelin
-Set following configurations to `conf/zeppelin-env.sh`.
-
-```
-export MASTER=yarn-client
-export HADOOP_CONF_DIR=[your_hadoop_conf_path]
-export SPARK_HOME=[your_spark_home_path]
-```
-
-`HADOOP_CONF_DIR`(Hadoop configuration path) is defined in `/scripts/docker/spark-cluster-managers/cdh/hdfs_conf`.
-
-Don't forget to set Spark `master` as `yarn-client` in Zeppelin **Interpreters** setting page like below.
-
-
-
-### 4. Run Zeppelin with Spark interpreter
-After running a single paragraph with Spark interpreter in Zeppelin, browse `http://:8088/cluster/apps` and check Zeppelin application is running well or not.
-
-