From 5d0ee7690da2f3a5073b97fd47530a570aea2f8c Mon Sep 17 00:00:00 2001
From: Shuai Lin
Date: Wed, 18 Jan 2017 00:08:54 +0800
Subject: [PATCH 1/3] Improved the example commands in running-on-k8s document.

---
 docs/running-on-kubernetes.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 5192d9d086618..98ec83141fe0d 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -31,16 +31,16 @@ For example, if the registry host is `registry-host` and the registry is listeni
 Kubernetes applications can be executed via `spark-submit`. For example, to
 compute the value of pi, assuming the images are set up as described above:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class org.apache.spark.examples.SparkPi
-      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port>
-      --kubernetes-namespace default
-      --conf spark.executor.instances=5
-      --conf spark.app.name=spark-pi
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
-      examples/jars/spark_2.11-2.2.0.jar
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class org.apache.spark.examples.SparkPi \
+      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
+      --kubernetes-namespace default \
+      --conf spark.executor.instances=5 \
+      --conf spark.app.name=spark-pi \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
+      examples/jars/spark_examples_2.11-2.2.0.jar
 
 The Spark master, specified either via passing the `--master` command line
 argument to `spark-submit` or by setting

From 76789a6d8f8aef0648626143ba0d7b4f816c3894 Mon Sep 17 00:00:00 2001
From: Shuai Lin
Date: Wed, 18 Jan 2017 01:14:53 +0800
Subject: [PATCH 2/3] Fixed more example commands.

---
 docs/running-on-kubernetes.md | 66 +++++++++++++++++------------------
 1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 98ec83141fe0d..d11bd6efa5e31 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -75,54 +75,54 @@ examples of providing application dependencies.
 To submit an application with both the main resource and two other jars living
 on the submitting user's machine:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.SampleApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --upload-jars /home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.SampleApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --upload-jars /home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
       /home/exampleuser/exampleapplication/main.jar
 
 Note that since passing the jars through the `--upload-jars` command line argument is equivalent to setting the
 `spark.kubernetes.driver.uploads.jars` Spark property, the above will behave identically to this command:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.SampleApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --conf spark.kubernetes.driver.uploads.jars=/home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.SampleApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --conf spark.kubernetes.driver.uploads.jars=/home/exampleuser/exampleapplication/dep1.jar,/home/exampleuser/exampleapplication/dep2.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
       /home/exampleuser/exampleapplication/main.jar
 
 To specify a main application resource that can be downloaded from an HTTP
 service, and if a plugin for that application is located in the jar
 `/opt/spark-plugins/app-plugin.jar` on the docker image's disk:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.PluggableApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --jars /opt/spark-plugins/app-plugin.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.PluggableApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --jars /opt/spark-plugins/app-plugin.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
       http://example.com:8080/applications/sparkpluggable/app.jar
 
 Note that since passing the jars through the `--jars` command line argument is equivalent to setting the `spark.jars`
 Spark property, the above will behave identically to this command:
 
-    bin/spark-submit
-      --deploy-mode cluster
-      --class com.example.applications.PluggableApplication
-      --master k8s://https://192.168.99.100
-      --kubernetes-namespace default
-      --conf spark.jars=file:///opt/spark-plugins/app-plugin.jar
-      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest
-      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest
-      http://example.com:8080/applications/sparkpluggable/app.jar
+    bin/spark-submit \
+      --deploy-mode cluster \
+      --class com.example.applications.PluggableApplication \
+      --master k8s://https://192.168.99.100 \
+      --kubernetes-namespace default \
+      --conf spark.jars=file:///opt/spark-plugins/app-plugin.jar \
+      --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
+      --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
+      http://example.com:8080/applications/sparkpluggable/app.jar \
 
 ### Spark Properties

From 617f7a36d3befb4e2d8a6285eddd892a624cb518 Mon Sep 17 00:00:00 2001
From: Shuai Lin
Date: Wed, 18 Jan 2017 01:16:37 +0800
Subject: [PATCH 3/3] Fixed typo.

---
 docs/running-on-kubernetes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index d11bd6efa5e31..234c9870548c7 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -122,7 +122,7 @@ Spark property, the above will behave identically to this command:
       --conf spark.jars=file:///opt/spark-plugins/app-plugin.jar \
       --conf spark.kubernetes.driver.docker.image=registry-host:5000/spark-driver-custom:latest \
       --conf spark.kubernetes.executor.docker.image=registry-host:5000/spark-executor:latest \
-      http://example.com:8080/applications/sparkpluggable/app.jar \
+      http://example.com:8080/applications/sparkpluggable/app.jar
 
 ### Spark Properties