[SPARK-938][doc] Add OpenStack Swift support
See compiled doc at http://people.apache.org/~rxin/tmp/openstack-swift/_site/storage-openstack-swift.html This is based on apache#1010. Closes apache#1010. Author: Reynold Xin <rxin@apache.org> Author: Gil Vernik <gilv@il.ibm.com> Closes apache#2298 from rxin/openstack-swift and squashes the following commits: ff4e394 [Reynold Xin] Two minor comments from Patrick. 279f6de [Reynold Xin] core-sites -> core-site dfb8fea [Reynold Xin] Updated based on Gil's suggestion. 846f5cb [Reynold Xin] Added a link from overview page. 0447c9f [Reynold Xin] Removed sample code. e9c3761 [Reynold Xin] Merge pull request apache#1010 from gilv/master 9233fef [Gil Vernik] Fixed typos 6994827 [Gil Vernik] Merge pull request #1 from rxin/openstack ac0679e [Reynold Xin] Fixed an unclosed tr. 47ce99d [Reynold Xin] Merge branch 'master' into openstack cca7192 [Gil Vernik] Removed white spases from pom.xml 99f095d [Reynold Xin] Pending openstack changes. eb22295 [Reynold Xin] Merge pull request apache#1010 from gilv/master 39a9737 [Gil Vernik] Spark integration with Openstack Swift c977658 [Gil Vernik] Merge branch 'master' of https://github.com/gilv/spark 2aba763 [Gil Vernik] Fix to docs/openstack-integration.md 9b625b5 [Gil Vernik] Merge branch 'master' of https://github.com/gilv/spark eff538d [Gil Vernik] SPARK-938 - Openstack Swift object storage support ce483d7 [Gil Vernik] SPARK-938 - Openstack Swift object storage support b6c37ef [Gil Vernik] Openstack Swift support
Showing 2 changed files with 154 additions and 0 deletions.
---
layout: global
title: Accessing OpenStack Swift from Spark
---

Spark's support for Hadoop InputFormat allows it to process data in OpenStack Swift using the
same URI formats as in Hadoop. You can specify a path in Swift as input through a
URI of the form <code>swift://container.PROVIDER/path</code>. You will also need to set your
Swift security credentials, through <code>core-site.xml</code> or via
<code>SparkContext.hadoopConfiguration</code>.
The current Swift driver requires Swift to use the Keystone authentication method.

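To make the URI form concrete, here is a minimal sketch of how such a path is assembled. The container, provider, and path names are hypothetical examples, and the commented line shows how the URI would be consumed from PySpark, assuming a live <code>SparkContext</code> named <code>sc</code>:

```python
# Assemble a Swift input URI of the form swift://container.PROVIDER/path.
# The container, provider, and path below are hypothetical examples.
container = "mydata"
provider = "SparkTest"   # must match the PROVIDER name used in the configuration
path = "logs/part-00000"

uri = "swift://%s.%s/%s" % (container, provider, path)

# With a live SparkContext `sc`, the URI is read like any other Hadoop path:
#   rdd = sc.textFile(uri)
```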
# Configuring Swift for Better Data Locality

Although not mandatory, it is recommended to configure the proxy server of Swift with
<code>list_endpoints</code> to have better data locality. More information is
[available here](https://github.com/openstack/swift/blob/master/swift/common/middleware/list_endpoints.py).

# Dependencies

The Spark application should include the <code>hadoop-openstack</code> dependency.
For example, for Maven support, add the following to the <code>pom.xml</code> file:

{% highlight xml %}
<dependencyManagement>
  ...
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-openstack</artifactId>
    <version>2.3.0</version>
  </dependency>
  ...
</dependencyManagement>
{% endhighlight %}

# Configuration Parameters

Create <code>core-site.xml</code> and place it inside Spark's <code>conf</code> directory.
There are two main categories of parameters that should be configured: the declaration of the
Swift driver, and the parameters required by Keystone.

Configuring Hadoop to use the Swift file system is achieved by setting:

<table class="table">
<tr><th>Property Name</th><th>Value</th></tr>
<tr>
  <td>fs.swift.impl</td>
  <td>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</td>
</tr>
</table>

Additional parameters are required by Keystone (v2.0) and should be provided to the Swift driver.
These parameters are used to perform authentication in Keystone to access Swift. The following table
contains a list of the mandatory Keystone parameters. <code>PROVIDER</code> can be any name.

<table class="table">
<tr><th>Property Name</th><th>Meaning</th><th>Required</th></tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.auth.url</code></td>
  <td>Keystone Authentication URL</td>
  <td>Mandatory</td>
</tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.auth.endpoint.prefix</code></td>
  <td>Keystone endpoints prefix</td>
  <td>Optional</td>
</tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.tenant</code></td>
  <td>Tenant</td>
  <td>Mandatory</td>
</tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.username</code></td>
  <td>Username</td>
  <td>Mandatory</td>
</tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.password</code></td>
  <td>Password</td>
  <td>Mandatory</td>
</tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.http.port</code></td>
  <td>HTTP port</td>
  <td>Mandatory</td>
</tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.region</code></td>
  <td>Keystone region</td>
  <td>Mandatory</td>
</tr>
<tr>
  <td><code>fs.swift.service.PROVIDER.public</code></td>
  <td>Indicates if all URLs are public</td>
  <td>Mandatory</td>
</tr>
</table>

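As a cross-check of the property names above, the sketch below generates the full set of keys for a given <code>PROVIDER</code>. The helper function and its default values are illustrative, not part of the Swift driver; only the property names themselves come from the table:

```python
# Build the Hadoop property map the Swift driver expects for one provider.
# The helper and its default values are illustrative; the property names
# follow the table above (endpoint.prefix, being optional, is omitted).
def swift_properties(provider, auth_url, tenant, username, password,
                     http_port="8080", region="RegionOne", public="true"):
    prefix = "fs.swift.service.%s" % provider
    return {
        "fs.swift.impl": "org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem",
        "%s.auth.url" % prefix: auth_url,
        "%s.tenant" % prefix: tenant,
        "%s.username" % prefix: username,
        "%s.password" % prefix: password,
        "%s.http.port" % prefix: http_port,
        "%s.region" % prefix: region,
        "%s.public" % prefix: public,
    }

props = swift_properties("SparkTest", "http://127.0.0.1:5000/v2.0/tokens",
                         "test", "tester", "testing")
```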
For example, assume <code>PROVIDER=SparkTest</code> and Keystone contains user <code>tester</code> with password <code>testing</code>
defined for tenant <code>test</code>. Then <code>core-site.xml</code> should include:

{% highlight xml %}
<configuration>
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.url</name>
    <value>http://127.0.0.1:5000/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.endpoint.prefix</name>
    <value>endpoints</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.http.port</name>
    <value>8080</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.region</name>
    <value>RegionOne</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.public</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.tenant</name>
    <value>test</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.username</name>
    <value>tester</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.password</name>
    <value>testing</value>
  </property>
</configuration>
{% endhighlight %}
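One way to sanity-check a <code>core-site.xml</code> before pointing Spark at it is to parse it with a standard XML library. The fragment below is a trimmed copy of the example above with the same hypothetical credentials; the validation helper itself is not part of Spark or Hadoop:

```python
# Parse a core-site.xml fragment and extract Hadoop name/value pairs.
# A malformed file (e.g. an unclosed <property> element) raises ParseError
# here, rather than failing later inside the Swift driver.
import xml.etree.ElementTree as ET

CORE_SITE = """\
<configuration>
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.tenant</name>
    <value>test</value>
  </property>
</configuration>
"""

def hadoop_properties(xml_text):
    """Return name -> value pairs from a Hadoop-style configuration document."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

parsed = hadoop_properties(CORE_SITE)
```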

Notice that
<code>fs.swift.service.PROVIDER.tenant</code>,
<code>fs.swift.service.PROVIDER.username</code>, and
<code>fs.swift.service.PROVIDER.password</code> contain sensitive information, so keeping them in
<code>core-site.xml</code> is not always a good approach.
We suggest keeping those parameters in <code>core-site.xml</code> only for testing purposes, when running Spark
via <code>spark-shell</code>.
For job submissions they should be provided via <code>sparkContext.hadoopConfiguration</code>.
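For the job-submission case, the sketch below shows one possible way to pull the sensitive values from the environment and hand them to the Hadoop configuration rather than writing them into <code>core-site.xml</code>. The environment-variable names and the helper function are hypothetical, and the commented PySpark usage assumes a live <code>SparkContext</code> named <code>sc</code>:

```python
# Read the sensitive Keystone parameters from the environment at submit time.
# SWIFT_USERNAME / SWIFT_PASSWORD / SWIFT_TENANT are hypothetical variable names.
import os

def credential_properties(provider, env=None):
    """Map environment variables onto the Swift driver's credential properties."""
    env = os.environ if env is None else env
    prefix = "fs.swift.service.%s" % provider
    return {
        "%s.username" % prefix: env["SWIFT_USERNAME"],
        "%s.password" % prefix: env["SWIFT_PASSWORD"],
        "%s.tenant" % prefix: env["SWIFT_TENANT"],
    }

# With a live SparkContext `sc` in PySpark, the values would be applied as:
#   for name, value in credential_properties("SparkTest").items():
#       sc._jsc.hadoopConfiguration().set(name, value)
```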