The connector supports reading Google Cloud Spanner tables and graphs into Spark DataFrames and GraphFrames.
This README may include documentation for changes that haven't been released yet. The latest release's documentation and source code can be found here:
https://github.com/GoogleCloudDataproc/spark-spanner-connector/blob/master/README.md
Follow the instructions to create a project or Spanner table if you don't have an existing one.
If you do not have an Apache Spark environment, you can create a Cloud Dataproc cluster with pre-configured auth. The following examples assume you are using Cloud Dataproc, but you can use spark-submit on any cluster.
Any Dataproc cluster using the API needs the 'spanner' or 'cloud-platform' scopes. Dataproc clusters don't have the 'spanner' scope by default, but you can create a cluster with the scope. For example:
MY_CLUSTER=...
gcloud dataproc clusters create "$MY_CLUSTER" --scopes https://www.googleapis.com/auth/cloud-platform
If you run a Spark job on a Dataproc cluster, you'll have to assign the corresponding Spanner permissions to the Dataproc VM service account. If you choose to use Dataproc Serverless, you'll have to make sure the Serverless service account has those permissions.
You can find the released jar file under the Releases tab on the right of the GitHub page. The name pattern is spark-3.1-spanner-x.x.x.jar, where 3.1 indicates that the connector depends on Spark 3.1 and x.x.x is the Spark Spanner connector version. Alternatively, you can use gs://spark-lib/spanner/spark-3.1-spanner-1.1.0.jar directly.
| Connector \ Spark | 2.3 | 2.4 (Scala 2.11) | 2.4 (Scala 2.12) | 3.0 | 3.1 | 3.2 | 3.3 | 3.4 | 3.5 |
|---|---|---|---|---|---|---|---|---|---|
| spark-3.1-spanner | | | | | ✓ | ✓ | ✓ | ✓ | ✓ |
| Connector \ Dataproc Image | 1.3 | 1.4 | 1.5 | 2.0 | 2.1 | 2.2 | Serverless Image 1.0 | Serverless Image 2.0 | Serverless Image 2.1 |
|---|---|---|---|---|---|---|---|---|---|
| spark-3.1-spanner | | | | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
The connector is not available on Maven Central yet.
You can use the standard --jars or --packages options (or alternatively, the spark.jars / spark.jars.packages configuration) to specify the Spark Spanner connector. For example:
gcloud dataproc jobs submit pyspark --cluster "$MY_CLUSTER" \
--jars=gs://spark-lib/spanner/spark-3.1-spanner-1.1.0.jar \
--region us-central1 examples/SpannerSpark.py
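Alternatively, in a standalone PySpark session you can point the spark.jars configuration at the released jar directly. This is a minimal sketch using the released jar path shown above; adjust the version to the one you want:
from pyspark.sql import SparkSession
# Pull in the connector via the spark.jars configuration instead of --jars.
spark = (SparkSession.builder
         .appName('Spanner Connect App')
         .config('spark.jars', 'gs://spark-lib/spanner/spark-3.1-spanner-1.1.0.jar')
         .getOrCreate())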
The connector supports exporting both tables and graphs from Spanner. It uses the cross language Spark SQL Data Source API to communicate with the Spanner Java library.
This is an example of using Python code to connect to a Spanner table. You can find more examples and documentation on usage.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Spanner Connect App').getOrCreate()
df = spark.read.format('cloud-spanner') \
.option("projectId", "$YourProjectId") \
.option("instanceId", "$YourInstanceId") \
.option("databaseId", "$YourDatabaseId") \
.option("table", "$YourTable") \
.load()
df.show()
For other languages, you can refer to the Scala, Java, and R examples, as well as the instructions on how to submit a job in those languages.
Here are the options supported in the Spark Spanner connector for reading tables.
| Variable | Validation | Comments |
|---|---|---|
| projectId | String | The project ID containing the Cloud Spanner database |
| instanceId | String | The instance ID of the Cloud Spanner database |
| databaseId | String | The database ID of the Cloud Spanner database |
| table | String | The table of the Cloud Spanner database that you are reading from |
| enableDataboost | Boolean | Enable Data Boost, which provides independent compute resources to query Spanner with near-zero impact on existing workloads. Note that this option may incur extra charges. |
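For example, a read with Data Boost enabled looks like the earlier table read with one extra option. This is a minimal sketch; the placeholder values are assumptions you should replace with your own:
df = spark.read.format('cloud-spanner') \
    .option("projectId", "$YourProjectId") \
    .option("instanceId", "$YourInstanceId") \
    .option("databaseId", "$YourDatabaseId") \
    .option("table", "$YourTable") \
    .option("enableDataboost", "true") \
    .load()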
To export Spanner Graphs, please use the Python class SpannerGraphConnector included in the jar.
The connector supports exporting the graph into separate node and edge DataFrames, and exporting the graph into GraphFrames directly.
This is an example of exporting a graph from Spanner as a GraphFrame:
from pyspark.sql import SparkSession
# Location of the connector jar, e.g. the released jar on GCS:
path_to_connector_jar = "gs://spark-lib/spanner/spark-3.1-spanner-1.1.0.jar"
spark = (SparkSession.builder.appName("spanner-graphframe-graphx-example")
.config("spark.jars.packages", "graphframes:graphframes:0.8.4-spark3.5-s_2.12")
.config("spark.jars", path_to_connector_jar)
.getOrCreate())
spark.sparkContext.addPyFile(path_to_connector_jar)
from spannergraph import SpannerGraphConnector
connector = (SpannerGraphConnector()
.spark(spark)
.project("$YourProjectId")
.instance("$YourInstanceId")
.database("$YourDatabaseId")
.graph("$YourGraphId"))
g = connector.load_graph()
g.vertices.show()
g.edges.show()
To export node and edge DataFrames instead of a GraphFrame, please use load_dfs:
df_vertices, df_edges, df_id_map = connector.load_dfs()
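The returned objects are ordinary Spark DataFrames, so the usual DataFrame API applies to them, for example:
# Inspect the exported node, edge, and ID-mapping DataFrames.
df_vertices.printSchema()
df_edges.printSchema()
df_id_map.show()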
While Spanner Graph allows nodes to be identified by more than one element key, many graph-processing libraries, including GraphFrames, expect a single ID field, ideally an integer. When node IDs are not integers, the connector by default assigns a unique integer ID to each row in the node tables and maps node keys in the edge tables to those integer IDs with DataFrame joins. Please use load_graph_and_mapping or load_dfs to retrieve the mapping when loading a graph:
g, df_id_map = connector.load_graph_and_mapping()
or
df_vertices, df_edges, df_id_map = connector.load_dfs()
If you do not want the connector to perform this mapping, please specify .export_string_ids(True) to have the connector output string concatenations of table IDs (generated by the connector based on the graph schema) and element keys directly. The format of the concatenated strings is {table_id}@{key_1}|{key_2}|{key_3}|..., where element keys are joined with | as the separator and \ is used as the escape character. For example, the string ID of a node with table ID 1 and keys (a, b|b, c\c) will be 1@a|b\|b|c\\c.
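If you need to split such a string ID back into its table ID and element keys, a small helper along the following lines can do it. This is an illustrative sketch based only on the format described above, not part of the connector, and it assumes the first @ separates the connector-generated table ID from the keys:
def parse_string_id(string_id):
    # The connector-generated table ID precedes the first '@'.
    table_id, _, encoded_keys = string_id.partition("@")
    keys, current, escaped = [], [], False
    for ch in encoded_keys:
        if escaped:
            current.append(ch)              # an escaped character is taken literally
            escaped = False
        elif ch == "\\":
            escaped = True                  # '\' escapes the next character
        elif ch == "|":
            keys.append("".join(current))   # an unescaped '|' separates keys
            current = []
        else:
            current.append(ch)
    keys.append("".join(current))
    return table_id, keys
# The example above: table ID 1 with keys (a, b|b, c\c)
print(parse_string_id("1@a|b\\|b|c\\\\c"))  # ('1', ['a', 'b|b', 'c\\c'])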
Here is a summary of the options supported by the graph connector. Please refer to the API documentation of SpannerGraphConnector for details.
| Option | Summary of Purpose |
|---|---|
| spark | The Spark session to read the graph into |
| project | ID of the Google Cloud project containing the graph |
| instance | ID of the Spanner instance containing the graph |
| database | ID of the Spanner database containing the graph |
| graph | Name of the graph as defined in the database schema |
| Option | Summary of Purpose | Default |
|---|---|---|
| data_boost | Enable Data Boost | Disabled |
| partition_size_bytes | The partitionSizeBytes hint for Spanner | No hint provided |
| repartition | Enable repartitioning of node and edge DataFrames and set the target number of partitions | No repartitioning |
| read_timestamp | The timestamp of the snapshot to read from | Read the snapshot at the time when load is called |
| symmetrize_graph | Symmetrize the output graph by adding reverse edges | No symmetrization |
| export_string_ids | Output string concatenations of the element keys instead of assigning integer IDs and performing joins | Output integer IDs |
| node_label / edge_label | Specify label element filters, additional properties to fetch, and element-wise property filters (details below) | Export all nodes and edges and no element properties |
| node_query / edge_query | Overwrite the queries used to fetch nodes and edges (details below) | Use queries generated by the connector |
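For illustration, several of these options can be chained onto the builder from the earlier example. This sketch assumes each option maps to a same-named builder method with the argument types implied by the table; please check the SpannerGraphConnector API documentation for the exact signatures:
connector = (SpannerGraphConnector()
             .spark(spark)
             .project("$YourProjectId")
             .instance("$YourInstanceId")
             .database("$YourDatabaseId")
             .graph("$YourGraphId")
             .data_boost(True)         # assumed boolean flag
             .repartition(64)          # assumed target number of partitions
             .symmetrize_graph(True))  # assumed boolean flag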
You can choose to include only graph elements with specific labels by providing node_label and/or edge_label options. node_label and edge_label can also be used to specify element properties to include in the output and additional element-wise filters (i.e., WHERE clauses). The columns for the returned properties will be prefixed with "property_" to avoid naming conflicts (e.g., when fetching a property named "id").
To fetch additional properties or specify an element-wise filter without performing any filtering by label, please use "*" to match any label. Other label filters of the same type (node/edge) cannot be used if a "*" label filter is specified for that type.
This example fetches all nodes with their "name" property, all "KNOWS" edges with their "SingerId" and "FriendId" properties, and all "CREATES_MUSIC" edges with a release date after 1900-01-01:
connector = (connector
.node_label("*", properties=["name"])
.edge_label("KNOWS", properties=["SingerId", "FriendId"])
.edge_label("CREATES_MUSIC", where="release_date > '1900-01-01'"))
In addition to letting the connector generate queries to read nodes and edges from Spanner, you can provide your own GQL queries with node_query and edge_query to fetch the node and edge tables, with some restrictions:
- The queries must be root-partitionable.
- The output columns must meet the following conditions:
  - The node DataFrame must have a column named "id"; it will be used to identify nodes.
  - The edge DataFrame must have a column named "src"; it will be used to identify source nodes.
  - The edge DataFrame must have a column named "dst"; it will be used to identify destination nodes.
This example provides custom GQL queries to fetch the node and edge tables of the graph:
node_query = "SELECT * FROM GRAPH_TABLE " \
"(MusicGraph MATCH (n:SINGER) RETURN n.id AS id)"
edge_query = "SELECT * FROM GRAPH_TABLE " \
"(MusicGraph MATCH -[e:KNOWS]-> " \
"RETURN e.SingerId AS src, e.FriendId AS dst)"
connector = (connector
.node_query(node_query)
.edge_query(edge_query))
Currently, the graph connector expects the source_key and destination_key of an edge to match the node_element_key of the referenced source and destination nodes respectively (see Element Definition). For example, if an edge table E references a node table N as source nodes, and N has a two-part compound key [node_c1, node_c2] as its node_element_key, then the source_key of E must also be a two-part compound key [edge_c1, edge_c2]. A partial match, e.g. source_key = [edge_c1], could logically form a hypergraph and is not supported.
Here are the mappings for supported Spanner data types.
| Spanner GoogleSQL Type | Spark Data Type | Notes |
|---|---|---|
| ARRAY | ArrayType | Nested ARRAY is not supported, e.g. ARRAY<ARRAY>. |
| BOOL | BooleanType | |
| BYTES | BinaryType | |
| DATE | DateType | The date range is [1700-01-01, 9999-12-31]. |
| FLOAT64 | DoubleType | |
| INT64 | LongType | The supported integer range is [-9,223,372,036,854,775,808, 9,223,372,036,854,775,807]. |
| JSON | StringType | Spark has no JSON type. The values are read as String. |
| NUMERIC | DecimalType | NUMERIC is converted to DecimalType with precision 38 and scale 9, which matches the Spanner definition. |
| STRING | StringType | |
| TIMESTAMP | TimestampType | Only microseconds are converted to the Spark timestamp type. The timestamp range is [0001-01-01 00:00:00, 9999-12-31 23:59:59.999999]. |
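As a quick sanity check of these mappings, you can print the schema of a loaded DataFrame; this is plain Spark and not connector-specific:
# Assuming `df` was loaded with spark.read.format('cloud-spanner') as shown earlier,
# a NUMERIC column prints as decimal(38,9), an INT64 column as bigint, and a JSON column as string.
df.printSchema()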
The connector automatically performs column pruning and filter pushdown based on the DataFrame's SELECT statement. For example, df.select("word").where("word = 'Hamlet' or word = 'Claudius'").collect() prunes to the column word and pushes down the predicate filter word = 'Hamlet' or word = 'Claudius'. Note that filters containing an ArrayType column are not pushed down.
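To see what was actually pruned and pushed down, you can inspect the physical plan with the standard Spark explain() call; pushed filters and pruned columns appear in the scan node of the plan. This assumes a df loaded from a table with a word column, as in the example above:
pruned = df.select("word").where("word = 'Hamlet' or word = 'Claudius'")
pruned.explain()  # the scan node lists the pruned columns and the pushed filters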
Filter pushdown is currently not supported when exporting graphs.
When Data Boost is enabled, usage can be monitored with Cloud Monitoring. The page explains how to do that step by step. Note that usage cannot be grouped by Spark job ID, though.
The Dataproc web interface can be used for debugging, especially for tuning performance. The YARN Application Timeline page displays execution timeline details for the executors and other functions. You can assign more workers if many tasks are assigned to the same executor.
When Data Boost is enabled, all queries that are sent to Cloud Spanner must be root-partitionable. Please see Read data in parallel for more details. If you encounter an issue related to partitioning when using this connector, it is likely that the table being read from is not supported.
The connector supports Spanner databases with the PostgreSQL interface enabled.
| Spanner PostgreSQL Type | Spark Data Type | Notes |
|---|---|---|
| array | ArrayType | Nested array is not supported. |
| bool / boolean | BooleanType | |
| bytea | BinaryType | |
| date | DateType | The date range is [1700-01-01, 9999-12-31]. |
| double precision / float8 | DoubleType | |
| int8 / bigint | LongType | The supported integer range is [-9,223,372,036,854,775,808, 9,223,372,036,854,775,807]. |
| jsonb | StringType | Spark has no JSON type. The values are read as String. |
| numeric / decimal | DecimalType | numeric is converted to DecimalType with precision 38 and scale 9, which matches the Spanner definition. |
| varchar / text / character varying | StringType | |
| timestamptz / timestamp with time zone | TimestampType | Only microseconds are converted to the Spark timestamp type. The timestamp range is [0001-01-01 00:00:00, 9999-12-31 23:59:59.999999]. |
Since jsonb is converted to StringType in Spark, a filter containing a jsonb column can only be pushed down as a string filter. For a jsonb column, the IN filter is not pushed down to Cloud Spanner. Filters containing an array column will not be pushed down.
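For illustration, assuming a DataFrame df read from a table with a jsonb column named data (the column name is hypothetical):
# Equality on the jsonb column can be pushed down, but only as a plain string comparison.
df.where("data = '{\"a\": 1}'").show()
# An IN filter on the jsonb column is evaluated by Spark rather than pushed down to Spanner.
df.where("data IN ('{\"a\": 1}', '{\"a\": 2}')").show()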