
Custom JDBC column types #220

Closed
wants to merge 3 commits

Conversation

marctrem
Contributor

This patch allows us to set custom column types.
Please tell me if you want some edits on it.

Thank you,
Marc
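
For illustration, a minimal sketch of how the new "redshift_type" metadata key could be used (df, jdbcUrl, and tempDir are placeholders; the writer options follow spark-redshift's documented API):

    import org.apache.spark.sql.types.MetadataBuilder

    // Attach the "redshift_type" metadata key to a column; the writer will
    // use this string verbatim as the column's type in CREATE TABLE.
    val metadata = new MetadataBuilder()
      .putString("redshift_type", "VARCHAR(128)")
      .build()

    val dfWithType = df.withColumn("name", df("name").as("name", metadata))

    dfWithType.write
      .format("com.databricks.spark.redshift")
      .option("url", jdbcUrl)          // placeholder JDBC URL
      .option("dbtable", "my_table")   // placeholder table name
      .option("tempdir", tempDir)      // placeholder S3 temp dir
      .save()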

@JoshRosen
Contributor

Seems reasonable to me; do you mind adding a simple test which exercises this code path?

Signed-off-by: Marc-André Tremblay <marcandre.tr@gmail.com>
@marctrem
Contributor Author

marctrem commented Jul 17, 2016

Sorry for the delay.

I based the test on the existing "maxlength" feature test.
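
A rough sketch of what such a test might look like, modeled on the "maxlength" test (names like sqlContext, jdbcUrl, tableName, and tempDir are placeholders, not the suite's actual helpers):

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{MetadataBuilder, StringType, StructField, StructType}

    // Hypothetical sketch; the real test's helpers and assertions may differ.
    test("configuring redshift_type on columns") {
      val metadata = new MetadataBuilder().putString("redshift_type", "VARCHAR(32)").build()
      val schema = StructType(StructField("x", StringType, nullable = true, metadata) :: Nil)
      val df = sqlContext.createDataFrame(sc.parallelize(Seq(Row("a"))), schema)
      df.write
        .format("com.databricks.spark.redshift")
        .option("url", jdbcUrl)
        .option("dbtable", tableName)
        .option("tempdir", tempDir)
        .save()
      // Verify the created column type, e.g. by inspecting the table's DDL.
    }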

    val typ: String = if (field.metadata.contains("redshift_type")) {
      field.metadata.getString("redshift_type")
    }
    else {
      field.dataType match {
        case _ => throw new IllegalArgumentException(s"Don't know how to save $field to JDBC")
      }
    }
Contributor

Minor style nit: place the else on the same line as this brace.
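
Applied to the snippet above, the requested style would read:

    val typ: String = if (field.metadata.contains("redshift_type")) {
      field.metadata.getString("redshift_type")
    } else {
      field.dataType match {
        case _ => throw new IllegalArgumentException(s"Don't know how to save $field to JDBC")
      }
    }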

@JoshRosen JoshRosen added this to the 2.0.0 milestone Jul 17, 2016
@JoshRosen
Contributor

LGTM. I'll try to fix the style issue and merge conflicts myself.

@JoshRosen JoshRosen changed the title Custom jdbc column types Custom JDBC column types Jul 17, 2016
@JoshRosen JoshRosen closed this in c2dd7bb Jul 17, 2016
nrstott pushed a commit to nrstott/spark-redshift that referenced this pull request Aug 1, 2016
Author: Marc-André Tremblay <marcandre.tr@gmail.com>

This patch had conflicts when merged, resolved by Committer: Josh Rosen <joshrosen@databricks.com>

Closes databricks#220 from marctrem/custom-jdbc-column-types.
@gatorsmile

@JoshRosen We are facing the same issue in Spark's general JDBC data source. What do you think about doing the same in Spark itself?

@JoshRosen
Contributor

@gatorsmile, I wouldn't be opposed.

@gatorsmile

Thanks!

@gatorsmile

@JoshRosen We submitted two solutions:

I am wondering which solution you prefer. The first one sounds more user-friendly.

    val mdb = new MetadataBuilder()
    mdb.putString("name", "VARCHAR(128)")
    mdb.putString("comments", "CLOB(20K)")
    val createTableColTypes = mdb.build().json
    df.write.option("createTableColumnTypes", createTableColTypes).jdbc(url, "TEST.DBCOLTYPETEST", properties)
