[SPARK-23094][SPARK-23723][SPARK-23724][SQL][FOLLOW-UP] Support custom encoding for json files #21254
Conversation
cc @MaxGekk @HyukjinKwon Do we have any behavior change after the previous PR: #20937?
Nope, I am quite sure that we don't have any kind of hidden behaviour change. Both …
Adding the test seems OK if you feel that way. It might have to be removed within the next release, though, since we should allow this case anyway.
withTempPath { path =>
  val ds = spark.createDataset(Seq(
    ("a", 1), ("b", 2), ("c", 3))
  ).repartition(2)
we don't have to repartition though.
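For context, a hedged sketch of how the test body above might continue: it writes the dataset as JSON in an explicit charset with multiline off and expects the write to be rejected while the restriction is in place. The exception type, the exact option values, and the surrounding test helpers (`withTempPath` from SQLTestUtils, `spark` and `intercept` from a SharedSparkSession/ScalaTest suite) are assumptions, not quoted from this PR.

```scala
// Sketch only: assumes a Spark SQL test suite where SharedSparkSession
// provides `spark`, SQLTestUtils provides `withTempPath`, and ScalaTest
// provides `intercept`. Exception type/message are assumptions.
import spark.implicits._

Seq("UTF-16", "UTF-32").foreach { encoding =>
  withTempPath { path =>
    val ds = spark.createDataset(Seq(
      ("a", 1), ("b", 2), ("c", 3))
    ).repartition(2) // kept from the snippet above; per the review it is not required
    // While the restriction is in effect, writing these charsets with
    // multiline off is expected to fail.
    val e = intercept[IllegalArgumentException] {
      ds.write
        .option("encoding", encoding)
        .option("multiline", "false")
        .json(path.getCanonicalPath)
    }
    assert(e.getMessage.contains(encoding))
  }
}
```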
Test build #90292 has finished for PR 21254 at commit …
The PR brought the restrictions. As @HyukjinKwon wrote above, PR #21247 eliminates the restrictions on write, but they don't break previous behavior (before #20937) in any case.
retest this please
Test build #90347 has finished for PR 21254 at commit …
Merged to master.
What changes were proposed in this pull request?
This adds a test case to check the behavior when users write JSON in a specified UTF-16/UTF-32 encoding with multiline off (a usage sketch follows below).
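For readers unfamiliar with the feature under test, a minimal sketch of the user-facing options involved, assuming the JSON data source option names `encoding`, `lineSep`, and `multiLine`; the session, DataFrame, path, and option values are illustrative, not taken from this PR:

```scala
// Hypothetical end-to-end usage. Assumes a SparkSession `spark` and a
// DataFrame `df` already exist.
df.write
  .option("encoding", "UTF-16")  // or "UTF-32"
  .option("lineSep", "\n")       // explicit separator; assumed necessary
                                 // for non-UTF-8 charsets
  .json("/tmp/json-utf16")

// Reading back with multiline off: the charset is stated explicitly,
// since per-line auto-detection may not cover UTF-16/UTF-32 (assumption).
val readBack = spark.read
  .option("encoding", "UTF-16")
  .option("multiLine", "false")
  .option("lineSep", "\n")
  .json("/tmp/json-utf16")
```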
How was this patch tested?
N/A