[SPARK-23158] [SQL] Move HadoopFsRelationTest test suites from sql/hive to sql/core #20331
Conversation
  }
}

class HiveOrcHadoopFsRelationSuite extends OrcHadoopFsRelationSuite {
This is Hive-only, so create a separate file for it.
Thank you!
spark.range(0, 10).write
  .orc(file.getCanonicalPath)
val expectedCompressionKind =
  OrcFileOperator.getFileReader(file.getCanonicalPath).get.getCompression
OrcFileOperator is defined in sql/hive.
@gatorsmile, this test case should also be run against the native implementation; HiveOrcHadoopFsRelationSuite only covers the Hive implementation.
assert(maybeOrcFile.isDefined)
val orcFilePath = maybeOrcFile.get.toPath.toString
val expectedCompressionKind =
  OrcFileOperator.getFileReader(orcFilePath).get.getCompression
The same here.
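Since OrcFileOperator lives in sql/hive, a suite moved to sql/core cannot use it. As a hedged sketch (not code from this PR), the compression kind could instead be read with the standalone Apache ORC reader API, which has no sql/hive dependency; the file path below is a placeholder:

```scala
// Hypothetical sketch: read an ORC file's compression kind via the standalone
// Apache ORC reader instead of the sql/hive-only OrcFileOperator.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.orc.OrcFile

val conf = new Configuration()
val reader = OrcFile.createReader(
  new Path("part-00000.orc"),          // placeholder path
  OrcFile.readerOptions(conf))
// getCompressionKind returns an enum value such as NONE, ZLIB, or SNAPPY.
val compression = reader.getCompressionKind
```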
retest this please
Test build #86392 has finished for PR 20331 at commit
Test build #86393 has finished for PR 20331 at commit
Test build #86402 has finished for PR 20331 at commit
retest this please
Test build #86403 has finished for PR 20331 at commit
Test build #86412 has finished for PR 20331 at commit
cc @cloud-fan
Please keep the test coverage for the following two test cases for the native ORCFileFormat.
test("SPARK-13543: Support for specifying compression codec for ORC via option()")
test("Default compression codec is snappy for ORC compression")
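The two behaviors those tests cover can be sketched as follows. This is a hedged illustration, not the suites' actual code; `spark` is an active SparkSession and the output paths are placeholders:

```scala
// Explicit codec via option(), as exercised by the SPARK-13543 test.
spark.range(0, 10).write
  .option("compression", "zlib")   // codec chosen through the writer option
  .orc("/tmp/orc_zlib")            // placeholder path

// No option given: per the second test, ORC output defaults to snappy.
spark.range(0, 10).write
  .orc("/tmp/orc_default")         // placeholder path
```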
val df =
  spark.read.format(dataSourceName).option("multiLine", true).schema(schema).load(path)
checkAnswer(df, Row(null, expected))
withSQLConf(SQLConf.MAX_RECORDS_PER_FILE.key -> "2") {
Just curious, why this change?
The test will fail if SQLConf.MAX_RECORDS_PER_FILE is set to a value less than 2.
I think the default value won't be less than 2; we don't need to be so careful.
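For context, the config being pinned here limits how many rows land in each output file. A hedged sketch of the pattern under discussion, assuming the usual `withSQLConf` test helper and a placeholder output path:

```scala
// withSQLConf temporarily overrides spark.sql.files.maxRecordsPerFile for the
// enclosed block, then restores the previous value afterwards.
withSQLConf(SQLConf.MAX_RECORDS_PER_FILE.key -> "2") {
  // With 10 rows and at most 2 records per file, the writer splits the
  // single partition's output across several files.
  spark.range(0, 10).repartition(1).write.orc("/tmp/orc_split") // placeholder path
}
```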
Please remember this comment during the next update, @gatorsmile. Thanks.
What changes were proposed in this pull request?
The test suites that extend HadoopFsRelationTest do not belong to sql/hive packages, yet their files live under the sql/hive directory. We should move them to sql/core.
How was this patch tested?
The existing tests.