SPARK-45959. added new tests. Handled flattening of Project when done… #261
build_main.yml
on: push
Run / Check changes (34s)
Run / Protobuf breaking change detection and Python CodeGen check (1m 3s)
Run / Run TPC-DS queries with SF=1 (38m 2s)
Run / Run Docker integration tests (25m 31s)
Run / Run Spark on Kubernetes Integration test (37m 58s)
Run / Run Spark UI tests (19s)
Matrix: Run / build
Matrix: Run / java-other-versions
Run / Build modules: sparkr (27m 24s)
Run / Linters, licenses, dependencies and documentation generation (25m 41s)
Matrix: Run / pyspark
Annotations
48 errors and 10 warnings
Run / Linters, licenses, dependencies and documentation generation
Process completed with exit code 1.
Run / Build modules: pyspark-sql, pyspark-resource, pyspark-testing
The run was canceled by @ahshahid.
Run / Build modules: pyspark-sql, pyspark-resource, pyspark-testing
The operation was canceled.
Run / Build modules: pyspark-sql, pyspark-resource, pyspark-testing
Value cannot be null. (Parameter 'ContainerId')
Run / Build modules: pyspark-sql, pyspark-resource, pyspark-testing
Value cannot be null. (Parameter 'ContainerId')
Run / Build modules: pyspark-pandas
The run was canceled by @ahshahid.
Run / Build modules: pyspark-pandas
Process completed with exit code 19.
Run / Java 17 build with Maven
The run was canceled by @ahshahid.
Run / Java 17 build with Maven
The operation was canceled.
Run / Build modules: api, catalyst, hive-thriftserver
The run was canceled by @ahshahid.
Run / Build modules: api, catalyst, hive-thriftserver
The operation was canceled.
Run / Run Spark on Kubernetes Integration test
The run was canceled by @ahshahid.
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-95c9848c6e3d6e68-exec-1".
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-b81d1e8c6e3e4b5c-exec-1".
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$685/0x00007f84685c8460@4bd2faf4 rejected from java.util.concurrent.ThreadPoolExecutor@1306b841[Shutting down, pool size = 2, active threads = 2, queued tasks = 0, completed tasks = 311]
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$685/0x00007f84685c8460@7a22b666 rejected from java.util.concurrent.ThreadPoolExecutor@1306b841[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 312]
Run / Run Spark on Kubernetes Integration test
The operation was canceled.
Run / Build modules: pyspark-pandas-connect-part3
The run was canceled by @ahshahid.
Run / Build modules: hive - other tests
The run was canceled by @ahshahid.
Run / Build modules: pyspark-pandas-connect-part3
The operation was canceled.
Run / Build modules: hive - other tests
The operation was canceled.
Run / Build modules: hive - slow tests
The run was canceled by @ahshahid.
Run / Build modules: hive - slow tests
The operation was canceled.
Run / Build modules: pyspark-pandas-connect-part2
The run was canceled by @ahshahid.
Run / Build modules: pyspark-pandas-connect-part2
The operation was canceled.
Run / Run TPC-DS queries with SF=1
The run was canceled by @ahshahid.
Run / Run TPC-DS queries with SF=1
The operation was canceled.
Run / Build modules: pyspark-pandas-connect-part0
The run was canceled by @ahshahid.
Run / Build modules: pyspark-pandas-connect-part0
The operation was canceled.
Run / Build modules: sql - extended tests
The run was canceled by @ahshahid.
Run / Build modules: sql - extended tests
The operation was canceled.
Run / Build modules: pyspark-pandas-slow
The run was canceled by @ahshahid.
Run / Build modules: pyspark-pandas-slow
The operation was canceled.
Run / Build modules: pyspark-mllib, pyspark-ml, pyspark-ml-connect
The run was canceled by @ahshahid.
Run / Build modules: pyspark-mllib, pyspark-ml, pyspark-ml-connect
The operation was canceled.
Run / Build modules: sql - slow tests
The run was canceled by @ahshahid.
Run / Build modules: sql - slow tests
The operation was canceled.
Run / Build modules: pyspark-pandas-connect-part1
The run was canceled by @ahshahid.
Run / Build modules: pyspark-pandas-connect-part1
The operation was canceled.
Run / Build modules: sql - other tests
The run was canceled by @ahshahid.
Run / Build modules: sql - other tests
The operation was canceled.
Run / Build modules: pyspark-connect
The run was canceled by @ahshahid.
Run / Build modules: pyspark-connect
The operation was canceled.
python/pyspark/pandas/tests/test_frame_spark.py.test_hint:
python/pyspark/pandas/tests/test_frame_spark.py#L1
Column value_x#443L are ambiguous. It's probably because you joined several Datasets together, and some of these Datasets are the same. This column points to one of the Datasets but Spark is unable to figure out which one. Please alias the Datasets with different names via `Dataset.as` before joining them, and specify the column using qualified name, e.g. `df.as("a").join(df.as("b"), $"a.id" > $"b.id")`. You can also set spark.sql.analyzer.failAmbiguousSelfJoin to false to disable this check.
JVM stacktrace:
org.apache.spark.sql.AnalysisException: Column value_x#443L are ambiguous. It's probably because you joined several Datasets together, and some of these Datasets are the same. This column points to one of the Datasets but Spark is unable to figure out which one. Please alias the Datasets with different names via `Dataset.as` before joining them, and specify the column using qualified name, e.g. `df.as("a").join(df.as("b"), $"a.id" > $"b.id")`. You can also set spark.sql.analyzer.failAmbiguousSelfJoin to false to disable this check.
at org.apache.spark.sql.errors.QueryCompilationErrors$.ambiguousAttributesInSelfJoinError(QueryCompilationErrors.scala:1986)
at org.apache.spark.sql.execution.analysis.DetectAmbiguousSelfJoin$.apply(DetectAmbiguousSelfJoin.scala:161)
at org.apache.spark.sql.execution.analysis.DetectAmbiguousSelfJoin$.apply(DetectAmbiguousSelfJoin.scala:45)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:222)
at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:183)
at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:179)
at scala.collection.immutable.List.foldLeft(List.scala:79)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:219)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:211)
at scala.collection.immutable.List.foreach(List.scala:333)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:211)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:225)
at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:221)
at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:177)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:221)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:192)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:182)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:89)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:182)
at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:213)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:212)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:88)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:230)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:557)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:230)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:918)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:229)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:88)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:85)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:69)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:93)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:918)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:91)
at org.apache.spark.sql.Dataset.withPlan(Dataset.scala:4474)
at org.apache.spark.sql.Dataset.$anonfun$select$1(Dataset.scala:1581)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83)
at org.apache.spark.sql.package$.withOrigin(package.scala:110)
at org.apache.spark.sql.Dataset.select(Dataset.scala:1557)
at jdk.internal.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:840)
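The error message above already names the two workarounds for the ambiguous self-join: alias each side of the join and qualify the columns, or disable the analyzer check. A minimal PySpark sketch of both options (the DataFrame and column names here are illustrative, not taken from test_hint):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.master("local[1]").getOrCreate()
df = spark.range(5).toDF("value")

# Referencing the same column of the same Dataset on both sides of a
# self-join, e.g. df.join(df, df["value"] > df["value"]), is the shape
# of query that DetectAmbiguousSelfJoin rejects with the error above.

# Option 1 (recommended by the error message): alias each side and
# refer to the columns through the aliases.
joined = df.alias("a").join(df.alias("b"), col("a.value") > col("b.value"))
joined.show()

# Option 2: disable the check globally. This silences the error rather
# than resolving the ambiguity, so aliasing is usually the safer fix.
spark.conf.set("spark.sql.analyzer.failAmbiguousSelfJoin", "false")
```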
Run / Build modules: pyspark-core, pyspark-streaming
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-errors
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-connect-part3
No files were found with the provided path: **/target/unit-tests.log. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-connect-part3
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-connect-part2
No files were found with the provided path: **/target/unit-tests.log. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-connect-part2
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-connect-part0
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-connect-part0
No files were found with the provided path: **/target/unit-tests.log. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-slow
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-pandas-slow
No files were found with the provided path: **/target/unit-tests.log. No artifacts will be uploaded.
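These "No files were found" warnings follow from the cancellations earlier in the run: the canceled jobs never wrote test reports or unit-test logs, so the artifact-upload globs matched nothing. A quick local sanity check of the same patterns (a sketch using Python's pathlib, assuming it is run from the repository root):

```python
from pathlib import Path

# The same glob patterns the artifact-upload steps search for.
for pattern in ("**/target/test-reports/*.xml", "**/target/unit-tests.log"):
    matches = list(Path(".").glob(pattern))
    print(f"{pattern}: {len(matches)} file(s) found")
```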
Artifacts
Produced during runtime
Name | Size | Status
---|---|---
spark-on-kubernetes-it-log | 394 KB | Expired
test-results-api, catalyst, hive-thriftserver--17-hadoop3-hive2.3 | 160 KB | Expired
test-results-core, unsafe, kvstore, avro, utils, network-common, network-shuffle, repl, launcher, examples, sketch--17-hadoop3-hive2.3 | 133 KB | Expired
test-results-docker-integration--17-hadoop3-hive2.3 | 119 KB | Expired
test-results-hive-- other tests-17-hadoop3-hive2.3 | 546 KB | Expired
test-results-hive-- slow tests-17-hadoop3-hive2.3 | 564 KB | Expired
test-results-mllib-local, mllib, graphx--17-hadoop3-hive2.3 | 1.32 MB | Expired
test-results-pyspark-connect--17-hadoop3-hive2.3 | 408 KB | Expired
test-results-pyspark-mllib, pyspark-ml, pyspark-ml-connect--17-hadoop3-hive2.3 | 468 KB | Expired
test-results-pyspark-pandas--17-hadoop3-hive2.3 | 883 KB | Expired
test-results-pyspark-pandas-connect-part1--17-hadoop3-hive2.3 | 193 KB | Expired
test-results-sparkr--17-hadoop3-hive2.3 | 280 KB | Expired
test-results-sql-- extended tests-17-hadoop3-hive2.3 | 714 KB | Expired
test-results-sql-- other tests-17-hadoop3-hive2.3 | 1.88 MB | Expired
test-results-sql-- slow tests-17-hadoop3-hive2.3 | 1.22 MB | Expired
test-results-streaming, sql-kafka-0-10, streaming-kafka-0-10, streaming-kinesis-asl, yarn, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf--17-hadoop3-hive2.3 | 682 KB | Expired
test-results-tpcds--17-hadoop3-hive2.3 | 21.8 KB | Expired
unit-tests-log-api, catalyst, hive-thriftserver--17-hadoop3-hive2.3 | 153 MB | Expired
unit-tests-log-hive-- other tests-17-hadoop3-hive2.3 | 76.1 MB | Expired
unit-tests-log-hive-- slow tests-17-hadoop3-hive2.3 | 86.3 MB | Expired
unit-tests-log-pyspark-connect--17-hadoop3-hive2.3 | 1.85 GB | Expired
unit-tests-log-pyspark-mllib, pyspark-ml, pyspark-ml-connect--17-hadoop3-hive2.3 | 118 MB | Expired
unit-tests-log-pyspark-pandas--17-hadoop3-hive2.3 | 166 MB | Expired
unit-tests-log-pyspark-pandas-connect-part1--17-hadoop3-hive2.3 | 1.2 GB | Expired
unit-tests-log-sql-- extended tests-17-hadoop3-hive2.3 | 153 MB | Expired
unit-tests-log-sql-- other tests-17-hadoop3-hive2.3 | 192 MB | Expired
unit-tests-log-sql-- slow tests-17-hadoop3-hive2.3 | 281 MB | Expired
unit-tests-log-streaming, sql-kafka-0-10, streaming-kafka-0-10, streaming-kinesis-asl, yarn, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf--17-hadoop3-hive2.3 | 394 MB | Expired
unit-tests-log-tpcds--17-hadoop3-hive2.3 | 21.1 MB | Expired