[SPARK-44936][CORE] Simplify the log when Spark HybridStore hits the … #32
build_main.yml
on: push
Job | Duration
---|---
Run / Check changes | 41s
Run / Breaking change detection with Buf (branch-3.5) | 1m 1s
Run / Scala 2.13 build with SBT | 18m 38s
Run / Run TPC-DS queries with SF=1 | 1h 9m
Run / Run Docker integration tests | 47m 14s
Run / Run Spark on Kubernetes Integration test | 1h 24m
Matrix: Run / build |
Matrix: Run / java-11-17 |
Run / Build modules: sparkr | 50m 36s
Run / Linters, licenses, dependencies and documentation generation | 2h 13m
Matrix: Run / pyspark |
Annotations (21 errors and 1 warning)
Run / Run Spark on Kubernetes Integration test:
- Set() did not contain "decomtest-d8905d8a24c44910-exec-1".
- Set() did not contain "decomtest-3c9d2e8a24c57e1a-exec-1".
- Set() did not contain "decomtest-8263cf8a24c9c395-exec-1".
- Status: Failure (code=404, reason=NotFound, kind=pods): pods "spark-test-app-6f5fb605c1254eeb9b0053a7f7d99a11-driver" not found.
- Set() did not contain "decomtest-1a0aef8a24e044f6-exec-1".
- Set() did not contain "decomtest-5e961c8a24e17fe7-exec-1".
- Set() did not contain "decomtest-f513788a24e5b40e-exec-1".
- Status: Failure (code=404, reason=NotFound, kind=pods): pods "spark-test-app-40cea814d7ac4ba9b187f3cc499c59b6-driver" not found.
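The repeated "Set() did not contain ..." messages are scalatest assertion failures; given the decomtest pod prefix, they appear to come from the decommissioning integration test, where a set of observed executor pod names stayed empty past the timeout. A hypothetical sketch of that assertion shape (seenExecPods and expectedPod are illustrative names, not Spark's actual suite code):

```scala
// Hypothetical sketch of the assertion shape behind "Set() did not contain ...".
// seenExecPods / expectedPod are illustrative; this is not Spark's suite code.
import org.scalatest.matchers.should.Matchers._

val seenExecPods: Set[String] = Set.empty // no executor pod was observed in time
val expectedPod = "decomtest-d8905d8a24c44910-exec-1"

// Fails with a TestFailedException whose message reports that the (empty) set
// did not contain the expected pod name, as in the annotations above.
seenExecPods should contain (expectedPod)
```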
Run / Build modules: sql - slow tests:
- File file:/home/runner/work/spark/spark/target/tmp/spark-afffd30e-3ae6-480c-98c5-9b33b60fd798/state/0/1 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-afffd30e-3ae6-480c-98c5-9b33b60fd798/state/0/1 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-afffd30e-3ae6-480c-98c5-9b33b60fd798/state/0/0 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-afffd30e-3ae6-480c-98c5-9b33b60fd798/state/0/3 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-afffd30e-3ae6-480c-98c5-9b33b60fd798/state/0/2 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-72e8f329-0810-4d4c-9a19-69a13117a339/state/0/1 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-3c73b952-99e7-4439-990b-82706ef62043/state/0/0 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-afffd30e-3ae6-480c-98c5-9b33b60fd798/state/0/3 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-72e8f329-0810-4d4c-9a19-69a13117a339/state/0/4 does not exist
- File file:/home/runner/work/spark/spark/target/tmp/spark-3c73b952-99e7-4439-990b-82706ef62043/state/0/1 does not exist
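The state/0/N paths in these failures follow Structured Streaming's state checkpoint layout, <checkpointLocation>/state/<operatorId>/<partitionId>/, so state/0/1 is operator 0, partition 1; the message means a recovering query looked for state files that were already gone from the temp directory. A minimal sketch of a stateful query that writes that layout, assuming a local session and an illustrative checkpoint path:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("state-layout-sketch")
  .master("local[2]")
  .getOrCreate()
import spark.implicits._

// A stateful aggregation: Spark keeps the running counts in a state store,
// checkpointed under <checkpointLocation>/state/<operatorId>/<partitionId>/.
val query = spark.readStream
  .format("rate").option("rowsPerSecond", "10").load()
  .groupBy(($"value" % 5).as("bucket")).count()
  .writeStream
  .format("memory").queryName("counts")
  .outputMode("complete")
  .option("checkpointLocation", "/tmp/demo-ckpt") // illustrative path; state lands in /tmp/demo-ckpt/state/0/<partition>
  .start()
```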
Run / Build modules: core, unsafe, kvstore, avro, network-common, network-shuffle, repl, launcher, examples, sketch, graphx:
- java.lang.InterruptedException
KafkaMicroBatchV1SourceWithAdminSuite.compositeReadLimit (KafkaMicroBatchV1SourceWithAdminSuite#L1):
org.scalatest.exceptions.TestFailedException:
== Results ==
!== Correct Answer - 52 == == Spark Answer - 39 ==
struct<value:int> struct<value:int>
[100] [100]
[101] [101]
[102] [102]
[103] [103]
[104] [104]
[105] [105]
[106] [106]
[107] [107]
[108] [108]
[109] [109]
[10] [10]
[110] [110]
[111] [111]
[112] [112]
[113] [113]
[114] [114]
[115] [115]
[116] [116]
[117] [117]
[118] [118]
[119] [119]
[11] [11]
[120] [120]
[121] [121]
![122] [12]
![123] [13]
![124] [14]
![125] [15]
![126] [16]
![127] [17]
![128] [18]
![12] [19]
![13] [1]
![14] [20]
![15] [21]
![16] [22]
![17] [23]
![18] [24]
![19] [2]
![1]
![20]
![21]
![22]
![23]
![24]
![25]
![26]
![27]
![28]
![29]
![2]
![30]
== Progress ==
StartStream(ProcessingTimeTrigger(100),org.apache.spark.sql.streaming.util.StreamManualClock@1982b947,Map(),null)
AssertOnQuery(<condition>, )
CheckAnswer: [1],[10],[100],[101],[102],[103],[104],[105],[106],[107],[11],[108],[109],[110],[111],[12],[13],[14],[15]
AdvanceManualClock(100)
org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$anonfun$advanceSystemClock$1$1@588efd6f
AssertOnQuery(<condition>, )
CheckNewAnswer:
Assert(<condition>, )
AdvanceManualClock(100)
org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$anonfun$advanceSystemClock$1$1@2015b01f
AssertOnQuery(<condition>, )
CheckAnswer: [1],[10],[100],[101],[102],[103],[104],[105],[106],[107],[11],[108],[109],[110],[111],[112],[113],[114],[115],[116],[12],[117],[118],[119],[120],[121],[13],[14],[15],[16],[17],[18],[19],[2],[20],[21],[22],[23],[24]
AdvanceManualClock(100)
org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$anonfun$advanceSystemClock$1$1@55a66e9a
AssertOnQuery(<condition>, )
CheckNewAnswer:
AdvanceManualClock(100)
org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$anonfun$advanceSystemClock$1$1@1cfcd8e1
AssertOnQuery(<condition>, )
=> CheckAnswer: [1],[10],[100],[101],[102],[103],[104],[105],[106],[107],[11],[108],[109],[110],[111],[112],[113],[114],[115],[116],[117],[118],[119],[12],[120],[121],[122],[123],[124],[125],[126],[127],[128],[13],[14],[15],[16],[17],[18],[19],[2],[20],[21],[22],[23],[24],[25],[26],[27],[28],[29],[30]
== Stream ==
Output Mode: Append
Stream state: {KafkaSourceV1[Subscribe[topic-46]]: {"topic-46":{"2":2,"1":15,"0":22}}}
Thread state: alive
Thread stack trace: java.lang.Object.wait(Native Method)
org.apache.spark.util.ManualClock.waitTillTime(ManualClock.scala:67)
org.apache.spark.sql.streaming.util.StreamManualClock.waitTillTime(StreamManualClock.scala:34)
org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:76)
org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:239)
org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:311)
org.apache.spark.sql.execution.streaming.StreamExecution$$Lambda$5467/1364495330.apply$mcV$sp(Unknown Source)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:289)
org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$1(StreamExecution.scala:211)
org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1$$Lambda$5462/1151169484.apply$mcV$sp(Unknown Source)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:211)
== Sink ==
0: [10] [11] [12] [13] [14] [15] [100] [101] [102] [103] [104] [105] [106] [107] [108] [109] [110] [111] [1]
1: [16] [17] [18] [19] [20] [21] [22] [23] [24] [112] [113] [114] [115] [116] [117] [118] [119] [120] [121] [2]
== Plan ==
== Parsed Logical Plan ==
WriteToMicroBatchDataSource MemorySink, 86ea20c4-46bc-47f2-9b5c-7e3671741f0c, Append, 1
+- SerializeFromObject [input[0, int, false] AS value#20100]
+- MapElements org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$Lambda$6538/1603768763@240db0eb, class scala.Tuple2, [StructField(_1,StringType,true), StructField(_2,StringType,true)], obj#20099: int
+- DeserializeToObject newInstance(class scala.Tuple2), obj#20098: scala.Tuple2
+- Project [cast(key#20074 as string) AS key#20088, cast(value#20075 as string) AS value#20089]
+- Project [key#20146 AS key#20074, value#20147 AS value#20075, topic#20148 AS topic#20076, partition#20149 AS partition#20077, offset#20150L AS offset#20078L, timestamp#20151 AS timestamp#20079, timestampType#20152 AS timestampType#20080]
+- LogicalRDD [key#20146, value#20147, topic#20148, partition#20149, offset#20150L, timestamp#20151, timestampType#20152], true
== Analyzed Logical Plan ==
WriteToMicroBatchDataSource MemorySink, 86ea20c4-46bc-47f2-9b5c-7e3671741f0c, Append, 1
+- SerializeFromObject [input[0, int, false] AS value#20100]
+- MapElements org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$Lambda$6538/1603768763@240db0eb, class scala.Tuple2, [StructField(_1,StringType,true), StructField(_2,StringType,true)], obj#20099: int
+- DeserializeToObject newInstance(class scala.Tuple2), obj#20098: scala.Tuple2
+- Project [cast(key#20074 as string) AS key#20088, cast(value#20075 as string) AS value#20089]
+- Project [key#20146 AS key#20074, value#20147 AS value#20075, topic#20148 AS topic#20076, partition#20149 AS partition#20077, offset#20150L AS offset#20078L, timestamp#20151 AS timestamp#20079, timestampType#20152 AS timestampType#20080]
+- LogicalRDD [key#20146, value#20147, topic#20148, partition#20149, offset#20150L, timestamp#20151, timestampType#20152], true
== Optimized Logical Plan ==
WriteToDataSourceV2 MicroBatchWrite[epoch: 1, writer: org.apache.spark.sql.execution.streaming.sources.MemoryStreamingWrite@51fdaf74]
+- SerializeFromObject [input[0, int, false] AS value#20100]
+- MapElements org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$Lambda$6538/1603768763@240db0eb, class scala.Tuple2, [StructField(_1,StringType,true), StructField(_2,StringType,true)], obj#20099: int
+- DeserializeToObject newInstance(class scala.Tuple2), obj#20098: scala.Tuple2
+- Project [cast(key#20146 as string) AS key#20088, cast(value#20147 as string) AS value#20089]
+- LogicalRDD [key#20146, value#20147, topic#20148, partition#20149, offset#20150L, timestamp#20151, timestampType#20152], true
== Physical Plan ==
WriteToDataSourceV2 MicroBatchWrite[epoch: 1, writer: org.apache.spark.sql.execution.streaming.sources.MemoryStreamingWrite@51fdaf74], org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy$$Lambda$5584/980300870@a991a96
+- *(1) SerializeFromObject [input[0, int, false] AS value#20100]
+- *(1) MapElements org.apache.spark.sql.kafka010.KafkaMicroBatchSourceSuiteBase$$Lambda$6538/1603768763@240db0eb, obj#20099: int
+- *(1) DeserializeToObject newInstance(class scala.Tuple2), obj#20098: scala.Tuple2
+- *(1) Project [cast(key#20146 as string) AS key#20088, cast(value#20147 as string) AS value#20089]
+- *(1) Scan ExistingRDD kafka[key#20146,value#20147,topic#20148,partition#20149,offset#20150L,timestamp#20151,timestampType#20152]
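The compositeReadLimit failure above checks the rows the sink has received at each manual-clock tick under the Kafka source's combined read limits: minOffsetsPerTrigger sets a floor on how many new offsets a micro-batch waits for, maxOffsetsPerTrigger caps the batch size, and maxTriggerDelay forces a batch even if the floor is not reached. A hedged sketch of wiring those options (broker address and bounds are illustrative; the topic name is taken from the stream state above, and the spark-sql-kafka-0-10 package is assumed on the classpath):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("read-limit-sketch")
  .master("local[2]")
  .getOrCreate()

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // illustrative broker
  .option("subscribe", "topic-46")                     // topic name from the stream state above
  .option("minOffsetsPerTrigger", "15") // floor: wait until at least 15 new offsets exist...
  .option("maxOffsetsPerTrigger", "30") // ...cap: never read more than 30 per micro-batch
  .option("maxTriggerDelay", "5m")      // ...but fire a batch anyway after 5 minutes
  .load()
```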
Run / Build modules: pyspark-errors (warning):
- No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Artifacts

Produced during runtime

Name | Status | Size
---|---|---
site | Expired | 58.2 MB
test-results-catalyst, hive-thriftserver--8-hadoop3-hive2.3 | Expired | 3.01 MB
test-results-core, unsafe, kvstore, avro, network-common, network-shuffle, repl, launcher, examples, sketch, graphx--8-hadoop3-hive2.3 | Expired | 2.79 MB
test-results-docker-integration--8-hadoop3-hive2.3 | Expired | 133 KB
test-results-hive-- other tests-8-hadoop3-hive2.3 | Expired | 1.1 MB
test-results-hive-- slow tests-8-hadoop3-hive2.3 | Expired | 943 KB
test-results-pyspark-connect--8-hadoop3-hive2.3 | Expired | 362 KB
test-results-pyspark-core, pyspark-streaming--8-hadoop3-hive2.3 | Expired | 78.2 KB
test-results-pyspark-mllib, pyspark-ml, pyspark-ml-connect--8-hadoop3-hive2.3 | Expired | 550 KB
test-results-pyspark-pandas--8-hadoop3-hive2.3 | Expired | 1.08 MB
test-results-pyspark-pandas-connect--8-hadoop3-hive2.3 | Expired | 1.52 MB
test-results-pyspark-pandas-slow--8-hadoop3-hive2.3 | Expired | 1.44 MB
test-results-pyspark-pandas-slow-connect--8-hadoop3-hive2.3 | Expired | 864 KB
test-results-pyspark-sql, pyspark-resource, pyspark-testing--8-hadoop3-hive2.3 | Expired | 364 KB
test-results-sparkr--8-hadoop3-hive2.3 | Expired | 279 KB
test-results-sql-- extended tests-8-hadoop3-hive2.3 | Expired | 3.37 MB
test-results-sql-- other tests-8-hadoop3-hive2.3 | Expired | 4.66 MB
test-results-sql-- slow tests-8-hadoop3-hive2.3 | Expired | 3.17 MB
test-results-streaming, sql-kafka-0-10, streaming-kafka-0-10, mllib-local, mllib, yarn, mesos, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf--8-hadoop3-hive2.3 | Expired | 273 KB
test-results-tpcds--8-hadoop3-hive2.3 | Expired | 22.6 KB
unit-tests-log-streaming, sql-kafka-0-10, streaming-kafka-0-10, mllib-local, mllib, yarn, mesos, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf--8-hadoop3-hive2.3 | Expired | 269 MB