
Parquet-readOnly-log

Parquet-readOnly-debug-log-executor

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0-SNAPSHOT
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_91)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 17/01/23 13:38:37 17902 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #39
17/01/23 13:38:37 17903 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #39
17/01/23 13:38:37 17903 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
:load apps/jars-atr/bench-join.scala
Loading apps/jars-atr/bench-join.scala...
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.{SparkSession, SaveMode}
import org.apache.spark.storage.StorageLevel
17/01/23 13:38:38 18904 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #40
17/01/23 13:38:38 18904 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #40
17/01/23 13:38:38 18904 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 0ms
import org.apache.spark.sql.Dataset
import scala.collection.mutable.StringBuilder
import org.apache.spark.sql.types._
s: Long = 1554475426810353
Loading apps/jars-atr/saveDF.scala...
import org.apache.spark.sql.{SparkSession, SaveMode}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.sql.Dataset
saveDF: (data: org.apache.spark.sql.Dataset[_], tabName: String, doNullOutput: Boolean)Unit
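The body of saveDF is not shown in this log, only its signature above. A minimal sketch of what a helper with that signature might do, assuming doNullOutput selects between merely materializing the data and writing it out as Parquet (the format used elsewhere in this run):

```scala
import org.apache.spark.sql.{Dataset, SaveMode}

// Hypothetical reconstruction; the real apps/jars-atr/saveDF.scala is not shown in this log.
def saveDF(data: Dataset[_], tabName: String, doNullOutput: Boolean): Unit = {
  if (doNullOutput) {
    // Force evaluation but discard the output.
    data.count()
  } else {
    // Write the dataset as Parquet under the given table/path name.
    data.write.mode(SaveMode.Overwrite).parquet(tabName)
  }
}
```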
17/01/23 13:38:39 19905 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #41
17/01/23 13:38:39 19905 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #41
17/01/23 13:38:39 19905 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 0ms
hdfs: String = ""
crail: String = crail://flex11-40g0:9060
fs: String = ""
doNull: Boolean = false
suffix: String = ""
f1: String = /sql/parquet-100m
f2: String = /sql/parquet-100m2
fo: String = /p-o
17/01/23 13:38:40 20907 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #42
17/01/23 13:38:40 20908 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #42
17/01/23 13:38:40 20908 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
sch: org.apache.spark.sql.types.StructType = StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))
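For reference, the StructType printed above corresponds to the following explicit schema definition; field names, types, and nullability are taken verbatim from the log line for sch:

```scala
import org.apache.spark.sql.types._

val sch = StructType(Seq(
  StructField("randInt",    IntegerType, nullable = true),
  StructField("randLong",   LongType,    nullable = true),
  StructField("randDouble", DoubleType,  nullable = true),
  StructField("randFloat",  FloatType,   nullable = true),
  StructField("randString", StringType,  nullable = true)))
```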
17/01/23 13:38:40 21250 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #43
17/01/23 13:38:40 21250 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #43
17/01/23 13:38:40 21250 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/01/23 13:38:40 21253 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #44
17/01/23 13:38:40 21253 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #44
17/01/23 13:38:40 21253 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/01/23 13:38:40 21261 DEBUG PartitioningAwareFileIndex:  paths.size : 1 discoveryThreshold: 32
17/01/23 13:38:40 21263 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #45
17/01/23 13:38:40 21264 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #45
17/01/23 13:38:40 21264 DEBUG ProtobufRpcEngine: Call: getListing took 1ms
17/01/23 13:38:40 21269 DEBUG PartitioningAwareFileIndex:  paths.size : 0 discoveryThreshold: 32
17/01/23 13:38:40 21272 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #46
17/01/23 13:38:40 21272 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #46
17/01/23 13:38:40 21272 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:40 21275 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #47
17/01/23 13:38:40 21276 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #47
17/01/23 13:38:40 21276 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:40 21277 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #48
17/01/23 13:38:40 21277 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #48
17/01/23 13:38:40 21277 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21278 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #49
17/01/23 13:38:41 21278 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #49
17/01/23 13:38:41 21279 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21279 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #50
17/01/23 13:38:41 21279 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #50
17/01/23 13:38:41 21280 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21282 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #51
17/01/23 13:38:41 21283 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #51
17/01/23 13:38:41 21283 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21284 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #52
17/01/23 13:38:41 21284 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #52
17/01/23 13:38:41 21284 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21285 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #53
17/01/23 13:38:41 21285 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #53
17/01/23 13:38:41 21285 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21286 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #54
17/01/23 13:38:41 21286 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #54
17/01/23 13:38:41 21286 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21287 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #55
17/01/23 13:38:41 21287 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #55
17/01/23 13:38:41 21287 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21303 DEBUG PartitioningAwareFileIndex:  paths.size : 1 discoveryThreshold: 32
17/01/23 13:38:41 21303 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #56
17/01/23 13:38:41 21304 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #56
17/01/23 13:38:41 21304 DEBUG ProtobufRpcEngine: Call: getListing took 1ms
17/01/23 13:38:41 21304 DEBUG PartitioningAwareFileIndex:  paths.size : 0 discoveryThreshold: 32
17/01/23 13:38:41 21305 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #57
17/01/23 13:38:41 21305 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #57
17/01/23 13:38:41 21305 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21306 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #58
17/01/23 13:38:41 21306 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #58
17/01/23 13:38:41 21306 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21307 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #59
17/01/23 13:38:41 21308 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #59
17/01/23 13:38:41 21308 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21309 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #60
17/01/23 13:38:41 21309 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #60
17/01/23 13:38:41 21309 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21310 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #61
17/01/23 13:38:41 21310 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #61
17/01/23 13:38:41 21310 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21311 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #62
17/01/23 13:38:41 21311 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #62
17/01/23 13:38:41 21311 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21312 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #63
17/01/23 13:38:41 21312 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #63
17/01/23 13:38:41 21312 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21313 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #64
17/01/23 13:38:41 21313 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #64
17/01/23 13:38:41 21313 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21314 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #65
17/01/23 13:38:41 21314 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #65
17/01/23 13:38:41 21314 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21315 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #66
17/01/23 13:38:41 21315 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #66
17/01/23 13:38:41 21315 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21654 DEBUG SessionState$$anon$1: 
=== Result of Batch Resolution ===
!'DeserializeToObject unresolveddeserializer(createexternalrow(getcolumnbyordinal(0, IntegerType), getcolumnbyordinal(1, LongType), getcolumnbyordinal(2, DoubleType), getcolumnbyordinal(3, FloatType), getcolumnbyordinal(4, StringType).toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))), obj#10: org.apache.spark.sql.Row   DeserializeToObject createexternalrow(randInt#0, randLong#1L, randDouble#2, randFloat#3, randString#4.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), obj#10: org.apache.spark.sql.Row
 +- LocalRelation <empty>, [randInt#0, randLong#1L, randDouble#2, randFloat#3, randString#4]                                                                                                                                                                                                                                                                                                                                                                                                   +- LocalRelation <empty>, [randInt#0, randLong#1L, randDouble#2, randFloat#3, randString#4]
        
ds1: org.apache.spark.sql.DataFrame = [randInt: int, randLong: bigint ... 3 more fields]
17/01/23 13:38:41 21826 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #67
17/01/23 13:38:41 21827 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #67
17/01/23 13:38:41 21828 DEBUG ProtobufRpcEngine: Call: getFileInfo took 2ms
17/01/23 13:38:41 21828 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #68
17/01/23 13:38:41 21829 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #68
17/01/23 13:38:41 21829 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/01/23 13:38:41 21830 DEBUG PartitioningAwareFileIndex:  paths.size : 1 discoveryThreshold: 32
17/01/23 13:38:41 21830 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #69
17/01/23 13:38:41 21830 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #69
17/01/23 13:38:41 21831 DEBUG ProtobufRpcEngine: Call: getListing took 1ms
17/01/23 13:38:41 21831 DEBUG PartitioningAwareFileIndex:  paths.size : 0 discoveryThreshold: 32
17/01/23 13:38:41 21832 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #70
17/01/23 13:38:41 21832 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #70
17/01/23 13:38:41 21832 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21833 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #71
17/01/23 13:38:41 21833 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #71
17/01/23 13:38:41 21833 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21834 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #72
17/01/23 13:38:41 21834 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #72
17/01/23 13:38:41 21834 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21835 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #73
17/01/23 13:38:41 21836 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #73
17/01/23 13:38:41 21836 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21836 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #74
17/01/23 13:38:41 21836 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #74
17/01/23 13:38:41 21837 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21837 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #75
17/01/23 13:38:41 21837 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #75
17/01/23 13:38:41 21837 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21838 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #76
17/01/23 13:38:41 21838 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #76
17/01/23 13:38:41 21838 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21839 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #77
17/01/23 13:38:41 21839 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #77
17/01/23 13:38:41 21839 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21840 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #78
17/01/23 13:38:41 21840 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #78
17/01/23 13:38:41 21840 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21840 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #79
17/01/23 13:38:41 21841 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #79
17/01/23 13:38:41 21841 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21842 DEBUG PartitioningAwareFileIndex:  paths.size : 1 discoveryThreshold: 32
17/01/23 13:38:41 21842 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #80
17/01/23 13:38:41 21843 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #80
17/01/23 13:38:41 21843 DEBUG ProtobufRpcEngine: Call: getListing took 1ms
17/01/23 13:38:41 21843 DEBUG PartitioningAwareFileIndex:  paths.size : 0 discoveryThreshold: 32
17/01/23 13:38:41 21843 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #81
17/01/23 13:38:41 21844 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #81
17/01/23 13:38:41 21844 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21844 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #82
17/01/23 13:38:41 21844 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #82
17/01/23 13:38:41 21844 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21845 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #83
17/01/23 13:38:41 21845 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #83
17/01/23 13:38:41 21845 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21846 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #84
17/01/23 13:38:41 21846 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #84
17/01/23 13:38:41 21846 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21846 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #85
17/01/23 13:38:41 21847 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #85
17/01/23 13:38:41 21847 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21847 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #86
17/01/23 13:38:41 21848 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #86
17/01/23 13:38:41 21848 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21848 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #87
17/01/23 13:38:41 21849 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #87
17/01/23 13:38:41 21849 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 1ms
17/01/23 13:38:41 21849 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #88
17/01/23 13:38:41 21849 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #88
17/01/23 13:38:41 21849 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21850 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #89
17/01/23 13:38:41 21850 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #89
17/01/23 13:38:41 21850 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21851 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #90
17/01/23 13:38:41 21851 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #90
17/01/23 13:38:41 21851 DEBUG ProtobufRpcEngine: Call: getBlockLocations took 0ms
17/01/23 13:38:41 21861 DEBUG SessionState$$anon$1: 
=== Result of Batch Resolution ===
!'DeserializeToObject unresolveddeserializer(createexternalrow(getcolumnbyordinal(0, IntegerType), getcolumnbyordinal(1, LongType), getcolumnbyordinal(2, DoubleType), getcolumnbyordinal(3, FloatType), getcolumnbyordinal(4, StringType).toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))), obj#21: org.apache.spark.sql.Row   DeserializeToObject createexternalrow(randInt#11, randLong#12L, randDouble#13, randFloat#14, randString#15.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), obj#21: org.apache.spark.sql.Row
 +- LocalRelation <empty>, [randInt#11, randLong#12L, randDouble#13, randFloat#14, randString#15]                                                                                                                                                                                                                                                                                                                                                                                              +- LocalRelation <empty>, [randInt#11, randLong#12L, randDouble#13, randFloat#14, randString#15]
        
ds2: org.apache.spark.sql.DataFrame = [randInt: int, randLong: bigint ... 3 more fields]
17/01/23 13:38:41 21909 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #91
17/01/23 13:38:41 21909 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #91
17/01/23 13:38:41 21910 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:41 22183 DEBUG SessionState$$anon$1: 
=== Result of Batch Resolution ===
!'DeserializeToObject unresolveddeserializer(newInstance(class scala.Tuple2)), obj#24: scala.Tuple2   DeserializeToObject newInstance(class scala.Tuple2), obj#24: scala.Tuple2
 +- LocalRelation <empty>, [_1#22, _2#23]                                                             +- LocalRelation <empty>, [_1#22, _2#23]
        
bla: org.apache.spark.sql.Dataset[(org.apache.spark.sql.Row, org.apache.spark.sql.Row)] = [_1: struct<randInt: int, randLong: bigint ... 3 more fields>, _2: struct<randInt: int, randLong: bigint ... 3 more fields>]
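bench-join.scala itself is not reproduced in this log, but the values bound above (f1, f2, sch), the Dataset[(Row, Row)] type of bla, and the equi-join key that ExtractEquiJoinKeys reports further down (_1.randInt = _2.randInt) suggest roughly the following shape; treat it as a sketch, not the actual script:

```scala
// Sketch only: reconstructed from the REPL output, not from the source of bench-join.scala.
val ds1 = spark.read.schema(sch).parquet(fs + f1)   // /sql/parquet-100m
val ds2 = spark.read.schema(sch).parquet(fs + f2)   // /sql/parquet-100m2

// joinWith keeps each side as a complete row, which matches the
// Dataset[(Row, Row)] type of bla shown above; the join is an inner
// equi-join on randInt.
val bla = ds1.joinWith(ds2, ds1("randInt") === ds2("randInt"))
```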
17/01/23 13:38:42 22338 DEBUG SessionState$$anon$1: 
=== Result of Batch Resolution ===
!'DeserializeToObject unresolveddeserializer(createexternalrow(if (isnull(getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)))) null else createexternalrow(if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._0, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._1, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._2, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._3, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._4.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), if (isnull(getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)))) null else createexternalrow(if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), 
StructField(randString,StringType,true))._0, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._1, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._2, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._3, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._4.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), StructField(_1,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false), StructField(_2,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false))), obj#27: org.apache.spark.sql.Row   DeserializeToObject createexternalrow(if (isnull(_1#22)) null else createexternalrow(if (_1#22.isNullAt) null else _1#22.randInt, if (_1#22.isNullAt) null else _1#22.randLong, if (_1#22.isNullAt) null else _1#22.randDouble, if (_1#22.isNullAt) null else _1#22.randFloat, if (_1#22.isNullAt) null else _1#22.randString.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), if (isnull(_2#23)) null else createexternalrow(if (_2#23.isNullAt) null else _2#23.randInt, if (_2#23.isNullAt) null else _2#23.randLong, if (_2#23.isNullAt) null else _2#23.randDouble, if (_2#23.isNullAt) null else _2#23.randFloat, if (_2#23.isNullAt) null else _2#23.randString.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), 
StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), StructField(_1,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false), StructField(_2,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false)), obj#27: org.apache.spark.sql.Row
 +- LocalRelation <empty>, [_1#22, _2#23]                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         +- LocalRelation <empty>, [_1#22, _2#23]
        
17/01/23 13:38:42 22505 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #92
17/01/23 13:38:42 22505 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #92
17/01/23 13:38:42 22506 DEBUG ProtobufRpcEngine: Call: getFileInfo took 2ms
17/01/23 13:38:42 22506 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #93
17/01/23 13:38:42 22507 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #93
17/01/23 13:38:42 22507 DEBUG ProtobufRpcEngine: Call: getFileInfo took 1ms
17/01/23 13:38:42 22508 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #94
17/01/23 13:38:42 22509 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #94
17/01/23 13:38:42 22509 DEBUG ProtobufRpcEngine: Call: delete took 1ms
17/01/23 13:38:42 22529 DEBUG SessionState$$anon$1: 
=== Result of Batch Resolution ===
!'DeserializeToObject unresolveddeserializer(createexternalrow(if (isnull(getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)))) null else createexternalrow(if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._0, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._1, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._2, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._3, if (getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(0, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._4.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), if (isnull(getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)))) null else createexternalrow(if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), 
StructField(randString,StringType,true))._0, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._1, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._2, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._3, if (getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)).isNullAt) null else getcolumnbyordinal(1, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true))._4.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), StructField(_1,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false), StructField(_2,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false))), obj#30: org.apache.spark.sql.Row   DeserializeToObject createexternalrow(if (isnull(_1#22)) null else createexternalrow(if (_1#22.isNullAt) null else _1#22.randInt, if (_1#22.isNullAt) null else _1#22.randLong, if (_1#22.isNullAt) null else _1#22.randDouble, if (_1#22.isNullAt) null else _1#22.randFloat, if (_1#22.isNullAt) null else _1#22.randString.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), if (isnull(_2#23)) null else createexternalrow(if (_2#23.isNullAt) null else _2#23.randInt, if (_2#23.isNullAt) null else _2#23.randLong, if (_2#23.isNullAt) null else _2#23.randDouble, if (_2#23.isNullAt) null else _2#23.randFloat, if (_2#23.isNullAt) null else _2#23.randString.toString, StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), 
StructField(randFloat,FloatType,true), StructField(randString,StringType,true)), StructField(_1,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false), StructField(_2,StructType(StructField(randInt,IntegerType,true), StructField(randLong,LongType,true), StructField(randDouble,DoubleType,true), StructField(randFloat,FloatType,true), StructField(randString,StringType,true)),false)), obj#30: org.apache.spark.sql.Row
 +- LocalRelation <empty>, [_1#22, _2#23]                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         +- LocalRelation <empty>, [_1#22, _2#23]
        
17/01/23 13:38:42 22547 INFO ParquetFileFormat: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter
17/01/23 13:38:42 22601 DEBUG ExtractEquiJoinKeys: Considering join on: Some((_1#22.randInt = _2#23.randInt))
17/01/23 13:38:42 22602 DEBUG ExtractEquiJoinKeys: leftKeys:List(_1#22.randInt) | rightKeys:List(_2#23.randInt)
17/01/23 13:38:42 22610 DEBUG ExtractEquiJoinKeys: Considering join on: Some((_1#22.randInt = _2#23.randInt))
17/01/23 13:38:42 22610 DEBUG ExtractEquiJoinKeys: leftKeys:List(_1#22.randInt) | rightKeys:List(_2#23.randInt)
17/01/23 13:38:42 22611 DEBUG ExtractEquiJoinKeys: Considering join on: Some((_1#22.randInt = _2#23.randInt))
17/01/23 13:38:42 22611 DEBUG ExtractEquiJoinKeys: leftKeys:List(_1#22.randInt) | rightKeys:List(_2#23.randInt)
17/01/23 13:38:42 22611 DEBUG ExtractEquiJoinKeys: Considering join on: Some((_1#22.randInt = _2#23.randInt))
17/01/23 13:38:42 22612 DEBUG ExtractEquiJoinKeys: leftKeys:List(_1#22.randInt) | rightKeys:List(_2#23.randInt)
17/01/23 13:38:42 22612 DEBUG ExtractEquiJoinKeys: Considering join on: Some((_1#22.randInt = _2#23.randInt))
17/01/23 13:38:42 22612 DEBUG ExtractEquiJoinKeys: leftKeys:List(_1#22.randInt) | rightKeys:List(_2#23.randInt)
17/01/23 13:38:42 22615 INFO FileSourceStrategy: Pruning directories with: 
17/01/23 13:38:42 22617 INFO FileSourceStrategy: Post-Scan Filters: 
17/01/23 13:38:42 22618 INFO FileSourceStrategy: Output Data Schema: struct<randInt: int, randLong: bigint, randDouble: double, randFloat: float, randString: string ... 3 more fields>
17/01/23 13:38:42 22619 INFO FileSourceStrategy: Pushed Filters: 
17/01/23 13:38:42 22626 INFO FileSourceStrategy: Pruning directories with: 
17/01/23 13:38:42 22626 INFO FileSourceStrategy: Post-Scan Filters: 
17/01/23 13:38:42 22627 INFO FileSourceStrategy: Output Data Schema: struct<randInt: int, randLong: bigint, randDouble: double, randFloat: float, randString: string ... 3 more fields>
17/01/23 13:38:42 22627 INFO FileSourceStrategy: Pushed Filters: 
17/01/23 13:38:42 22659 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/01/23 13:38:42 22660 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/01/23 13:38:42 22660 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/01/23 13:38:42 22660 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/01/23 13:38:42 22660 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/01/23 13:38:42 22663 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
17/01/23 13:38:42 22664 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
17/01/23 13:38:42 22665 DEBUG DFSClient: /p-o/_temporary/0: masked=rwxr-xr-x
17/01/23 13:38:42 22666 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #95
17/01/23 13:38:42 22667 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #95
17/01/23 13:38:42 22667 DEBUG ProtobufRpcEngine: Call: mkdirs took 2ms
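The delete of /p-o a few lines earlier and the mkdirs of /p-o/_temporary/0 here are what an overwriting Parquet write to fo (= /p-o) would produce, so the step that triggered this job was plausibly the saveDF helper defined above; this is an assumption, as the actual call is not echoed in the log:

```scala
// Assumed final step of the benchmark (not shown verbatim in this log):
// write the joined dataset to fo = /p-o as Parquet.
saveDF(bla, fs + fo, doNull)
```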
17/01/23 13:38:42 22712 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=22, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=27136
17/01/23 13:38:42 22712 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 36774 lastFlushOffset 27481
17/01/23 13:38:42 22712 DEBUG DFSClient: Queued packet 22
17/01/23 13:38:42 22712 DEBUG DFSClient: Waiting for ack for: 22
17/01/23 13:38:42 22712 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:22 offsetInBlock:27136 lastPacketInBlock:false lastByteOffsetInBlock: 36774
17/01/23 13:38:42 22713 DEBUG DFSClient: DFSClient seqno: 22 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:42 22733 DEBUG WholeStageCodegenExec: 
/* 001 */ public Object generate(Object[] references) {
/* 002 */   return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private scala.collection.Iterator smj_leftInput;
/* 009 */   private scala.collection.Iterator smj_rightInput;
/* 010 */   private InternalRow smj_leftRow;
/* 011 */   private InternalRow smj_rightRow;
/* 012 */   private int smj_value4;
/* 013 */   private java.util.ArrayList smj_matches;
/* 014 */   private int smj_value5;
/* 015 */   private InternalRow smj_value6;
/* 016 */   private org.apache.spark.sql.execution.metric.SQLMetric smj_numOutputRows;
/* 017 */   private UnsafeRow smj_result;
/* 018 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder smj_holder;
/* 019 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter smj_rowWriter;
/* 020 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter smj_rowWriter1;
/* 021 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter smj_rowWriter2;
/* 022 */
/* 023 */   public GeneratedIterator(Object[] references) {
/* 024 */     this.references = references;
/* 025 */   }
/* 026 */
/* 027 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 028 */     partitionIndex = index;
/* 029 */     this.inputs = inputs;
/* 030 */     smj_leftInput = inputs[0];
/* 031 */     smj_rightInput = inputs[1];
/* 032 */
/* 033 */     smj_rightRow = null;
/* 034 */
/* 035 */     smj_matches = new java.util.ArrayList();
/* 036 */
/* 037 */     this.smj_numOutputRows = (org.apache.spark.sql.execution.metric.SQLMetric) references[0];
/* 038 */     smj_result = new UnsafeRow(2);
/* 039 */     this.smj_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(smj_result, 64);
/* 040 */     this.smj_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(smj_holder, 2);
/* 041 */     this.smj_rowWriter1 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(smj_holder, 5);
/* 042 */     this.smj_rowWriter2 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(smj_holder, 5);
/* 043 */
/* 044 */   }
/* 045 */
/* 046 */   private boolean findNextInnerJoinRows(
/* 047 */     scala.collection.Iterator leftIter,
/* 048 */     scala.collection.Iterator rightIter) {
/* 049 */     smj_leftRow = null;
/* 050 */     int comp = 0;
/* 051 */     while (smj_leftRow == null) {
/* 052 */       if (!leftIter.hasNext()) return false;
/* 053 */       smj_leftRow = (InternalRow) leftIter.next();
/* 054 */
/* 055 */       InternalRow smj_value1 = smj_leftRow.getStruct(0, 5);
/* 056 */       boolean smj_isNull = false;
/* 057 */       int smj_value = -1;
/* 058 */
/* 059 */       if (smj_value1.isNullAt(0)) {
/* 060 */         smj_isNull = true;
/* 061 */       } else {
/* 062 */         smj_value = smj_value1.getInt(0);
/* 063 */       }
/* 064 */       if (smj_isNull) {
/* 065 */         smj_leftRow = null;
/* 066 */         continue;
/* 067 */       }
/* 068 */       if (!smj_matches.isEmpty()) {
/* 069 */         comp = 0;
/* 070 */         if (comp == 0) {
/* 071 */           comp = (smj_value > smj_value5 ? 1 : smj_value < smj_value5 ? -1 : 0);
/* 072 */         }
/* 073 */
/* 074 */         if (comp == 0) {
/* 075 */           return true;
/* 076 */         }
/* 077 */         smj_matches.clear();
/* 078 */       }
/* 079 */
/* 080 */       do {
/* 081 */         if (smj_rightRow == null) {
/* 082 */           if (!rightIter.hasNext()) {
/* 083 */             smj_value5 = smj_value;
/* 084 */             return !smj_matches.isEmpty();
/* 085 */           }
/* 086 */           smj_rightRow = (InternalRow) rightIter.next();
/* 087 */
/* 088 */           InternalRow smj_value3 = smj_rightRow.getStruct(0, 5);
/* 089 */           boolean smj_isNull2 = false;
/* 090 */           int smj_value2 = -1;
/* 091 */
/* 092 */           if (smj_value3.isNullAt(0)) {
/* 093 */             smj_isNull2 = true;
/* 094 */           } else {
/* 095 */             smj_value2 = smj_value3.getInt(0);
/* 096 */           }
/* 097 */           if (smj_isNull2) {
/* 098 */             smj_rightRow = null;
/* 099 */             continue;
/* 100 */           }
/* 101 */           smj_value4 = smj_value2;
/* 102 */         }
/* 103 */
/* 104 */         comp = 0;
/* 105 */         if (comp == 0) {
/* 106 */           comp = (smj_value > smj_value4 ? 1 : smj_value < smj_value4 ? -1 : 0);
/* 107 */         }
/* 108 */
/* 109 */         if (comp > 0) {
/* 110 */           smj_rightRow = null;
/* 111 */         } else if (comp < 0) {
/* 112 */           if (!smj_matches.isEmpty()) {
/* 113 */             smj_value5 = smj_value;
/* 114 */             return true;
/* 115 */           }
/* 116 */           smj_leftRow = null;
/* 117 */         } else {
/* 118 */           smj_matches.add(smj_rightRow.copy());
/* 119 */           smj_rightRow = null;;
/* 120 */         }
/* 121 */       } while (smj_leftRow != null);
/* 122 */     }
/* 123 */     return false; // unreachable
/* 124 */   }
/* 125 */
/* 126 */   protected void processNext() throws java.io.IOException {
/* 127 */     while (findNextInnerJoinRows(smj_leftInput, smj_rightInput)) {
/* 128 */       int smj_size = smj_matches.size();
/* 129 */       smj_value6 = smj_leftRow.getStruct(0, 5);
/* 130 */       for (int smj_i = 0; smj_i < smj_size; smj_i ++) {
/* 131 */         InternalRow smj_rightRow1 = (InternalRow) smj_matches.get(smj_i);
/* 132 */
/* 133 */         smj_numOutputRows.add(1);
/* 134 */
/* 135 */         InternalRow smj_value7 = smj_rightRow1.getStruct(0, 5);
/* 136 */         smj_holder.reset();
/* 137 */
/* 138 */         // Remember the current cursor so that we can calculate how many bytes are
/* 139 */         // written later.
/* 140 */         final int smj_tmpCursor = smj_holder.cursor;
/* 141 */
/* 142 */         if (smj_value6 instanceof UnsafeRow) {
/* 143 */           final int smj_sizeInBytes = ((UnsafeRow) smj_value6).getSizeInBytes();
/* 144 */           // grow the global buffer before writing data.
/* 145 */           smj_holder.grow(smj_sizeInBytes);
/* 146 */           ((UnsafeRow) smj_value6).writeToMemory(smj_holder.buffer, smj_holder.cursor);
/* 147 */           smj_holder.cursor += smj_sizeInBytes;
/* 148 */
/* 149 */         } else {
/* 150 */           smj_rowWriter1.reset();
/* 151 */
/* 152 */           final int smj_fieldName = smj_value6.getInt(0);
/* 153 */           if (smj_value6.isNullAt(0)) {
/* 154 */             smj_rowWriter1.setNullAt(0);
/* 155 */           } else {
/* 156 */             smj_rowWriter1.write(0, smj_fieldName);
/* 157 */           }
/* 158 */
/* 159 */           final long smj_fieldName1 = smj_value6.getLong(1);
/* 160 */           if (smj_value6.isNullAt(1)) {
/* 161 */             smj_rowWriter1.setNullAt(1);
/* 162 */           } else {
/* 163 */             smj_rowWriter1.write(1, smj_fieldName1);
/* 164 */           }
/* 165 */
/* 166 */           final double smj_fieldName2 = smj_value6.getDouble(2);
/* 167 */           if (smj_value6.isNullAt(2)) {
/* 168 */             smj_rowWriter1.setNullAt(2);
/* 169 */           } else {
/* 170 */             smj_rowWriter1.write(2, smj_fieldName2);
/* 171 */           }
/* 172 */
/* 173 */           final float smj_fieldName3 = smj_value6.getFloat(3);
/* 174 */           if (smj_value6.isNullAt(3)) {
/* 175 */             smj_rowWriter1.setNullAt(3);
/* 176 */           } else {
/* 177 */             smj_rowWriter1.write(3, smj_fieldName3);
/* 178 */           }
/* 179 */
/* 180 */           final UTF8String smj_fieldName4 = smj_value6.getUTF8String(4);
/* 181 */           if (smj_value6.isNullAt(4)) {
/* 182 */             smj_rowWriter1.setNullAt(4);
/* 183 */           } else {
/* 184 */             smj_rowWriter1.write(4, smj_fieldName4);
/* 185 */           }
/* 186 */         }
/* 187 */
/* 188 */         smj_rowWriter.setOffsetAndSize(0, smj_tmpCursor, smj_holder.cursor - smj_tmpCursor);
/* 189 */
/* 190 */         // Remember the current cursor so that we can calculate how many bytes are
/* 191 */         // written later.
/* 192 */         final int smj_tmpCursor6 = smj_holder.cursor;
/* 193 */
/* 194 */         if (smj_value7 instanceof UnsafeRow) {
/* 195 */           final int smj_sizeInBytes1 = ((UnsafeRow) smj_value7).getSizeInBytes();
/* 196 */           // grow the global buffer before writing data.
/* 197 */           smj_holder.grow(smj_sizeInBytes1);
/* 198 */           ((UnsafeRow) smj_value7).writeToMemory(smj_holder.buffer, smj_holder.cursor);
/* 199 */           smj_holder.cursor += smj_sizeInBytes1;
/* 200 */
/* 201 */         } else {
/* 202 */           smj_rowWriter2.reset();
/* 203 */
/* 204 */           final int smj_fieldName5 = smj_value7.getInt(0);
/* 205 */           if (smj_value7.isNullAt(0)) {
/* 206 */             smj_rowWriter2.setNullAt(0);
/* 207 */           } else {
/* 208 */             smj_rowWriter2.write(0, smj_fieldName5);
/* 209 */           }
/* 210 */
/* 211 */           final long smj_fieldName6 = smj_value7.getLong(1);
/* 212 */           if (smj_value7.isNullAt(1)) {
/* 213 */             smj_rowWriter2.setNullAt(1);
/* 214 */           } else {
/* 215 */             smj_rowWriter2.write(1, smj_fieldName6);
/* 216 */           }
/* 217 */
/* 218 */           final double smj_fieldName7 = smj_value7.getDouble(2);
/* 219 */           if (smj_value7.isNullAt(2)) {
/* 220 */             smj_rowWriter2.setNullAt(2);
/* 221 */           } else {
/* 222 */             smj_rowWriter2.write(2, smj_fieldName7);
/* 223 */           }
/* 224 */
/* 225 */           final float smj_fieldName8 = smj_value7.getFloat(3);
/* 226 */           if (smj_value7.isNullAt(3)) {
/* 227 */             smj_rowWriter2.setNullAt(3);
/* 228 */           } else {
/* 229 */             smj_rowWriter2.write(3, smj_fieldName8);
/* 230 */           }
/* 231 */
/* 232 */           final UTF8String smj_fieldName9 = smj_value7.getUTF8String(4);
/* 233 */           if (smj_value7.isNullAt(4)) {
/* 234 */             smj_rowWriter2.setNullAt(4);
/* 235 */           } else {
/* 236 */             smj_rowWriter2.write(4, smj_fieldName9);
/* 237 */           }
/* 238 */         }
/* 239 */
/* 240 */         smj_rowWriter.setOffsetAndSize(1, smj_tmpCursor6, smj_holder.cursor - smj_tmpCursor6);
/* 241 */         smj_result.setTotalSize(smj_holder.totalSize());
/* 242 */         append(smj_result.copy());
/* 243 */
/* 244 */       }
/* 245 */       if (shouldStop()) return;
/* 246 */     }
/* 247 */   }
/* 248 */ }

17/01/23 13:38:42 22823 DEBUG CodeGenerator: 
[identical to the 248-line GeneratedIterator source logged by WholeStageCodegenExec above; duplicate omitted]

17/01/23 13:38:42 22911 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #96
17/01/23 13:38:42 22912 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #96
17/01/23 13:38:42 22914 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 4ms
17/01/23 13:38:42 22991 INFO CodeGenerator: Code generated in 233.378872 ms
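The GeneratedIterator above is the whole-stage-codegen class for the inner sort-merge join: findNextInnerJoinRows advances the two sorted inputs, skips null keys, buffers all right-side rows with the current key in smj_matches, and processNext emits one joined UnsafeRow per buffered match, writing the _1 and _2 structs via smj_rowWriter1/2. Spark logs the same source twice, once from WholeStageCodegenExec and again from CodeGenerator when it is compiled, which is why the dump appears back to back. The same code can be printed without enabling DEBUG logging; a minimal sketch, assuming the joined Dataset from the earlier sketch is in scope:

// Sketch: dump the generated code for every whole-stage-codegen subtree of a query.
import org.apache.spark.sql.execution.debug._
joined.debugCodegen()
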
17/01/23 13:38:42 22998 DEBUG WholeStageCodegenExec: 
/* 001 */ public Object generate(Object[] references) {
/* 002 */   return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private boolean sort_needToSort;
/* 009 */   private org.apache.spark.sql.execution.SortExec sort_plan;
/* 010 */   private org.apache.spark.sql.execution.UnsafeExternalRowSorter sort_sorter;
/* 011 */   private org.apache.spark.executor.TaskMetrics sort_metrics;
/* 012 */   private scala.collection.Iterator<UnsafeRow> sort_sortedIter;
/* 013 */   private scala.collection.Iterator inputadapter_input;
/* 014 */   private org.apache.spark.sql.execution.metric.SQLMetric sort_peakMemory;
/* 015 */   private org.apache.spark.sql.execution.metric.SQLMetric sort_spillSize;
/* 016 */   private org.apache.spark.sql.execution.metric.SQLMetric sort_sortTime;
/* 017 */
/* 018 */   public GeneratedIterator(Object[] references) {
/* 019 */     this.references = references;
/* 020 */   }
/* 021 */
/* 022 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 023 */     partitionIndex = index;
/* 024 */     this.inputs = inputs;
/* 025 */     sort_needToSort = true;
/* 026 */     this.sort_plan = (org.apache.spark.sql.execution.SortExec) references[0];
/* 027 */     sort_sorter = sort_plan.createSorter();
/* 028 */     sort_metrics = org.apache.spark.TaskContext.get().taskMetrics();
/* 029 */
/* 030 */     inputadapter_input = inputs[0];
/* 031 */     this.sort_peakMemory = (org.apache.spark.sql.execution.metric.SQLMetric) references[1];
/* 032 */     this.sort_spillSize = (org.apache.spark.sql.execution.metric.SQLMetric) references[2];
/* 033 */     this.sort_sortTime = (org.apache.spark.sql.execution.metric.SQLMetric) references[3];
/* 034 */
/* 035 */   }
/* 036 */
/* 037 */   private void sort_addToSorter() throws java.io.IOException {
/* 038 */     while (inputadapter_input.hasNext()) {
/* 039 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
/* 040 */       sort_sorter.insertRow((UnsafeRow)inputadapter_row);
/* 041 */       if (shouldStop()) return;
/* 042 */     }
/* 043 */
/* 044 */   }
/* 045 */
/* 046 */   protected void processNext() throws java.io.IOException {
/* 047 */     if (sort_needToSort) {
/* 048 */       long sort_spillSizeBefore = sort_metrics.memoryBytesSpilled();
/* 049 */       sort_addToSorter();
/* 050 */       sort_sortedIter = sort_sorter.sort();
/* 051 */       sort_sortTime.add(sort_sorter.getSortTimeNanos() / 1000000);
/* 052 */       sort_peakMemory.add(sort_sorter.getPeakMemoryUsage());
/* 053 */       sort_spillSize.add(sort_metrics.memoryBytesSpilled() - sort_spillSizeBefore);
/* 054 */       sort_metrics.incPeakExecutionMemory(sort_sorter.getPeakMemoryUsage());
/* 055 */       sort_needToSort = false;
/* 056 */     }
/* 057 */
/* 058 */     while (sort_sortedIter.hasNext()) {
/* 059 */       UnsafeRow sort_outputRow = (UnsafeRow)sort_sortedIter.next();
/* 060 */
/* 061 */       append(sort_outputRow);
/* 062 */
/* 063 */       if (shouldStop()) return;
/* 064 */     }
/* 065 */   }
/* 066 */ }

17/01/23 13:38:42 22999 DEBUG CodeGenerator: 
[identical to the 66-line GeneratedIterator source logged by WholeStageCodegenExec above; duplicate omitted]

17/01/23 13:38:42 23022 INFO CodeGenerator: Code generated in 24.009608 ms
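This second GeneratedIterator is the codegen'd SortExec feeding one side of the sort-merge join: sort_addToSorter drains the upstream iterator into an UnsafeExternalRowSorter, and processNext returns the sorted rows while recording peak memory, spill size, and sort time. The Sort (plus a shuffle Exchange) appears because SortMergeJoin needs both inputs ordered on randInt; a broadcast hash join would avoid it, but is only chosen when one side fits under the broadcast threshold. A hedged sketch of pinning the plan to SortMergeJoin (whether the benchmark actually sets this is not shown in the log):

// Sketch: disabling broadcast joins so the planner always picks SortMergeJoin,
// which is what inserts the Sort/Exchange stages seen in this log.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
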
17/01/23 13:38:42 23031 DEBUG package$ExpressionCanonicalizer: 
=== Result of Batch CleanExpressions ===
!named_struct(randInt, input[0, int, true], randLong, input[1, bigint, true], randDouble, input[2, double, true], randFloat, input[3, float, true], randString, input[4, string, true]) AS _1#22   named_struct(randInt, input[0, int, true], randLong, input[1, bigint, true], randDouble, input[2, double, true], randFloat, input[3, float, true], randString, input[4, string, true])
!+- named_struct(randInt, input[0, int, true], randLong, input[1, bigint, true], randDouble, input[2, double, true], randFloat, input[3, float, true], randString, input[4, string, true])         :- randInt
!   :- randInt                                                                                                                                                                                     :- input[0, int, true]
!   :- input[0, int, true]                                                                                                                                                                         :- randLong
!   :- randLong                                                                                                                                                                                    :- input[1, bigint, true]
!   :- input[1, bigint, true]                                                                                                                                                                      :- randDouble
!   :- randDouble                                                                                                                                                                                  :- input[2, double, true]
!   :- input[2, double, true]                                                                                                                                                                      :- randFloat
!   :- randFloat                                                                                                                                                                                   :- input[3, float, true]
!   :- input[3, float, true]                                                                                                                                                                       :- randString
!   :- randString                                                                                                                                                                                  +- input[4, string, true]
!   +- input[4, string, true]                                                                                                                                                                      
        
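The CleanExpressions batch above only strips the "AS _1#22" alias from the named_struct expression before code generation; the left-hand column is the expression tree before the rule and the right-hand column the tree afterwards. The named_struct itself is how each joinWith side is represented: all five input columns are packed into one struct. The equivalent construction in the DataFrame API, as a sketch (column names taken from the Output Data Schema logged earlier, t1 from the sketch near the top of this section):

// Sketch: building the same named_struct(randInt, ..., randString) expression explicitly.
import org.apache.spark.sql.functions.{col, struct}
val packed = t1.select(
  struct(col("randInt"), col("randLong"), col("randDouble"), col("randFloat"), col("randString")).as("_1"))
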
17/01/23 13:38:42 23038 DEBUG WholeStageCodegenExec: 
/* 001 */ public Object generate(Object[] references) {
/* 002 */   return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private scala.collection.Iterator scan_input;
/* 009 */   private org.apache.spark.sql.execution.metric.SQLMetric scan_numOutputRows;
/* 010 */   private org.apache.spark.sql.execution.metric.SQLMetric scan_scanTime;
/* 011 */   private long scan_scanTime1;
/* 012 */   private long scan_totalRows;
/* 013 */   private org.apache.spark.sql.execution.vectorized.ColumnarBatch scan_batch;
/* 014 */   private int scan_batchIdx;
/* 015 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance0;
/* 016 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance1;
/* 017 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance2;
/* 018 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance3;
/* 019 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance4;
/* 020 */   private UnsafeRow scan_result;
/* 021 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder scan_holder;
/* 022 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter scan_rowWriter;
/* 023 */   private Object[] project_values;
/* 024 */   private UnsafeRow project_result;
/* 025 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder project_holder;
/* 026 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter project_rowWriter;
/* 027 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter project_rowWriter1;
/* 028 */
/* 029 */   public GeneratedIterator(Object[] references) {
/* 030 */     this.references = references;
/* 031 */   }
/* 032 */
/* 033 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 034 */     partitionIndex = index;
/* 035 */     this.inputs = inputs;
/* 036 */     scan_input = inputs[0];
/* 037 */     this.scan_numOutputRows = (org.apache.spark.sql.execution.metric.SQLMetric) references[0];
/* 038 */     this.scan_scanTime = (org.apache.spark.sql.execution.metric.SQLMetric) references[1];
/* 039 */     scan_scanTime1 = 0;
/* 040 */     scan_totalRows = 0;
/* 041 */     scan_batch = null;
/* 042 */     scan_batchIdx = 0;
/* 043 */     scan_colInstance0 = null;
/* 044 */     scan_colInstance1 = null;
/* 045 */     scan_colInstance2 = null;
/* 046 */     scan_colInstance3 = null;
/* 047 */     scan_colInstance4 = null;
/* 048 */     scan_result = new UnsafeRow(5);
/* 049 */     this.scan_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(scan_result, 32);
/* 050 */     this.scan_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(scan_holder, 5);
/* 051 */     this.project_values = null;
/* 052 */     project_result = new UnsafeRow(1);
/* 053 */     this.project_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(project_result, 32);
/* 054 */     this.project_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(project_holder, 1);
/* 055 */     this.project_rowWriter1 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(project_holder, 5);
/* 056 */
/* 057 */   }
/* 058 */
/* 059 */   private void scan_nextBatch() throws java.io.IOException {
/* 060 */     long getBatchStart = System.nanoTime();
/* 061 */     if (scan_input.hasNext()) {
/* 062 */       scan_batch = (org.apache.spark.sql.execution.vectorized.ColumnarBatch)scan_input.next();
/* 063 */       scan_numOutputRows.add(scan_batch.numRows());
/* 064 */       scan_batchIdx = 0;
/* 065 */       scan_totalRows+=scan_batch.numRows();
/* 066 */       scan_colInstance0 = scan_batch.column(0);
/* 067 */       scan_colInstance1 = scan_batch.column(1);
/* 068 */       scan_colInstance2 = scan_batch.column(2);
/* 069 */       scan_colInstance3 = scan_batch.column(3);
/* 070 */       scan_colInstance4 = scan_batch.column(4);
/* 071 */
/* 072 */     }
/* 073 */     scan_scanTime1 += System.nanoTime() - getBatchStart;
/* 074 */   }
/* 075 */
/* 076 */   protected void processNext() throws java.io.IOException {
/* 077 */     if (scan_batch == null) {
/* 078 */       scan_nextBatch();
/* 079 */     }
/* 080 */     while (scan_batch != null) {
/* 081 */       int numRows = scan_batch.numRows();
/* 082 */       while (scan_batchIdx < numRows) {
/* 083 */         int scan_rowIdx = scan_batchIdx++;
/* 084 */         project_values = new Object[5];
/* 085 */         boolean scan_isNull = scan_colInstance0.isNullAt(scan_rowIdx);
/* 086 */         int scan_value = scan_isNull ? -1 : (scan_colInstance0.getInt(scan_rowIdx));
/* 087 */         if (scan_isNull) {
/* 088 */           project_values[0] = null;
/* 089 */         } else {
/* 090 */           project_values[0] = scan_value;
/* 091 */         }
/* 092 */
/* 093 */         boolean scan_isNull1 = scan_colInstance1.isNullAt(scan_rowIdx);
/* 094 */         long scan_value1 = scan_isNull1 ? -1L : (scan_colInstance1.getLong(scan_rowIdx));
/* 095 */         if (scan_isNull1) {
/* 096 */           project_values[1] = null;
/* 097 */         } else {
/* 098 */           project_values[1] = scan_value1;
/* 099 */         }
/* 100 */
/* 101 */         boolean scan_isNull2 = scan_colInstance2.isNullAt(scan_rowIdx);
/* 102 */         double scan_value2 = scan_isNull2 ? -1.0 : (scan_colInstance2.getDouble(scan_rowIdx));
/* 103 */         if (scan_isNull2) {
/* 104 */           project_values[2] = null;
/* 105 */         } else {
/* 106 */           project_values[2] = scan_value2;
/* 107 */         }
/* 108 */
/* 109 */         boolean scan_isNull3 = scan_colInstance3.isNullAt(scan_rowIdx);
/* 110 */         float scan_value3 = scan_isNull3 ? -1.0f : (scan_colInstance3.getFloat(scan_rowIdx));
/* 111 */         if (scan_isNull3) {
/* 112 */           project_values[3] = null;
/* 113 */         } else {
/* 114 */           project_values[3] = scan_value3;
/* 115 */         }
/* 116 */
/* 117 */         boolean scan_isNull4 = scan_colInstance4.isNullAt(scan_rowIdx);
/* 118 */         UTF8String scan_value4 = scan_isNull4 ? null : (scan_colInstance4.getUTF8String(scan_rowIdx));
/* 119 */         if (scan_isNull4) {
/* 120 */           project_values[4] = null;
/* 121 */         } else {
/* 122 */           project_values[4] = scan_value4;
/* 123 */         }
/* 124 */         final InternalRow project_value = new org.apache.spark.sql.catalyst.expressions.GenericInternalRow(project_values);
/* 125 */         this.project_values = null;
/* 126 */         project_holder.reset();
/* 127 */
/* 128 */         // Remember the current cursor so that we can calculate how many bytes are
/* 129 */         // written later.
/* 130 */         final int project_tmpCursor = project_holder.cursor;
/* 131 */
/* 132 */         if (project_value instanceof UnsafeRow) {
/* 133 */           final int project_sizeInBytes = ((UnsafeRow) project_value).getSizeInBytes();
/* 134 */           // grow the global buffer before writing data.
/* 135 */           project_holder.grow(project_sizeInBytes);
/* 136 */           ((UnsafeRow) project_value).writeToMemory(project_holder.buffer, project_holder.cursor);
/* 137 */           project_holder.cursor += project_sizeInBytes;
/* 138 */
/* 139 */         } else {
/* 140 */           project_rowWriter1.reset();
/* 141 */
/* 142 */           final int project_fieldName = project_value.getInt(0);
/* 143 */           if (project_value.isNullAt(0)) {
/* 144 */             project_rowWriter1.setNullAt(0);
/* 145 */           } else {
/* 146 */             project_rowWriter1.write(0, project_fieldName);
/* 147 */           }
/* 148 */
/* 149 */           final long project_fieldName1 = project_value.getLong(1);
/* 150 */           if (project_value.isNullAt(1)) {
/* 151 */             project_rowWriter1.setNullAt(1);
/* 152 */           } else {
/* 153 */             project_rowWriter1.write(1, project_fieldName1);
/* 154 */           }
/* 155 */
/* 156 */           final double project_fieldName2 = project_value.getDouble(2);
/* 157 */           if (project_value.isNullAt(2)) {
/* 158 */             project_rowWriter1.setNullAt(2);
/* 159 */           } else {
/* 160 */             project_rowWriter1.write(2, project_fieldName2);
/* 161 */           }
/* 162 */
/* 163 */           final float project_fieldName3 = project_value.getFloat(3);
/* 164 */           if (project_value.isNullAt(3)) {
/* 165 */             project_rowWriter1.setNullAt(3);
/* 166 */           } else {
/* 167 */             project_rowWriter1.write(3, project_fieldName3);
/* 168 */           }
/* 169 */
/* 170 */           final UTF8String project_fieldName4 = project_value.getUTF8String(4);
/* 171 */           if (project_value.isNullAt(4)) {
/* 172 */             project_rowWriter1.setNullAt(4);
/* 173 */           } else {
/* 174 */             project_rowWriter1.write(4, project_fieldName4);
/* 175 */           }
/* 176 */         }
/* 177 */
/* 178 */         project_rowWriter.setOffsetAndSize(0, project_tmpCursor, project_holder.cursor - project_tmpCursor);
/* 179 */         project_result.setTotalSize(project_holder.totalSize());
/* 180 */         append(project_result);
/* 181 */         if (shouldStop()) return;
/* 182 */       }
/* 183 */       scan_batch = null;
/* 184 */       scan_nextBatch();
/* 185 */     }
/* 186 */     scan_scanTime.add(scan_scanTime1 / (1000 * 1000));
/* 187 */     if(false) {
/* 188 */       //long tx;
/* 189 */       //if (scan_totalRows == 0) tx = 0; else tx = (scan_scanTime1/scan_totalRows);
/* 190 */       //System.err.println("animesh - total scan time from RecordReaderIterator " + scan_scanTime1/1000 + " usec and project_result rows are: " + project_result.numFields() + " peakItems: " + peakItems + " appendCalled: " + appendCalled + " write/appendTime: " + project_rowWriter.getWriteTime()/1000 + " usec , time/item: " + tx + " nsec , rows: " + scan_totalRows);
/* 191 */     }
/* 192 */     scan_scanTime1 = 0;
/* 193 */   }
/* 194 */ }

17/01/23 13:38:42 23041 DEBUG CodeGenerator: 
[identical to the 194-line GeneratedIterator source logged by WholeStageCodegenExec above; duplicate omitted]

17/01/23 13:38:42 23072 INFO CodeGenerator: Code generated in 32.821131 ms
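The third GeneratedIterator is the scan-and-project stage over the Parquet input: scan_nextBatch pulls a vectorized ColumnarBatch, the row loop copies the five columns into a GenericInternalRow, and project_rowWriter serializes that row into the single struct column the join expects (the commented-out "animesh - total scan time" block is extra instrumentation in this build). This columnar path is only taken when the vectorized Parquet reader and whole-stage codegen are enabled; both default to on in Spark 2.x, so the sketch below just makes the switches explicit:

// Sketch: the two settings that gate the ColumnarBatch-based Parquet scan above.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "true")
spark.conf.set("spark.sql.codegen.wholeStage", "true")
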
17/01/23 13:38:42 23162 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 299.4 KB, free 21.2 GB)
17/01/23 13:38:43 23349 DEBUG BlockManager: Put block broadcast_0 locally took  264 ms
17/01/23 13:38:43 23350 DEBUG BlockManager: Putting block broadcast_0 without replication took  265 ms
17/01/23 13:38:43 23467 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 65.2 KB, free 21.2 GB)
17/01/23 13:38:43 23469 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.40.0.24:52005 (size: 65.2 KB, free: 21.2 GB)
17/01/23 13:38:43 23471 DEBUG BlockManagerMaster: Updated info of block broadcast_0_piece0
17/01/23 13:38:43 23471 DEBUG BlockManager: Told master about block broadcast_0_piece0
17/01/23 13:38:43 23472 DEBUG BlockManager: Put block broadcast_0_piece0 locally took  7 ms
17/01/23 13:38:43 23472 DEBUG BlockManager: Putting block broadcast_0_piece0 without replication took  7 ms
17/01/23 13:38:43 23474 INFO SparkContext: Created broadcast 0 from save at <console>:38
17/01/23 13:38:43 23476 INFO ParquetFileFormat: Building a ParquetFileReader with batching: true struct : struct<randInt:int,randLong:bigint,randDouble:double,randFloat:float,randString:string>
17/01/23 13:38:43 23483 INFO FileSourceScanExec: Planning scan with bin packing, max size: 1280255769 bytes, formula is : Min(1280255769, max(0, 1280255769)), defaultParallism: 10, totalBytes: 12802557690, approx 10 tasks selectedPartition has : 1 items, these are tasks.
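The FileSourceScanExec line prints the bin-packing arithmetic used to size the read tasks (the message itself is extra logging in this Spark build). It matches stock Spark's file-source planning: the split size is min(maxPartitionBytes, max(openCostInBytes, totalBytes / defaultParallelism)), and totalBytes divided by that split size gives the approximate task count. Worked through with the values in the log line:

// Sketch of the split-size computation logged above (numbers copied from the log line).
val totalBytes         = 12802557690L
val defaultParallelism = 10
val maxPartitionBytes  = 1280255769L   // first argument of the logged Min(...)
val openCostInBytes    = 0L            // the logged max(0, ...) term
val bytesPerCore  = totalBytes / defaultParallelism                      // 1280255769
val maxSplitBytes = math.min(maxPartitionBytes, math.max(openCostInBytes, bytesPerCore))
val approxTasks   = totalBytes / maxSplitBytes                           // = 10 tasks
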
17/01/23 13:38:43 23500 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) +++
17/01/23 13:38:43 23516 DEBUG ClosureCleaner:  + declared fields: 4
17/01/23 13:38:43 23516 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.serialVersionUID
17/01/23 13:38:43 23516 DEBUG ClosureCleaner:      private final org.apache.spark.sql.catalyst.expressions.codegen.CodeAndComment org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.cleanedSource$2
17/01/23 13:38:43 23516 DEBUG ClosureCleaner:      private final java.lang.Object[] org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.references$1
17/01/23 13:38:43 23516 DEBUG ClosureCleaner:      public final org.apache.spark.sql.execution.metric.SQLMetric org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.durationMs$1
17/01/23 13:38:43 23517 DEBUG ClosureCleaner:  + declared methods: 2
17/01/23 13:38:43 23517 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(java.lang.Object,java.lang.Object)
17/01/23 13:38:43 23517 DEBUG ClosureCleaner:      public final scala.collection.Iterator org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(int,scala.collection.Iterator)
17/01/23 13:38:43 23518 DEBUG ClosureCleaner:  + inner classes: 1
17/01/23 13:38:43 23518 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1
17/01/23 13:38:43 23519 DEBUG ClosureCleaner:  + outer classes: 0
17/01/23 13:38:43 23519 DEBUG ClosureCleaner:  + outer objects: 0
17/01/23 13:38:43 23521 DEBUG ClosureCleaner:  + populating accessed fields because this is the starting closure
17/01/23 13:38:43 23526 DEBUG ClosureCleaner:  + fields accessed by starting closure: 0
17/01/23 13:38:43 23527 DEBUG ClosureCleaner:  + there are no enclosing objects!
17/01/23 13:38:43 23527 DEBUG ClosureCleaner:  +++ closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) is now cleaned +++
17/01/23 13:38:43 23549 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) +++
17/01/23 13:38:43 23550 DEBUG ClosureCleaner:  + declared fields: 4
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.serialVersionUID
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:      private final org.apache.spark.sql.catalyst.expressions.codegen.CodeAndComment org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.cleanedSource$2
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:      private final java.lang.Object[] org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.references$1
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:      public final org.apache.spark.sql.execution.metric.SQLMetric org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.durationMs$1
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:  + declared methods: 2
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(java.lang.Object,java.lang.Object)
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:      public final scala.collection.Iterator org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(int,scala.collection.Iterator)
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:  + inner classes: 1
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:  + outer classes: 0
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:  + outer objects: 0
17/01/23 13:38:43 23551 DEBUG ClosureCleaner:  + populating accessed fields because this is the starting closure
17/01/23 13:38:43 23553 DEBUG ClosureCleaner:  + fields accessed by starting closure: 0
17/01/23 13:38:43 23553 DEBUG ClosureCleaner:  + there are no enclosing objects!
17/01/23 13:38:43 23553 DEBUG ClosureCleaner:  +++ closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) is now cleaned +++
17/01/23 13:38:43 23609 DEBUG WholeStageCodegenExec: 
/* 001 */ public Object generate(Object[] references) {
/* 002 */   return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private boolean sort_needToSort;
/* 009 */   private org.apache.spark.sql.execution.SortExec sort_plan;
/* 010 */   private org.apache.spark.sql.execution.UnsafeExternalRowSorter sort_sorter;
/* 011 */   private org.apache.spark.executor.TaskMetrics sort_metrics;
/* 012 */   private scala.collection.Iterator<UnsafeRow> sort_sortedIter;
/* 013 */   private scala.collection.Iterator inputadapter_input;
/* 014 */   private org.apache.spark.sql.execution.metric.SQLMetric sort_peakMemory;
/* 015 */   private org.apache.spark.sql.execution.metric.SQLMetric sort_spillSize;
/* 016 */   private org.apache.spark.sql.execution.metric.SQLMetric sort_sortTime;
/* 017 */
/* 018 */   public GeneratedIterator(Object[] references) {
/* 019 */     this.references = references;
/* 020 */   }
/* 021 */
/* 022 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 023 */     partitionIndex = index;
/* 024 */     this.inputs = inputs;
/* 025 */     sort_needToSort = true;
/* 026 */     this.sort_plan = (org.apache.spark.sql.execution.SortExec) references[0];
/* 027 */     sort_sorter = sort_plan.createSorter();
/* 028 */     sort_metrics = org.apache.spark.TaskContext.get().taskMetrics();
/* 029 */
/* 030 */     inputadapter_input = inputs[0];
/* 031 */     this.sort_peakMemory = (org.apache.spark.sql.execution.metric.SQLMetric) references[1];
/* 032 */     this.sort_spillSize = (org.apache.spark.sql.execution.metric.SQLMetric) references[2];
/* 033 */     this.sort_sortTime = (org.apache.spark.sql.execution.metric.SQLMetric) references[3];
/* 034 */
/* 035 */   }
/* 036 */
/* 037 */   private void sort_addToSorter() throws java.io.IOException {
/* 038 */     while (inputadapter_input.hasNext()) {
/* 039 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
/* 040 */       sort_sorter.insertRow((UnsafeRow)inputadapter_row);
/* 041 */       if (shouldStop()) return;
/* 042 */     }
/* 043 */
/* 044 */   }
/* 045 */
/* 046 */   protected void processNext() throws java.io.IOException {
/* 047 */     if (sort_needToSort) {
/* 048 */       long sort_spillSizeBefore = sort_metrics.memoryBytesSpilled();
/* 049 */       sort_addToSorter();
/* 050 */       sort_sortedIter = sort_sorter.sort();
/* 051 */       sort_sortTime.add(sort_sorter.getSortTimeNanos() / 1000000);
/* 052 */       sort_peakMemory.add(sort_sorter.getPeakMemoryUsage());
/* 053 */       sort_spillSize.add(sort_metrics.memoryBytesSpilled() - sort_spillSizeBefore);
/* 054 */       sort_metrics.incPeakExecutionMemory(sort_sorter.getPeakMemoryUsage());
/* 055 */       sort_needToSort = false;
/* 056 */     }
/* 057 */
/* 058 */     while (sort_sortedIter.hasNext()) {
/* 059 */       UnsafeRow sort_outputRow = (UnsafeRow)sort_sortedIter.next();
/* 060 */
/* 061 */       append(sort_outputRow);
/* 062 */
/* 063 */       if (shouldStop()) return;
/* 064 */     }
/* 065 */   }
/* 066 */ }

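The generated iterator above is the whole-stage codegen path for SortExec: it casts references[0] to org.apache.spark.sql.execution.SortExec, feeds every input row into an UnsafeExternalRowSorter, and then streams the sorted rows out. A hedged DataFrame-level approximation of such a per-partition sort, assuming a DataFrame df with the schema logged earlier and an illustrative sort column (the actual sort keys are not visible in this fragment):

  // illustrative only; column choice is an assumption, not taken from the plan
  val sortedWithinPartitions = df.sortWithinPartitions("randLong")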
17/01/23 13:38:43 23612 DEBUG package$ExpressionCanonicalizer: 
=== Result of Batch CleanExpressions ===
!named_struct(randInt, input[0, int, true], randLong, input[1, bigint, true], randDouble, input[2, double, true], randFloat, input[3, float, true], randString, input[4, string, true]) AS _2#23   named_struct(randInt, input[0, int, true], randLong, input[1, bigint, true], randDouble, input[2, double, true], randFloat, input[3, float, true], randString, input[4, string, true])
!+- named_struct(randInt, input[0, int, true], randLong, input[1, bigint, true], randDouble, input[2, double, true], randFloat, input[3, float, true], randString, input[4, string, true])         :- randInt
!   :- randInt                                                                                                                                                                                     :- input[0, int, true]
!   :- input[0, int, true]                                                                                                                                                                         :- randLong
!   :- randLong                                                                                                                                                                                    :- input[1, bigint, true]
!   :- input[1, bigint, true]                                                                                                                                                                      :- randDouble
!   :- randDouble                                                                                                                                                                                  :- input[2, double, true]
!   :- input[2, double, true]                                                                                                                                                                      :- randFloat
!   :- randFloat                                                                                                                                                                                   :- input[3, float, true]
!   :- input[3, float, true]                                                                                                                                                                       :- randString
!   :- randString                                                                                                                                                                                  +- input[4, string, true]
!   +- input[4, string, true]                                                                                                                                                                      
        
17/01/23 13:38:43 23616 DEBUG WholeStageCodegenExec: 
/* 001 */ public Object generate(Object[] references) {
/* 002 */   return new GeneratedIterator(references);
/* 003 */ }
/* 004 */
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private scala.collection.Iterator scan_input;
/* 009 */   private org.apache.spark.sql.execution.metric.SQLMetric scan_numOutputRows;
/* 010 */   private org.apache.spark.sql.execution.metric.SQLMetric scan_scanTime;
/* 011 */   private long scan_scanTime1;
/* 012 */   private long scan_totalRows;
/* 013 */   private org.apache.spark.sql.execution.vectorized.ColumnarBatch scan_batch;
/* 014 */   private int scan_batchIdx;
/* 015 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance0;
/* 016 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance1;
/* 017 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance2;
/* 018 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance3;
/* 019 */   private org.apache.spark.sql.execution.vectorized.ColumnVector scan_colInstance4;
/* 020 */   private UnsafeRow scan_result;
/* 021 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder scan_holder;
/* 022 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter scan_rowWriter;
/* 023 */   private Object[] project_values;
/* 024 */   private UnsafeRow project_result;
/* 025 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder project_holder;
/* 026 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter project_rowWriter;
/* 027 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter project_rowWriter1;
/* 028 */
/* 029 */   public GeneratedIterator(Object[] references) {
/* 030 */     this.references = references;
/* 031 */   }
/* 032 */
/* 033 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 034 */     partitionIndex = index;
/* 035 */     this.inputs = inputs;
/* 036 */     scan_input = inputs[0];
/* 037 */     this.scan_numOutputRows = (org.apache.spark.sql.execution.metric.SQLMetric) references[0];
/* 038 */     this.scan_scanTime = (org.apache.spark.sql.execution.metric.SQLMetric) references[1];
/* 039 */     scan_scanTime1 = 0;
/* 040 */     scan_totalRows = 0;
/* 041 */     scan_batch = null;
/* 042 */     scan_batchIdx = 0;
/* 043 */     scan_colInstance0 = null;
/* 044 */     scan_colInstance1 = null;
/* 045 */     scan_colInstance2 = null;
/* 046 */     scan_colInstance3 = null;
/* 047 */     scan_colInstance4 = null;
/* 048 */     scan_result = new UnsafeRow(5);
/* 049 */     this.scan_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(scan_result, 32);
/* 050 */     this.scan_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(scan_holder, 5);
/* 051 */     this.project_values = null;
/* 052 */     project_result = new UnsafeRow(1);
/* 053 */     this.project_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(project_result, 32);
/* 054 */     this.project_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(project_holder, 1);
/* 055 */     this.project_rowWriter1 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(project_holder, 5);
/* 056 */
/* 057 */   }
/* 058 */
/* 059 */   private void scan_nextBatch() throws java.io.IOException {
/* 060 */     long getBatchStart = System.nanoTime();
/* 061 */     if (scan_input.hasNext()) {
/* 062 */       scan_batch = (org.apache.spark.sql.execution.vectorized.ColumnarBatch)scan_input.next();
/* 063 */       scan_numOutputRows.add(scan_batch.numRows());
/* 064 */       scan_batchIdx = 0;
/* 065 */       scan_totalRows+=scan_batch.numRows();
/* 066 */       scan_colInstance0 = scan_batch.column(0);
/* 067 */       scan_colInstance1 = scan_batch.column(1);
/* 068 */       scan_colInstance2 = scan_batch.column(2);
/* 069 */       scan_colInstance3 = scan_batch.column(3);
/* 070 */       scan_colInstance4 = scan_batch.column(4);
/* 071 */
/* 072 */     }
/* 073 */     scan_scanTime1 += System.nanoTime() - getBatchStart;
/* 074 */   }
/* 075 */
/* 076 */   protected void processNext() throws java.io.IOException {
/* 077 */     if (scan_batch == null) {
/* 078 */       scan_nextBatch();
/* 079 */     }
/* 080 */     while (scan_batch != null) {
/* 081 */       int numRows = scan_batch.numRows();
/* 082 */       while (scan_batchIdx < numRows) {
/* 083 */         int scan_rowIdx = scan_batchIdx++;
/* 084 */         project_values = new Object[5];
/* 085 */         boolean scan_isNull = scan_colInstance0.isNullAt(scan_rowIdx);
/* 086 */         int scan_value = scan_isNull ? -1 : (scan_colInstance0.getInt(scan_rowIdx));
/* 087 */         if (scan_isNull) {
/* 088 */           project_values[0] = null;
/* 089 */         } else {
/* 090 */           project_values[0] = scan_value;
/* 091 */         }
/* 092 */
/* 093 */         boolean scan_isNull1 = scan_colInstance1.isNullAt(scan_rowIdx);
/* 094 */         long scan_value1 = scan_isNull1 ? -1L : (scan_colInstance1.getLong(scan_rowIdx));
/* 095 */         if (scan_isNull1) {
/* 096 */           project_values[1] = null;
/* 097 */         } else {
/* 098 */           project_values[1] = scan_value1;
/* 099 */         }
/* 100 */
/* 101 */         boolean scan_isNull2 = scan_colInstance2.isNullAt(scan_rowIdx);
/* 102 */         double scan_value2 = scan_isNull2 ? -1.0 : (scan_colInstance2.getDouble(scan_rowIdx));
/* 103 */         if (scan_isNull2) {
/* 104 */           project_values[2] = null;
/* 105 */         } else {
/* 106 */           project_values[2] = scan_value2;
/* 107 */         }
/* 108 */
/* 109 */         boolean scan_isNull3 = scan_colInstance3.isNullAt(scan_rowIdx);
/* 110 */         float scan_value3 = scan_isNull3 ? -1.0f : (scan_colInstance3.getFloat(scan_rowIdx));
/* 111 */         if (scan_isNull3) {
/* 112 */           project_values[3] = null;
/* 113 */         } else {
/* 114 */           project_values[3] = scan_value3;
/* 115 */         }
/* 116 */
/* 117 */         boolean scan_isNull4 = scan_colInstance4.isNullAt(scan_rowIdx);
/* 118 */         UTF8String scan_value4 = scan_isNull4 ? null : (scan_colInstance4.getUTF8String(scan_rowIdx));
/* 119 */         if (scan_isNull4) {
/* 120 */           project_values[4] = null;
/* 121 */         } else {
/* 122 */           project_values[4] = scan_value4;
/* 123 */         }
/* 124 */         final InternalRow project_value = new org.apache.spark.sql.catalyst.expressions.GenericInternalRow(project_values);
/* 125 */         this.project_values = null;
/* 126 */         project_holder.reset();
/* 127 */
/* 128 */         // Remember the current cursor so that we can calculate how many bytes are
/* 129 */         // written later.
/* 130 */         final int project_tmpCursor = project_holder.cursor;
/* 131 */
/* 132 */         if (project_value instanceof UnsafeRow) {
/* 133 */           final int project_sizeInBytes = ((UnsafeRow) project_value).getSizeInBytes();
/* 134 */           // grow the global buffer before writing data.
/* 135 */           project_holder.grow(project_sizeInBytes);
/* 136 */           ((UnsafeRow) project_value).writeToMemory(project_holder.buffer, project_holder.cursor);
/* 137 */           project_holder.cursor += project_sizeInBytes;
/* 138 */
/* 139 */         } else {
/* 140 */           project_rowWriter1.reset();
/* 141 */
/* 142 */           final int project_fieldName = project_value.getInt(0);
/* 143 */           if (project_value.isNullAt(0)) {
/* 144 */             project_rowWriter1.setNullAt(0);
/* 145 */           } else {
/* 146 */             project_rowWriter1.write(0, project_fieldName);
/* 147 */           }
/* 148 */
/* 149 */           final long project_fieldName1 = project_value.getLong(1);
/* 150 */           if (project_value.isNullAt(1)) {
/* 151 */             project_rowWriter1.setNullAt(1);
/* 152 */           } else {
/* 153 */             project_rowWriter1.write(1, project_fieldName1);
/* 154 */           }
/* 155 */
/* 156 */           final double project_fieldName2 = project_value.getDouble(2);
/* 157 */           if (project_value.isNullAt(2)) {
/* 158 */             project_rowWriter1.setNullAt(2);
/* 159 */           } else {
/* 160 */             project_rowWriter1.write(2, project_fieldName2);
/* 161 */           }
/* 162 */
/* 163 */           final float project_fieldName3 = project_value.getFloat(3);
/* 164 */           if (project_value.isNullAt(3)) {
/* 165 */             project_rowWriter1.setNullAt(3);
/* 166 */           } else {
/* 167 */             project_rowWriter1.write(3, project_fieldName3);
/* 168 */           }
/* 169 */
/* 170 */           final UTF8String project_fieldName4 = project_value.getUTF8String(4);
/* 171 */           if (project_value.isNullAt(4)) {
/* 172 */             project_rowWriter1.setNullAt(4);
/* 173 */           } else {
/* 174 */             project_rowWriter1.write(4, project_fieldName4);
/* 175 */           }
/* 176 */         }
/* 177 */
/* 178 */         project_rowWriter.setOffsetAndSize(0, project_tmpCursor, project_holder.cursor - project_tmpCursor);
/* 179 */         project_result.setTotalSize(project_holder.totalSize());
/* 180 */         append(project_result);
/* 181 */         if (shouldStop()) return;
/* 182 */       }
/* 183 */       scan_batch = null;
/* 184 */       scan_nextBatch();
/* 185 */     }
/* 186 */     scan_scanTime.add(scan_scanTime1 / (1000 * 1000));
/* 187 */     if(false) {
/* 188 */       //long tx;
/* 189 */       //if (scan_totalRows == 0) tx = 0; else tx = (scan_scanTime1/scan_totalRows);
/* 190 */       //System.err.println("animesh - total scan time from RecordReaderIterator " + scan_scanTime1/1000 + " usec and project_result rows are: " + project_result.numFields() + " peakItems: " + peakItems + " appendCalled: " + appendCalled + " write/appendTime: " + project_rowWriter.getWriteTime()/1000 + " usec , time/item: " + tx + " nsec , rows: " + scan_totalRows);
/* 191 */     }
/* 192 */     scan_scanTime1 = 0;
/* 193 */   }
/* 194 */ }

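Taken together, the CleanExpressions output and the scan/project codegen above read the Parquet columns from a vectorized ColumnarBatch and pack the five fields into a single struct column (the named_struct aliased as _2). A hedged DataFrame-level approximation, assuming df carries the logged schema:

  import org.apache.spark.sql.functions.struct
  // approximation of the projected named_struct expression; the _2 alias mirrors the log
  val packed = df.select(
    struct(df("randInt"), df("randLong"), df("randDouble"), df("randFloat"), df("randString")).as("_2"))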
17/01/23 13:38:43 23623 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 299.4 KB, free 21.2 GB)
17/01/23 13:38:43 23623 DEBUG BlockManager: Put block broadcast_1 locally took  5 ms
17/01/23 13:38:43 23623 DEBUG BlockManager: Putting block broadcast_1 without replication took  5 ms
17/01/23 13:38:43 23655 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 65.2 KB, free 21.2 GB)
17/01/23 13:38:43 23656 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.40.0.24:52005 (size: 65.2 KB, free: 21.2 GB)
17/01/23 13:38:43 23657 DEBUG BlockManagerMaster: Updated info of block broadcast_1_piece0
17/01/23 13:38:43 23657 DEBUG BlockManager: Told master about block broadcast_1_piece0
17/01/23 13:38:43 23657 DEBUG BlockManager: Put block broadcast_1_piece0 locally took  2 ms
17/01/23 13:38:43 23657 DEBUG BlockManager: Putting block broadcast_1_piece0 without replication took  2 ms
17/01/23 13:38:43 23658 INFO SparkContext: Created broadcast 1 from save at <console>:38
17/01/23 13:38:43 23658 INFO ParquetFileFormat: Building a ParquetFileReader with batching: true struct : struct<randInt:int,randLong:bigint,randDouble:double,randFloat:float,randString:string>
17/01/23 13:38:43 23659 INFO FileSourceScanExec: Planning scan with bin packing, max size: 1280255769 bytes, formula is : Min(1280255769, max(0, 1280255769)), defaultParallism: 10, totalBytes: 12802557690, approx 10 tasks selectedPartition has : 1 items, these are tasks.
17/01/23 13:38:43 23661 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) +++
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:  + declared fields: 4
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.serialVersionUID
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:      private final org.apache.spark.sql.catalyst.expressions.codegen.CodeAndComment org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.cleanedSource$2
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:      private final java.lang.Object[] org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.references$1
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:      public final org.apache.spark.sql.execution.metric.SQLMetric org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.durationMs$1
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:  + declared methods: 2
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(java.lang.Object,java.lang.Object)
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:      public final scala.collection.Iterator org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(int,scala.collection.Iterator)
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:  + inner classes: 1
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:  + outer classes: 0
17/01/23 13:38:43 23662 DEBUG ClosureCleaner:  + outer objects: 0
17/01/23 13:38:43 23663 DEBUG ClosureCleaner:  + populating accessed fields because this is the starting closure
17/01/23 13:38:43 23664 DEBUG ClosureCleaner:  + fields accessed by starting closure: 0
17/01/23 13:38:43 23664 DEBUG ClosureCleaner:  + there are no enclosing objects!
17/01/23 13:38:43 23664 DEBUG ClosureCleaner:  +++ closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) is now cleaned +++
17/01/23 13:38:43 23667 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) +++
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:  + declared fields: 4
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.serialVersionUID
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:      private final org.apache.spark.sql.catalyst.expressions.codegen.CodeAndComment org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.cleanedSource$2
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:      private final java.lang.Object[] org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.references$1
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:      public final org.apache.spark.sql.execution.metric.SQLMetric org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.durationMs$1
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:  + declared methods: 2
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(java.lang.Object,java.lang.Object)
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:      public final scala.collection.Iterator org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8.apply(int,scala.collection.Iterator)
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:  + inner classes: 1
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:  + outer classes: 0
17/01/23 13:38:43 23668 DEBUG ClosureCleaner:  + outer objects: 0
17/01/23 13:38:43 23669 DEBUG ClosureCleaner:  + populating accessed fields because this is the starting closure
17/01/23 13:38:43 23670 DEBUG ClosureCleaner:  + fields accessed by starting closure: 0
17/01/23 13:38:43 23670 DEBUG ClosureCleaner:  + there are no enclosing objects!
17/01/23 13:38:43 23670 DEBUG ClosureCleaner:  +++ closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8) is now cleaned +++
17/01/23 13:38:43 23677 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$9) +++
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:  + declared fields: 1
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$9.serialVersionUID
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:  + declared methods: 2
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$9.apply(java.lang.Object,java.lang.Object)
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:      public final scala.collection.Iterator org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$9.apply(scala.collection.Iterator,scala.collection.Iterator)
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:  + inner classes: 0
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:  + outer classes: 0
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:  + outer objects: 0
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:  + populating accessed fields because this is the starting closure
17/01/23 13:38:43 23677 DEBUG ClosureCleaner:  + fields accessed by starting closure: 0
17/01/23 13:38:43 23678 DEBUG ClosureCleaner:  + there are no enclosing objects!
17/01/23 13:38:43 23678 DEBUG ClosureCleaner:  +++ closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$9) is now cleaned +++
17/01/23 13:38:43 23680 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10) +++
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:  + declared fields: 4
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.serialVersionUID
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:      private final org.apache.spark.sql.catalyst.expressions.codegen.CodeAndComment org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.cleanedSource$2
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:      private final java.lang.Object[] org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.references$1
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:      public final org.apache.spark.sql.execution.metric.SQLMetric org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.durationMs$1
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:  + declared methods: 2
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(java.lang.Object,java.lang.Object)
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:      public final scala.collection.Iterator org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(int,scala.collection.Iterator)
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:  + inner classes: 1
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$2
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:  + outer classes: 0
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:  + outer objects: 0
17/01/23 13:38:43 23682 DEBUG ClosureCleaner:  + populating accessed fields because this is the starting closure
17/01/23 13:38:43 23683 DEBUG ClosureCleaner:  + fields accessed by starting closure: 0
17/01/23 13:38:43 23683 DEBUG ClosureCleaner:  + there are no enclosing objects!
17/01/23 13:38:43 23683 DEBUG ClosureCleaner:  +++ closure <function2> (org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10) is now cleaned +++
17/01/23 13:38:43 23690 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3) +++
17/01/23 13:38:43 23690 DEBUG ClosureCleaner:  + declared fields: 2
17/01/23 13:38:43 23690 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.serialVersionUID
17/01/23 13:38:43 23690 DEBUG ClosureCleaner:      private final org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1 org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.$outer
17/01/23 13:38:43 23690 DEBUG ClosureCleaner:  + declared methods: 2
17/01/23 13:38:43 23690 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(java.lang.Object,java.lang.Object)
17/01/23 13:38:43 23690 DEBUG ClosureCleaner:      public final scala.Tuple2 org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
17/01/23 13:38:43 23690 DEBUG ClosureCleaner:  + inner classes: 0
17/01/23 13:38:43 23691 DEBUG ClosureCleaner:  + outer classes: 1
17/01/23 13:38:43 23691 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1
17/01/23 13:38:43 23691 DEBUG ClosureCleaner:  + outer objects: 1
17/01/23 13:38:43 23691 DEBUG ClosureCleaner:      <function0>
17/01/23 13:38:43 23692 DEBUG ClosureCleaner:  + populating accessed fields because this is the starting closure
17/01/23 13:38:43 23692 DEBUG ClosureCleaner:  + fields accessed by starting closure: 1
17/01/23 13:38:43 23693 DEBUG ClosureCleaner:      (class org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1,Set(description$1, committer$1))
17/01/23 13:38:43 23693 DEBUG ClosureCleaner:  + outermost object is a closure, so we clone it: (class org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1,<function0>)
17/01/23 13:38:43 23694 DEBUG ClosureCleaner:  + cloning the object <function0> of class org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1
17/01/23 13:38:43 23695 DEBUG ClosureCleaner:  + cleaning cloned closure <function0> recursively (org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1)
17/01/23 13:38:43 23695 DEBUG ClosureCleaner: +++ Cleaning closure <function0> (org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1) +++
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  + declared fields: 7
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      public static final long org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.serialVersionUID
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      private final org.apache.spark.sql.SparkSession org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.sparkSession$1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      private final org.apache.spark.sql.execution.QueryExecution org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.queryExecution$1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      public final org.apache.spark.internal.io.FileCommitProtocol org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.committer$1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      private final scala.Function1 org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.refreshFunction$1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      public final org.apache.hadoop.mapreduce.Job org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.job$1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      public final org.apache.spark.sql.execution.datasources.FileFormatWriter$WriteJobDescription org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.description$1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  + declared methods: 3
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      public void org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp()
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      public final void org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply()
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      public final java.lang.Object org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply()
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  + inner classes: 6
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$2
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$6
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$5
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$4
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  + outer classes: 0
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  + outer objects: 0
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  + fields accessed by starting closure: 1
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:      (class org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1,Set(description$1, committer$1))
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  + there are no enclosing objects!
17/01/23 13:38:43 23699 DEBUG ClosureCleaner:  +++ closure <function0> (org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1) is now cleaned +++
17/01/23 13:38:43 23700 DEBUG ClosureCleaner:  +++ closure <function2> (org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3) is now cleaned +++
17/01/23 13:38:43 23734 INFO SparkContext: Starting job: save at <console>:38
17/01/23 13:38:43 23735 DEBUG DAGScheduler: [atr] scheduling job with #partitions 10
17/01/23 13:38:43 23749 INFO DAGScheduler: Registering RDD 7 (save at <console>:38)
17/01/23 13:38:43 23750 INFO DAGScheduler: Registering RDD 2 (save at <console>:38)
17/01/23 13:38:43 23751 INFO DAGScheduler: Got job 0 (save at <console>:38) with 10 output partitions
17/01/23 13:38:43 23752 INFO DAGScheduler: Final stage: ResultStage 2 (save at <console>:38)
17/01/23 13:38:43 23753 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0, ShuffleMapStage 1)
17/01/23 13:38:43 23754 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0, ShuffleMapStage 1)
17/01/23 13:38:43 23756 DEBUG DAGScheduler: submitStage(ResultStage 2)
17/01/23 13:38:43 23756 DEBUG DAGScheduler: missing: List(ShuffleMapStage 0, ShuffleMapStage 1)
17/01/23 13:38:43 23757 DEBUG DAGScheduler: submitStage(ShuffleMapStage 0)
17/01/23 13:38:43 23757 DEBUG DAGScheduler: missing: List()
17/01/23 13:38:43 23758 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[7] at save at <console>:38), which has no missing parents
17/01/23 13:38:43 23758 DEBUG DAGScheduler: submitMissingTasks(ShuffleMapStage 0)
17/01/23 13:38:43 23759 DEBUG DAGScheduler: [atr] number of partitions to compute 10
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 0 prefers Host (head:0) : flex22.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 1 prefers Host (head:0) : flex16.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 2 prefers Host (head:0) : flex23.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 3 prefers Host (head:0) : flex20.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 4 prefers Host (head:0) : flex15.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 5 prefers Host (head:0) : flex21.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 6 prefers Host (head:0) : flex19.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 7 prefers Host (head:0) : flex14.zurich.ibm.com
17/01/23 13:38:43 23764 DEBUG DAGScheduler: [atr] partitionID: 8 prefers Host (head:0) : flex18.zurich.ibm.com
17/01/23 13:38:43 23765 DEBUG DAGScheduler: [atr] partitionID: 9 prefers Host (head:0) : flex13.zurich.ibm.com
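The [atr] lines above print the preferred host for each partition of the stage. A minimal sketch of inspecting the same information from the shell (the path below is a placeholder, not taken from this log):

  val inputRdd = spark.read.parquet("/some/parquet/path").rdd   // placeholder path
  inputRdd.partitions.zipWithIndex.foreach { case (p, i) =>
    println(s"partition $i prefers: " + inputRdd.preferredLocations(p).mkString(", "))
  }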
17/01/23 13:38:43 23773 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 15.0 KB, free 21.2 GB)
17/01/23 13:38:43 23774 DEBUG BlockManager: Put block broadcast_2 locally took  2 ms
17/01/23 13:38:43 23774 DEBUG BlockManager: Putting block broadcast_2 without replication took  2 ms
17/01/23 13:38:43 23784 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 15.0 KB, free 21.2 GB)
17/01/23 13:38:43 23785 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.40.0.24:52005 (size: 15.0 KB, free: 21.2 GB)
17/01/23 13:38:43 23785 DEBUG BlockManagerMaster: Updated info of block broadcast_2_piece0
17/01/23 13:38:43 23785 DEBUG BlockManager: Told master about block broadcast_2_piece0
17/01/23 13:38:43 23785 DEBUG BlockManager: Put block broadcast_2_piece0 locally took  1 ms
17/01/23 13:38:43 23785 DEBUG BlockManager: Putting block broadcast_2_piece0 without replication took  1 ms
17/01/23 13:38:43 23785 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1004
17/01/23 13:38:43 23790 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=23, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=36352
17/01/23 13:38:43 23790 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 44622 lastFlushOffset 36774
17/01/23 13:38:43 23790 INFO DAGScheduler: Submitting 10 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[7] at save at <console>:38)
17/01/23 13:38:43 23790 DEBUG DFSClient: Queued packet 23
17/01/23 13:38:43 23790 DEBUG DFSClient: Waiting for ack for: 23
17/01/23 13:38:43 23790 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:23 offsetInBlock:36352 lastPacketInBlock:false lastByteOffsetInBlock: 44622
17/01/23 13:38:43 23791 DEBUG DAGScheduler: New pending partitions: Set(0, 9, 1, 5, 2, 6, 3, 7, 4, 8)
17/01/23 13:38:43 23791 DEBUG DFSClient: DFSClient seqno: 23 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:43 23791 INFO YarnScheduler: Adding task set 0.0 with 10 tasks
17/01/23 13:38:43 23797 DEBUG TaskSetManager: Epoch for TaskSet 0.0: 0
17/01/23 13:38:43 23990 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #97
17/01/23 13:38:43 23991 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #97
17/01/23 13:38:43 23991 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 2ms
17/01/23 13:38:43 23996 DEBUG TaskSetManager: Valid locality levels for TaskSet 0.0: NODE_LOCAL, RACK_LOCAL, ANY
17/01/23 13:38:43 23996 DEBUG ContextCleaner: Got cleaning task CleanAccum(1)
17/01/23 13:38:43 23998 DEBUG ContextCleaner: Cleaning accumulator 1
17/01/23 13:38:43 23999 DEBUG DAGScheduler: submitStage(ShuffleMapStage 1)
17/01/23 13:38:43 24000 DEBUG DAGScheduler: missing: List()
17/01/23 13:38:43 24000 INFO ContextCleaner: Cleaned accumulator 1
17/01/23 13:38:43 24000 DEBUG ContextCleaner: Got cleaning task CleanAccum(0)
17/01/23 13:38:43 24000 DEBUG ContextCleaner: Cleaning accumulator 0
17/01/23 13:38:43 24000 INFO ContextCleaner: Cleaned accumulator 0
17/01/23 13:38:43 24000 INFO DAGScheduler: Submitting ShuffleMapStage 1 (MapPartitionsRDD[2] at save at <console>:38), which has no missing parents
17/01/23 13:38:43 24000 DEBUG DAGScheduler: submitMissingTasks(ShuffleMapStage 1)
17/01/23 13:38:43 24000 DEBUG DAGScheduler: [atr] number of partitions to compute 10
17/01/23 13:38:43 24000 DEBUG DAGScheduler: [atr] partitionID: 0 prefers Host (head:0) : flex16.zurich.ibm.com
17/01/23 13:38:43 24000 DEBUG DAGScheduler: [atr] partitionID: 1 prefers Host (head:0) : flex18.zurich.ibm.com
17/01/23 13:38:43 24003 DEBUG DAGScheduler: [atr] partitionID: 2 prefers Host (head:0) : flex14.zurich.ibm.com
17/01/23 13:38:43 24003 DEBUG DAGScheduler: [atr] partitionID: 3 prefers Host (head:0) : flex15.zurich.ibm.com
17/01/23 13:38:43 24003 DEBUG DAGScheduler: [atr] partitionID: 4 prefers Host (head:0) : flex19.zurich.ibm.com
17/01/23 13:38:43 24003 DEBUG DAGScheduler: [atr] partitionID: 5 prefers Host (head:0) : flex23.zurich.ibm.com
17/01/23 13:38:43 24003 DEBUG DAGScheduler: [atr] partitionID: 6 prefers Host (head:0) : flex21.zurich.ibm.com
17/01/23 13:38:43 24004 DEBUG DAGScheduler: [atr] partitionID: 7 prefers Host (head:0) : flex20.zurich.ibm.com
17/01/23 13:38:43 24004 DEBUG DAGScheduler: [atr] partitionID: 8 prefers Host (head:0) : flex13.zurich.ibm.com
17/01/23 13:38:43 24004 DEBUG DAGScheduler: [atr] partitionID: 9 prefers Host (head:0) : flex22.zurich.ibm.com
17/01/23 13:38:43 24004 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 0
17/01/23 13:38:43 24006 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 15.0 KB, free 21.2 GB)
17/01/23 13:38:43 24006 DEBUG BlockManager: Put block broadcast_3 locally took  0 ms
17/01/23 13:38:43 24006 DEBUG BlockManager: Putting block broadcast_3 without replication took  0 ms
17/01/23 13:38:43 24012 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 15.0 KB, free 21.2 GB)
17/01/23 13:38:43 24013 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.40.0.24:52005 (size: 15.0 KB, free: 21.2 GB)
17/01/23 13:38:43 24013 DEBUG BlockManagerMaster: Updated info of block broadcast_3_piece0
17/01/23 13:38:43 24013 DEBUG BlockManager: Told master about block broadcast_3_piece0
17/01/23 13:38:43 24013 DEBUG BlockManager: Put block broadcast_3_piece0 locally took  1 ms
17/01/23 13:38:43 24013 DEBUG BlockManager: Putting block broadcast_3_piece0 without replication took  1 ms
17/01/23 13:38:43 24014 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1004
17/01/23 13:38:43 24014 INFO DAGScheduler: Submitting 10 missing tasks from ShuffleMapStage 1 (MapPartitionsRDD[2] at save at <console>:38)
17/01/23 13:38:43 24014 DEBUG DAGScheduler: New pending partitions: Set(0, 9, 1, 5, 2, 6, 3, 7, 4, 8)
17/01/23 13:38:43 24014 INFO YarnScheduler: Adding task set 1.0 with 10 tasks
17/01/23 13:38:43 24023 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 0, flex13.zurich.ibm.com, executor 2, partition 9, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24026 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 1, flex19.zurich.ibm.com, executor 4, partition 6, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24027 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 2, flex15.zurich.ibm.com, executor 7, partition 4, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24027 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, flex20.zurich.ibm.com, executor 3, partition 3, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24028 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 4, flex18.zurich.ibm.com, executor 9, partition 8, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24028 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 5, flex22.zurich.ibm.com, executor 5, partition 0, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24029 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 6, flex23.zurich.ibm.com, executor 1, partition 2, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24029 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 7, flex21.zurich.ibm.com, executor 8, partition 5, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24030 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 8, flex16.zurich.ibm.com, executor 10, partition 1, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24031 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 9, flex14.zurich.ibm.com, executor 6, partition 7, NODE_LOCAL, 6598 bytes)
17/01/23 13:38:43 24031 DEBUG TaskSetManager: Epoch for TaskSet 1.0: 0
17/01/23 13:38:43 24031 DEBUG TaskSetManager: Valid locality levels for TaskSet 1.0: NODE_LOCAL, RACK_LOCAL, ANY
17/01/23 13:38:43 24033 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 0 on executor id: 2 hostname: flex13.zurich.ibm.com.
17/01/23 13:38:43 24035 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 1 on executor id: 4 hostname: flex19.zurich.ibm.com.
17/01/23 13:38:43 24035 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 2 on executor id: 7 hostname: flex15.zurich.ibm.com.
17/01/23 13:38:43 24036 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 3 on executor id: 3 hostname: flex20.zurich.ibm.com.
17/01/23 13:38:43 24036 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 4 on executor id: 9 hostname: flex18.zurich.ibm.com.
17/01/23 13:38:43 24036 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 5 on executor id: 5 hostname: flex22.zurich.ibm.com.
17/01/23 13:38:43 24037 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 6 on executor id: 1 hostname: flex23.zurich.ibm.com.
17/01/23 13:38:43 24037 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 7 on executor id: 8 hostname: flex21.zurich.ibm.com.
17/01/23 13:38:43 24038 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 8 on executor id: 10 hostname: flex16.zurich.ibm.com.
17/01/23 13:38:43 24038 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 9 on executor id: 6 hostname: flex14.zurich.ibm.com.
17/01/23 13:38:43 24039 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:43 24039 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:43 24039 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:43 24039 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24147 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24148 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24148 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24148 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24149 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24149 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24150 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24150 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24150 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24155 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24155 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24157 DEBUG BlockManager: Getting local block broadcast_2_piece0 as bytes
17/01/23 13:38:43 24157 DEBUG BlockManager: Level for block broadcast_2_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:43 24175 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex20.zurich.ibm.com:55917 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24175 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex14.zurich.ibm.com:51532 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24176 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex18.zurich.ibm.com:58157 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24176 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex22.zurich.ibm.com:57739 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24176 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex21.zurich.ibm.com:59637 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24176 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex19.zurich.ibm.com:58394 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24177 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex15.zurich.ibm.com:58202 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24177 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex23.zurich.ibm.com:51233 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24177 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex13.zurich.ibm.com:58332 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:43 24177 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on flex16.zurich.ibm.com:43222 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:44 24887 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:44 24887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:44 24895 DEBUG BlockManager: Getting local block broadcast_1_piece0 as bytes
17/01/23 13:38:44 24895 DEBUG BlockManager: Getting local block broadcast_1_piece0 as bytes
17/01/23 13:38:44 24895 DEBUG BlockManager: Level for block broadcast_1_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:44 24895 DEBUG BlockManager: Level for block broadcast_1_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:44 24899 DEBUG BlockManager: Getting local block broadcast_1_piece0 as bytes
17/01/23 13:38:44 24899 DEBUG BlockManager: Level for block broadcast_1_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:44 24899 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex22.zurich.ibm.com:57739 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24899 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex13.zurich.ibm.com:58332 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24900 DEBUG BlockManager: Getting local block broadcast_1_piece0 as bytes
17/01/23 13:38:44 24900 DEBUG BlockManager: Level for block broadcast_1_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:44 24900 DEBUG BlockManager: Getting local block broadcast_1_piece0 as bytes
17/01/23 13:38:44 24900 DEBUG BlockManager: Level for block broadcast_1_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:44 24903 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex16.zurich.ibm.com:43222 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24904 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex18.zurich.ibm.com:58157 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24904 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex21.zurich.ibm.com:59637 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24908 DEBUG BlockManager: Getting local block broadcast_1_piece0 as bytes
17/01/23 13:38:44 24908 DEBUG BlockManager: Level for block broadcast_1_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:44 24912 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex23.zurich.ibm.com:51233 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24922 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex14.zurich.ibm.com:51532 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24925 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex19.zurich.ibm.com:58394 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24926 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex20.zurich.ibm.com:55917 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24929 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on flex15.zurich.ibm.com:58202 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:44 24992 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #98
17/01/23 13:38:44 24993 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #98
17/01/23 13:38:44 24993 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:45 25887 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:45 25887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:45 25993 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #99
17/01/23 13:38:45 25994 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #99
17/01/23 13:38:45 25994 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:46 26888 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:46 26888 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:46 26994 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #100
17/01/23 13:38:46 26995 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #100
17/01/23 13:38:46 26995 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:47 27887 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:47 27887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:47 27995 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #101
17/01/23 13:38:47 27996 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #101
17/01/23 13:38:47 27996 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:48 28887 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:48 28887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:48 28996 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #102
17/01/23 13:38:48 28997 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #102
17/01/23 13:38:48 28997 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:49 29887 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:49 29887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:49 29998 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #103
17/01/23 13:38:49 29998 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #103
17/01/23 13:38:49 29998 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:50 30887 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 10
17/01/23 13:38:50 30888 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:50 30999 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #104
17/01/23 13:38:50 30999 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #104
17/01/23 13:38:50 30999 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:51 31768 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 9
17/01/23 13:38:51 31768 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:51 31768 DEBUG TaskSetManager: No tasks for locality level NODE_LOCAL, so moving to locality level RACK_LOCAL
17/01/23 13:38:51 31768 DEBUG TaskSetManager: No tasks for locality level RACK_LOCAL, so moving to locality level ANY
17/01/23 13:38:51 31771 DEBUG TaskSetManager: Moving to RACK_LOCAL after waiting for 3000ms
17/01/23 13:38:51 31771 DEBUG TaskSetManager: Moving to ANY after waiting for 3000ms
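The 3000 ms wait above matches the default value of spark.locality.wait, after which the scheduler relaxes from NODE_LOCAL to RACK_LOCAL and then ANY. A hedged way to inspect or change it:

  // read the effective value (falls back to the 3s default if the conf was never set)
  spark.sparkContext.getConf.get("spark.locality.wait", "3s")
  // or adjust it at launch, e.g.: --conf spark.locality.wait=1s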
17/01/23 13:38:51 31772 INFO TaskSetManager: Starting task 6.0 in stage 1.0 (TID 10, flex21.zurich.ibm.com, executor 8, partition 6, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 31772 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 10 on executor id: 8 hostname: flex21.zurich.ibm.com.
17/01/23 13:38:51 31796 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 7) in 7766 ms on flex21.zurich.ibm.com (executor 8) (1/10)
17/01/23 13:38:51 31799 DEBUG DAGScheduler: ShuffleMapTask finished on 8
17/01/23 13:38:51 31807 DEBUG BlockManager: Getting local block broadcast_3_piece0 as bytes
17/01/23 13:38:51 31807 DEBUG BlockManager: Level for block broadcast_3_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 31811 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex21.zurich.ibm.com:59637 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 31830 DEBUG BlockManager: Getting local block broadcast_0_piece0 as bytes
17/01/23 13:38:51 31830 DEBUG BlockManager: Level for block broadcast_0_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 31833 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex21.zurich.ibm.com:59637 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 31880 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 8
17/01/23 13:38:51 31880 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 1
17/01/23 13:38:51 31880 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 11, flex14.zurich.ibm.com, executor 6, partition 2, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 31881 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 11 on executor id: 6 hostname: flex14.zurich.ibm.com.
17/01/23 13:38:51 31881 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 7
17/01/23 13:38:51 31882 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 2
17/01/23 13:38:51 31882 INFO TaskSetManager: Starting task 7.0 in stage 1.0 (TID 12, flex20.zurich.ibm.com, executor 3, partition 7, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 31882 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 12 on executor id: 3 hostname: flex20.zurich.ibm.com.
17/01/23 13:38:51 31884 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 9) in 7854 ms on flex14.zurich.ibm.com (executor 6) (2/10)
17/01/23 13:38:51 31885 DEBUG DAGScheduler: ShuffleMapTask finished on 6
17/01/23 13:38:51 31886 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 7859 ms on flex20.zurich.ibm.com (executor 3) (3/10)
17/01/23 13:38:51 31887 DEBUG DAGScheduler: ShuffleMapTask finished on 3
17/01/23 13:38:51 31888 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 7
17/01/23 13:38:51 31888 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 3
17/01/23 13:38:51 31918 DEBUG BlockManager: Getting local block broadcast_3_piece0 as bytes
17/01/23 13:38:51 31919 DEBUG BlockManager: Level for block broadcast_3_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 31921 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex14.zurich.ibm.com:51532 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 31932 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex20.zurich.ibm.com:55917 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 31940 DEBUG BlockManager: Getting local block broadcast_0_piece0 as bytes
17/01/23 13:38:51 31940 DEBUG BlockManager: Level for block broadcast_0_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 31944 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex14.zurich.ibm.com:51532 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 31951 DEBUG BlockManager: Getting local block broadcast_0_piece0 as bytes
17/01/23 13:38:51 31951 DEBUG BlockManager: Level for block broadcast_0_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 31953 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex20.zurich.ibm.com:55917 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 31967 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 6
17/01/23 13:38:51 31967 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 3
17/01/23 13:38:51 31968 INFO TaskSetManager: Starting task 9.0 in stage 1.0 (TID 13, flex22.zurich.ibm.com, executor 5, partition 9, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 31968 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 13 on executor id: 5 hostname: flex22.zurich.ibm.com.
17/01/23 13:38:51 31969 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 5
17/01/23 13:38:51 31969 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 4
17/01/23 13:38:51 31971 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 14, flex18.zurich.ibm.com, executor 9, partition 1, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 31971 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 14 on executor id: 9 hostname: flex18.zurich.ibm.com.
17/01/23 13:38:51 31971 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 4) in 7944 ms on flex18.zurich.ibm.com (executor 9) (4/10)
17/01/23 13:38:51 31972 DEBUG DAGScheduler: ShuffleMapTask finished on 9
17/01/23 13:38:51 31973 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 5) in 7945 ms on flex22.zurich.ibm.com (executor 5) (5/10)
17/01/23 13:38:51 31973 DEBUG DAGScheduler: ShuffleMapTask finished on 5
17/01/23 13:38:51 31987 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 4
17/01/23 13:38:51 31987 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 5
17/01/23 13:38:51 31988 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 15, flex23.zurich.ibm.com, executor 1, partition 5, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 31989 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 15 on executor id: 1 hostname: flex23.zurich.ibm.com.
17/01/23 13:38:51 31989 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 6) in 7961 ms on flex23.zurich.ibm.com (executor 1) (6/10)
17/01/23 13:38:51 31989 DEBUG DAGScheduler: ShuffleMapTask finished on 1
17/01/23 13:38:51 31999 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #105
17/01/23 13:38:51 32000 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #105
17/01/23 13:38:51 32000 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:51 32023 DEBUG BlockManager: Getting local block broadcast_3_piece0 as bytes
17/01/23 13:38:51 32023 DEBUG BlockManager: Level for block broadcast_3_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 32024 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex22.zurich.ibm.com:57739 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 32026 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex23.zurich.ibm.com:51233 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 32027 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex18.zurich.ibm.com:58157 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 32052 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex18.zurich.ibm.com:58157 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 32053 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex23.zurich.ibm.com:51233 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 32065 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex22.zurich.ibm.com:57739 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 32127 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 3
17/01/23 13:38:51 32127 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 6
17/01/23 13:38:51 32128 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 16, flex16.zurich.ibm.com, executor 10, partition 0, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 32128 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 16 on executor id: 10 hostname: flex16.zurich.ibm.com.
17/01/23 13:38:51 32128 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 8) in 8098 ms on flex16.zurich.ibm.com (executor 10) (7/10)
17/01/23 13:38:51 32128 DEBUG DAGScheduler: ShuffleMapTask finished on 10
17/01/23 13:38:51 32132 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 2
17/01/23 13:38:51 32132 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 7
17/01/23 13:38:51 32133 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 17, flex15.zurich.ibm.com, executor 7, partition 3, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 32133 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 2) in 8107 ms on flex15.zurich.ibm.com (executor 7) (8/10)
17/01/23 13:38:51 32133 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 17 on executor id: 7 hostname: flex15.zurich.ibm.com.
17/01/23 13:38:51 32133 DEBUG DAGScheduler: ShuffleMapTask finished on 7
17/01/23 13:38:51 32164 DEBUG BlockManager: Getting local block broadcast_3_piece0 as bytes
17/01/23 13:38:51 32164 DEBUG BlockManager: Level for block broadcast_3_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 32165 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 1
17/01/23 13:38:51 32165 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 8
17/01/23 13:38:51 32166 INFO TaskSetManager: Starting task 8.0 in stage 1.0 (TID 18, flex13.zurich.ibm.com, executor 2, partition 8, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 32166 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 18 on executor id: 2 hostname: flex13.zurich.ibm.com.
17/01/23 13:38:51 32166 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 0) in 8160 ms on flex13.zurich.ibm.com (executor 2) (9/10)
17/01/23 13:38:51 32166 DEBUG DAGScheduler: ShuffleMapTask finished on 2
17/01/23 13:38:51 32168 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex16.zurich.ibm.com:43222 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 32170 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex15.zurich.ibm.com:58202 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 32193 DEBUG YarnScheduler: parentName: , name: TaskSet_0.0, runningTasks: 0
17/01/23 13:38:51 32193 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 9
17/01/23 13:38:51 32194 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex15.zurich.ibm.com:58202 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 32194 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 19, flex19.zurich.ibm.com, executor 4, partition 4, NODE_LOCAL, 6597 bytes)
17/01/23 13:38:51 32194 DEBUG YarnSchedulerBackend$YarnDriverEndpoint: Launching task 19 on executor id: 4 hostname: flex19.zurich.ibm.com.
17/01/23 13:38:51 32194 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 1) in 8169 ms on flex19.zurich.ibm.com (executor 4) (10/10)
17/01/23 13:38:51 32195 DEBUG DAGScheduler: ShuffleMapTask finished on 4
17/01/23 13:38:51 32196 INFO DAGScheduler: ShuffleMapStage 0 (save at <console>:38) finished in 8.196 s
17/01/23 13:38:51 32196 INFO YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
17/01/23 13:38:51 32197 INFO DAGScheduler: looking for newly runnable stages
17/01/23 13:38:51 32197 INFO DAGScheduler: running: Set(ShuffleMapStage 1)
17/01/23 13:38:51 32198 INFO DAGScheduler: waiting: Set(ResultStage 2)
17/01/23 13:38:51 32198 INFO DAGScheduler: failed: Set()
17/01/23 13:38:51 32199 DEBUG MapOutputTrackerMaster: Increasing epoch to 1
17/01/23 13:38:51 32202 DEBUG DAGScheduler: submitStage(ResultStage 2)
17/01/23 13:38:51 32202 DEBUG DAGScheduler: missing: List(ShuffleMapStage 1)
17/01/23 13:38:51 32202 DEBUG DAGScheduler: submitStage(ShuffleMapStage 1)
17/01/23 13:38:51 32203 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=24, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=44544
17/01/23 13:38:51 32203 DEBUG DFSClient: DFSClient writeChunk packet full seqno=24, src=/shared-spark-logs/application_1483626889488_0446.inprogress, bytesCurBlock=109568, blockSize=134217728, appendChunk=false
17/01/23 13:38:51 32203 DEBUG DFSClient: Queued packet 24
17/01/23 13:38:51 32203 DEBUG DFSClient: computePacketChunkSize: src=/shared-spark-logs/application_1483626889488_0446.inprogress, chunkSize=516, chunksPerPacket=127, packetSize=65532
17/01/23 13:38:51 32203 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=25, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=109568
17/01/23 13:38:51 32203 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:24 offsetInBlock:44544 lastPacketInBlock:false lastByteOffsetInBlock: 109568
17/01/23 13:38:51 32203 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 148577 lastFlushOffset 44622
17/01/23 13:38:51 32204 DEBUG DFSClient: Queued packet 25
17/01/23 13:38:51 32204 DEBUG DFSClient: Waiting for ack for: 25
17/01/23 13:38:51 32204 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:25 offsetInBlock:109568 lastPacketInBlock:false lastByteOffsetInBlock: 148577
17/01/23 13:38:51 32204 DEBUG DFSClient: DFSClient seqno: 24 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:51 32204 DEBUG DFSClient: DFSClient seqno: 25 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:51 32207 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex13.zurich.ibm.com:58332 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 32207 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex16.zurich.ibm.com:43222 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 32228 DEBUG BlockManager: Getting local block broadcast_0_piece0 as bytes
17/01/23 13:38:51 32228 DEBUG BlockManager: Level for block broadcast_0_piece0 is StorageLevel(disk, memory, 1 replicas)
17/01/23 13:38:51 32230 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex13.zurich.ibm.com:58332 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:51 32232 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on flex19.zurich.ibm.com:58394 (size: 15.0 KB, free: 34.0 GB)
17/01/23 13:38:51 32256 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on flex19.zurich.ibm.com:58394 (size: 65.2 KB, free: 34.0 GB)
17/01/23 13:38:52 32665 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo: closed
17/01/23 13:38:52 32665 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo: stopped, remaining connections 1
17/01/23 13:38:52 32888 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 10
17/01/23 13:38:52 33001 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #106
17/01/23 13:38:52 33001 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #106
17/01/23 13:38:52 33001 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:53 33887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 10
17/01/23 13:38:53 34001 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #107
17/01/23 13:38:53 34002 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #107
17/01/23 13:38:53 34002 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:54 34888 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 10
17/01/23 13:38:54 35002 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #108
17/01/23 13:38:54 35003 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #108
17/01/23 13:38:54 35003 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:55 35887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 10
17/01/23 13:38:55 36003 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #109
17/01/23 13:38:55 36004 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #109
17/01/23 13:38:55 36004 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:56 36887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 10
17/01/23 13:38:56 37004 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #110
17/01/23 13:38:56 37004 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #110
17/01/23 13:38:56 37004 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 0ms
17/01/23 13:38:57 37425 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 9
17/01/23 13:38:57 37425 DEBUG TaskSetManager: No tasks for locality level NODE_LOCAL, so moving to locality level RACK_LOCAL
17/01/23 13:38:57 37425 DEBUG TaskSetManager: No tasks for locality level RACK_LOCAL, so moving to locality level ANY
17/01/23 13:38:57 37426 INFO TaskSetManager: Finished task 6.0 in stage 1.0 (TID 10) in 5655 ms on flex21.zurich.ibm.com (executor 8) (1/10)
17/01/23 13:38:57 37426 DEBUG DAGScheduler: ShuffleMapTask finished on 8
17/01/23 13:38:57 37672 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 8
17/01/23 13:38:57 37673 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 15) in 5686 ms on flex23.zurich.ibm.com (executor 1) (2/10)
17/01/23 13:38:57 37673 DEBUG DAGScheduler: ShuffleMapTask finished on 1
17/01/23 13:38:57 37711 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 7
17/01/23 13:38:57 37711 INFO TaskSetManager: Finished task 7.0 in stage 1.0 (TID 12) in 5829 ms on flex20.zurich.ibm.com (executor 3) (3/10)
17/01/23 13:38:57 37711 DEBUG DAGScheduler: ShuffleMapTask finished on 3
17/01/23 13:38:57 37751 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 6
17/01/23 13:38:57 37752 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 14) in 5781 ms on flex18.zurich.ibm.com (executor 9) (4/10)
17/01/23 13:38:57 37752 DEBUG DAGScheduler: ShuffleMapTask finished on 9
17/01/23 13:38:57 37831 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 5
17/01/23 13:38:57 37831 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 11) in 5951 ms on flex14.zurich.ibm.com (executor 6) (5/10)
17/01/23 13:38:57 37832 DEBUG DAGScheduler: ShuffleMapTask finished on 6
17/01/23 13:38:57 37887 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 5
17/01/23 13:38:57 37963 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 4
17/01/23 13:38:57 37964 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 16) in 5837 ms on flex16.zurich.ibm.com (executor 10) (6/10)
17/01/23 13:38:57 37964 DEBUG DAGScheduler: ShuffleMapTask finished on 10
17/01/23 13:38:57 37983 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 3
17/01/23 13:38:57 37983 INFO TaskSetManager: Finished task 9.0 in stage 1.0 (TID 13) in 6016 ms on flex22.zurich.ibm.com (executor 5) (7/10)
17/01/23 13:38:57 37983 DEBUG DAGScheduler: ShuffleMapTask finished on 5
17/01/23 13:38:57 38005 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #111
17/01/23 13:38:57 38005 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #111
17/01/23 13:38:57 38005 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 0ms
17/01/23 13:38:57 38047 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 2
17/01/23 13:38:57 38048 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 17) in 5916 ms on flex15.zurich.ibm.com (executor 7) (8/10)
17/01/23 13:38:57 38048 DEBUG DAGScheduler: ShuffleMapTask finished on 7
17/01/23 13:38:57 38068 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 1
17/01/23 13:38:57 38069 INFO TaskSetManager: Finished task 4.0 in stage 1.0 (TID 19) in 5876 ms on flex19.zurich.ibm.com (executor 4) (9/10)
17/01/23 13:38:57 38069 DEBUG DAGScheduler: ShuffleMapTask finished on 4
17/01/23 13:38:57 38217 DEBUG YarnScheduler: parentName: , name: TaskSet_1.0, runningTasks: 0
17/01/23 13:38:57 38217 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 18) in 6052 ms on flex13.zurich.ibm.com (executor 2) (10/10)
17/01/23 13:38:57 38217 INFO YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 
17/01/23 13:38:57 38217 DEBUG DAGScheduler: ShuffleMapTask finished on 2
17/01/23 13:38:57 38218 INFO DAGScheduler: ShuffleMapStage 1 (save at <console>:38) finished in 14.186 s
17/01/23 13:38:57 38218 INFO DAGScheduler: looking for newly runnable stages
17/01/23 13:38:57 38219 INFO DAGScheduler: running: Set()
17/01/23 13:38:57 38219 INFO DAGScheduler: waiting: Set(ResultStage 2)
17/01/23 13:38:57 38219 INFO DAGScheduler: failed: Set()
17/01/23 13:38:57 38219 DEBUG MapOutputTrackerMaster: Increasing epoch to 2
17/01/23 13:38:57 38219 DEBUG DAGScheduler: submitStage(ResultStage 2)
17/01/23 13:38:57 38219 DEBUG DAGScheduler: missing: List()
17/01/23 13:38:57 38219 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[11] at save at <console>:38), which has no missing parents
17/01/23 13:38:57 38219 DEBUG DAGScheduler: submitMissingTasks(ResultStage 2)
17/01/23 13:38:57 38221 DEBUG DAGScheduler: [atr] number of partitions to compute 10
17/01/23 13:38:57 38221 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=26, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=148480
17/01/23 13:38:57 38222 DEBUG DFSClient: DFSClient writeChunk packet full seqno=26, src=/shared-spark-logs/application_1483626889488_0446.inprogress, bytesCurBlock=213504, blockSize=134217728, appendChunk=false
17/01/23 13:38:57 38222 DEBUG DFSClient: Queued packet 26
17/01/23 13:38:57 38222 DEBUG DFSClient: computePacketChunkSize: src=/shared-spark-logs/application_1483626889488_0446.inprogress, chunkSize=516, chunksPerPacket=127, packetSize=65532
17/01/23 13:38:57 38222 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=27, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=213504
17/01/23 13:38:57 38222 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:26 offsetInBlock:148480 lastPacketInBlock:false lastByteOffsetInBlock: 213504
17/01/23 13:38:57 38222 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 241126 lastFlushOffset 148577
17/01/23 13:38:57 38222 DEBUG DFSClient: Queued packet 27
17/01/23 13:38:57 38222 DEBUG DFSClient: Waiting for ack for: 27
17/01/23 13:38:57 38222 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:27 offsetInBlock:213504 lastPacketInBlock:false lastByteOffsetInBlock: 241126
17/01/23 13:38:57 38223 DEBUG DFSClient: DFSClient seqno: 26 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:57 38223 DEBUG DFSClient: DFSClient seqno: 27 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:57 38228 INFO YarnScheduler: Cancelling stage 2
17/01/23 13:38:57 38229 INFO DAGScheduler: ResultStage 2 (save at <console>:38) failed in Unknown s due to Job aborted due to stage failure: Task creation failed: java.util.NoSuchElementException: head of empty list
java.util.NoSuchElementException: head of empty list
	at scala.collection.immutable.Nil$.head(List.scala:420)
	at scala.collection.immutable.Nil$.head(List.scala:417)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
	at org.apache.spark.scheduler.DAGScheduler.logDebug(DAGScheduler.scala:114)
	at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:965)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:963)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:963)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:766)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:765)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:765)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1237)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1658)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

17/01/23 13:38:57 38230 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=28, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=240640
17/01/23 13:38:57 38230 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 250446 lastFlushOffset 241126
17/01/23 13:38:57 38230 DEBUG DFSClient: Queued packet 28
17/01/23 13:38:57 38230 DEBUG DFSClient: Waiting for ack for: 28
17/01/23 13:38:57 38230 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:28 offsetInBlock:240640 lastPacketInBlock:false lastByteOffsetInBlock: 250446
17/01/23 13:38:57 38231 DEBUG DFSClient: DFSClient seqno: 28 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:57 38233 DEBUG DAGScheduler: After removal of stage 2, remaining stages = 2
17/01/23 13:38:57 38233 DEBUG DAGScheduler: After removal of stage 1, remaining stages = 1
17/01/23 13:38:57 38233 DEBUG DAGScheduler: After removal of stage 0, remaining stages = 0
17/01/23 13:38:57 38234 INFO DAGScheduler: Job 0 failed: save at <console>:38, took 14.500397 s
17/01/23 13:38:57 38235 ERROR FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task creation failed: java.util.NoSuchElementException: head of empty list
java.util.NoSuchElementException: head of empty list
	at scala.collection.immutable.Nil$.head(List.scala:420)
	at scala.collection.immutable.Nil$.head(List.scala:417)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
	at org.apache.spark.scheduler.DAGScheduler.logDebug(DAGScheduler.scala:114)
	at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:965)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:963)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:963)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:766)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:765)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:765)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1237)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1658)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1444)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1432)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1431)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1431)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:972)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:766)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:765)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:765)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1237)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1658)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:629)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1934)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:492)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
	at $line25.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.saveDF(<console>:38)
	at $line38.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:59)
	at $line38.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:64)
	at $line38.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:66)
	at $line38.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:68)
	at $line38.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(<console>:70)
	at $line38.$read$$iw$$iw$$iw$$iw$$iw.<init>(<console>:72)
	at $line38.$read$$iw$$iw$$iw$$iw.<init>(<console>:74)
	at $line38.$read$$iw$$iw$$iw.<init>(<console>:76)
	at $line38.$read$$iw$$iw.<init>(<console>:78)
	at $line38.$read$$iw.<init>(<console>:80)
	at $line38.$read.<init>(<console>:82)
	at $line38.$read$.<init>(<console>:86)
	at $line38.$read$.<clinit>(<console>)
	at $line38.$eval$.$print$lzycompute(<console>:7)
	at $line38.$eval$.$print(<console>:6)
	at $line38.$eval.$print(<console>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
	at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
	at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
	at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
	at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$5$$anonfun$apply$6.apply(ILoop.scala:427)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$5$$anonfun$apply$6.apply(ILoop.scala:423)
	at scala.reflect.io.Streamable$Chars$class.applyReader(Streamable.scala:111)
	at scala.reflect.io.File.applyReader(File.scala:50)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$5.apply(ILoop.scala:423)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$interpretAllFrom$1$$anonfun$apply$5.apply(ILoop.scala:423)
	at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$interpretAllFrom$1.apply(ILoop.scala:422)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$interpretAllFrom$1.apply(ILoop.scala:422)
	at scala.tools.nsc.interpreter.ILoop.savingReader(ILoop.scala:96)
	at scala.tools.nsc.interpreter.ILoop.interpretAllFrom(ILoop.scala:421)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$run$3$1.apply(ILoop.scala:577)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$run$3$1.apply(ILoop.scala:576)
	at scala.tools.nsc.interpreter.ILoop.withFile(ILoop.scala:570)
	at scala.tools.nsc.interpreter.ILoop.run$3(ILoop.scala:576)
	at scala.tools.nsc.interpreter.ILoop.loadCommand(ILoop.scala:583)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$standardCommands$8.apply(ILoop.scala:207)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$standardCommands$8.apply(ILoop.scala:207)
	at scala.tools.nsc.interpreter.LoopCommands$LineCmd.apply(LoopCommands.scala:62)
	at scala.tools.nsc.interpreter.ILoop.colonCommand(ILoop.scala:688)
	at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:679)
	at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
	at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
	at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
	at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
	at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
	at org.apache.spark.repl.Main$.doMain(Main.scala:68)
	at org.apache.spark.repl.Main$.main(Main.scala:51)
	at org.apache.spark.repl.Main.main(Main.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:728)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:202)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:116)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.util.NoSuchElementException: head of empty list
	at scala.collection.immutable.Nil$.head(List.scala:420)
	at scala.collection.immutable.Nil$.head(List.scala:417)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
	at org.apache.spark.scheduler.DAGScheduler.logDebug(DAGScheduler.scala:114)
	at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:965)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:963)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:963)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:766)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:765)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:765)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1237)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1658)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
17/01/23 13:38:57 38238 DEBUG Client: The ping interval is 60000 ms.
17/01/23 13:38:57 38238 DEBUG Client: Connecting to flex11-40g0/10.40.0.11:9000
17/01/23 13:38:57 38239 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #112
17/01/23 13:38:57 38240 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo: starting, having connections 2
17/01/23 13:38:57 38240 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #112
17/01/23 13:38:57 38240 DEBUG ProtobufRpcEngine: Call: delete took 2ms
17/01/23 13:38:57 38241 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #113
17/01/23 13:38:57 38241 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #113
17/01/23 13:38:57 38241 DEBUG ProtobufRpcEngine: Call: delete took 0ms
17/01/23 13:38:57 38244 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=29, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=250368
17/01/23 13:38:57 38244 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 269558 lastFlushOffset 250446
17/01/23 13:38:57 38244 DEBUG DFSClient: Queued packet 29
17/01/23 13:38:57 38244 DEBUG DFSClient: Waiting for ack for: 29
17/01/23 13:38:57 38244 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:29 offsetInBlock:250368 lastPacketInBlock:false lastByteOffsetInBlock: 269558
17/01/23 13:38:57 38244 DEBUG DFSClient: DFSClient seqno: 29 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:38:57 38250 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=30, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=269312
17/01/23 13:38:57 38250 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 269670 lastFlushOffset 269558
17/01/23 13:38:57 38250 DEBUG DFSClient: Queued packet 30
17/01/23 13:38:57 38250 DEBUG DFSClient: Waiting for ack for: 30
17/01/23 13:38:57 38250 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:30 offsetInBlock:269312 lastPacketInBlock:false lastByteOffsetInBlock: 269670
17/01/23 13:38:57 38251 DEBUG DFSClient: DFSClient seqno: 30 status: SUCCESS downstreamAckTimeNanos: 0
org.apache.spark.SparkException: Job aborted.
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:147)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
  at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
  at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:492)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
  at saveDF(<console>:38)
  ... 73 elided
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task creation failed: java.util.NoSuchElementException: head of empty list
java.util.NoSuchElementException: head of empty list
	at scala.collection.immutable.Nil$.head(List.scala:420)
	at scala.collection.immutable.Nil$.head(List.scala:417)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
	at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
	at org.apache.spark.scheduler.DAGScheduler.logDebug(DAGScheduler.scala:114)
	at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:1535)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:965)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:963)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:963)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:766)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:765)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:765)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1237)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1658)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1444)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1432)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1431)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1431)
  at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:972)
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:766)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:765)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
  at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:765)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1237)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1658)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:629)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1934)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)
  at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
  ... 93 more
Caused by: java.util.NoSuchElementException: head of empty list
  at scala.collection.immutable.Nil$.head(List.scala:420)
  at scala.collection.immutable.Nil$.head(List.scala:417)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$getPreferredLocs$1.apply(DAGScheduler.scala:1535)
  at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
  at org.apache.spark.scheduler.DAGScheduler.logDebug(DAGScheduler.scala:114)
  at org.apache.spark.scheduler.DAGScheduler.getPreferredLocs(DAGScheduler.scala:1535)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:965)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$16.apply(DAGScheduler.scala:963)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
  at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:963)
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:766)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitWaitingChildStages$6.apply(DAGScheduler.scala:765)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
  at org.apache.spark.scheduler.DAGScheduler.submitWaitingChildStages(DAGScheduler.scala:765)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1237)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1658)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1616)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
e: Long = 1554494793078778
Total time: 19366 msec
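
The aborted save above bottoms out in `java.util.NoSuchElementException: head of empty list`, raised from `DAGScheduler.getPreferredLocs` (DAGScheduler.scala:1535) while ResultStage 2 was being submitted. As a minimal, hypothetical sketch (not code from this benchmark or from Spark itself), this is the exception Scala raises whenever `.head` is called on an empty `List`, e.g. an empty list of preferred locations:

```scala
// Minimal sketch, assuming only standard Scala collections:
// Nil.head is what produces "java.util.NoSuchElementException: head of empty list".
object HeadOfEmptyList {
  def main(args: Array[String]): Unit = {
    val locs: List[String] = Nil // e.g. no preferred locations known for a partition
    try {
      println(locs.head) // throws NoSuchElementException on an empty list
    } catch {
      case e: NoSuchElementException => println(s"caught: ${e.getMessage}")
    }
  }
}
```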

scala> 17/01/23 13:38:58 38993 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #114
17/01/23 13:38:58 38994 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #114
17/01/23 13:38:58 38994 DEBUG ProtobufRpcEngine: Call: renewLease took 1ms
17/01/23 13:38:58 38996 DEBUG LeaseRenewer: Lease renewed for client DFSClient_NONMAPREDUCE_-909600239_1
17/01/23 13:38:58 38996 DEBUG LeaseRenewer: Lease renewer daemon for [DFSClient_NONMAPREDUCE_-909600239_1] with renew id 1 executed
17/01/23 13:38:58 39006 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #115
17/01/23 13:38:58 39007 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #115
17/01/23 13:38:58 39007 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:38:59 40007 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #116
17/01/23 13:38:59 40008 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #116
17/01/23 13:38:59 40008 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:39:00 41008 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #117
17/01/23 13:39:00 41009 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #117
17/01/23 13:39:00 41009 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:39:01 42009 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #118
17/01/23 13:39:01 42010 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #118
17/01/23 13:39:01 42010 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:39:02 43010 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo sending #119
17/01/23 13:39:02 43011 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:8032 from demo got value #119
17/01/23 13:39:02 43011 DEBUG ProtobufRpcEngine: Call: getApplicationReport took 1ms
17/01/23 13:39:03 43799 INFO SparkContext: Invoking stop() from shutdown hook
17/01/23 13:39:03 43801 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=31, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=269312
17/01/23 13:39:03 43801 DEBUG DFSClient: DFSClient flush() : bytesCurBlock 269736 lastFlushOffset 269670
17/01/23 13:39:03 43801 DEBUG DFSClient: Queued packet 31
17/01/23 13:39:03 43801 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.server.Server@19489b27
17/01/23 13:39:03 43801 DEBUG DFSClient: Waiting for ack for: 31
17/01/23 13:39:03 43802 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:31 offsetInBlock:269312 lastPacketInBlock:false lastByteOffsetInBlock: 269736
17/01/23 13:39:03 43805 DEBUG DFSClient: DFSClient seqno: 31 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:39:03 43805 DEBUG Server: Graceful shutdown org.spark_project.jetty.server.Server@19489b27 by 
17/01/23 13:39:03 43806 DEBUG AbstractLifeCycle: stopping ServerConnector@202898d7{HTTP/1.1}{0.0.0.0:4040}
17/01/23 13:39:03 43806 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.server.ServerConnector$ServerConnectorManager@6048e26a
17/01/23 13:39:03 43806 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@27dc627a keys=0 selected=0
17/01/23 13:39:03 43806 DEBUG SelectorManager: Stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@27dc627a keys=0 selected=0
17/01/23 13:39:03 43806 DEBUG SelectorManager: Queued change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@6f53f0fc
17/01/23 13:39:03 43807 DEBUG SelectorManager: Selector loop woken up from select, 0/0 selected
17/01/23 13:39:03 43807 DEBUG SelectorManager: Running change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@6f53f0fc
17/01/23 13:39:03 43807 DEBUG SelectorManager: Stopped org.spark_project.jetty.io.SelectorManager$ManagedSelector@27dc627a keys=-1 selected=-1
17/01/23 13:39:03 43807 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.io.SelectorManager$ManagedSelector@27dc627a keys=-1 selected=-1
17/01/23 13:39:03 43807 DEBUG SelectorManager: Stopped Thread[SparkUI-67-selector-ServerConnectorManager@6048e26a/0,5,main] on org.spark_project.jetty.io.SelectorManager$ManagedSelector@27dc627a keys=-1 selected=-1
17/01/23 13:39:03 43807 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@525b1b70 keys=0 selected=0
17/01/23 13:39:03 43807 DEBUG SelectorManager: Stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@525b1b70 keys=0 selected=0
17/01/23 13:39:03 43807 DEBUG SelectorManager: Queued change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@608a6fcf
17/01/23 13:39:03 43807 DEBUG SelectorManager: Selector loop woken up from select, 0/0 selected
17/01/23 13:39:03 43807 DEBUG SelectorManager: Running change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@608a6fcf
17/01/23 13:39:03 43807 DEBUG SelectorManager: Stopped org.spark_project.jetty.io.SelectorManager$ManagedSelector@525b1b70 keys=-1 selected=-1
17/01/23 13:39:03 43807 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.io.SelectorManager$ManagedSelector@525b1b70 keys=-1 selected=-1
17/01/23 13:39:03 43807 DEBUG SelectorManager: Stopped Thread[SparkUI-68-selector-ServerConnectorManager@6048e26a/1,5,main] on org.spark_project.jetty.io.SelectorManager$ManagedSelector@525b1b70 keys=-1 selected=-1
17/01/23 13:39:03 43807 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@16d07cf3 keys=0 selected=0
17/01/23 13:39:03 43807 DEBUG SelectorManager: Stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@16d07cf3 keys=0 selected=0
17/01/23 13:39:03 43807 DEBUG SelectorManager: Queued change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@299a8bb6
17/01/23 13:39:03 43808 DEBUG SelectorManager: Selector loop woken up from select, 0/0 selected
17/01/23 13:39:03 43808 DEBUG SelectorManager: Running change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@299a8bb6
17/01/23 13:39:03 43808 DEBUG SelectorManager: Stopped org.spark_project.jetty.io.SelectorManager$ManagedSelector@16d07cf3 keys=-1 selected=-1
17/01/23 13:39:03 43808 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.io.SelectorManager$ManagedSelector@16d07cf3 keys=-1 selected=-1
17/01/23 13:39:03 43808 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@16f0ec18 keys=0 selected=0
17/01/23 13:39:03 43808 DEBUG SelectorManager: Stopping org.spark_project.jetty.io.SelectorManager$ManagedSelector@16f0ec18 keys=0 selected=0
17/01/23 13:39:03 43808 DEBUG SelectorManager: Stopped Thread[SparkUI-69-selector-ServerConnectorManager@6048e26a/2,5,main] on org.spark_project.jetty.io.SelectorManager$ManagedSelector@16d07cf3 keys=-1 selected=-1
17/01/23 13:39:03 43808 DEBUG SelectorManager: Queued change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@37fd33e5
17/01/23 13:39:03 43809 DEBUG SelectorManager: Selector loop woken up from select, 0/0 selected
17/01/23 13:39:03 43809 DEBUG SelectorManager: Running change org.spark_project.jetty.io.SelectorManager$ManagedSelector$Stop@37fd33e5
17/01/23 13:39:03 43809 DEBUG SelectorManager: Stopped org.spark_project.jetty.io.SelectorManager$ManagedSelector@16f0ec18 keys=-1 selected=-1
17/01/23 13:39:03 43809 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.io.SelectorManager$ManagedSelector@16f0ec18 keys=-1 selected=-1
17/01/23 13:39:03 43809 DEBUG SelectorManager: Stopped Thread[SparkUI-70-selector-ServerConnectorManager@6048e26a/3,5,main] on org.spark_project.jetty.io.SelectorManager$ManagedSelector@16f0ec18 keys=-1 selected=-1
17/01/23 13:39:03 43809 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.server.ServerConnector$ServerConnectorManager@6048e26a
17/01/23 13:39:03 43809 DEBUG AbstractLifeCycle: stopping HttpConnectionFactory@5382184b{HTTP/1.1}
17/01/23 13:39:03 43809 DEBUG AbstractLifeCycle: STOPPED HttpConnectionFactory@5382184b{HTTP/1.1}
17/01/23 13:39:03 43809 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.util.thread.ScheduledExecutorScheduler@317890ea
17/01/23 13:39:03 43809 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.util.thread.ScheduledExecutorScheduler@317890ea
17/01/23 13:39:03 43810 INFO ServerConnector: Stopped ServerConnector@202898d7{HTTP/1.1}{0.0.0.0:4040}
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: STOPPED ServerConnector@202898d7{HTTP/1.1}{0.0.0.0:4040}
17/01/23 13:39:03 43810 DEBUG AbstractHandler: stopping org.spark_project.jetty.server.Server@19489b27
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.server.handler.ContextHandlerCollection@3249e278[org.spark_project.jetty.servlets.gzip.GzipHandler@6fe595dc, org.spark_project.jetty.servlets.gzip.GzipHandler@5d318e91, org.spark_project.jetty.servlets.gzip.GzipHandler@766a52f5, org.spark_project.jetty.servlets.gzip.GzipHandler@7d8cf9ac, org.spark_project.jetty.servlets.gzip.GzipHandler@2921a36a, org.spark_project.jetty.servlets.gzip.GzipHandler@5190010f, org.spark_project.jetty.servlets.gzip.GzipHandler@4998e74b, org.spark_project.jetty.servlets.gzip.GzipHandler@433d9680, org.spark_project.jetty.servlets.gzip.GzipHandler@30bf26df, org.spark_project.jetty.servlets.gzip.GzipHandler@704067c6, org.spark_project.jetty.servlets.gzip.GzipHandler@2b08772d, org.spark_project.jetty.servlets.gzip.GzipHandler@159424e2, org.spark_project.jetty.servlets.gzip.GzipHandler@3b24087d, org.spark_project.jetty.servlets.gzip.GzipHandler@469a7575, org.spark_project.jetty.servlets.gzip.GzipHandler@5042e3d0, org.spark_project.jetty.servlets.gzip.GzipHandler@637791d, org.spark_project.jetty.servlets.gzip.GzipHandler@18b6d3c1, org.spark_project.jetty.servlets.gzip.GzipHandler@11d86b9d, org.spark_project.jetty.servlets.gzip.GzipHandler@800d065, org.spark_project.jetty.servlets.gzip.GzipHandler@5a0e0886, org.spark_project.jetty.servlets.gzip.GzipHandler@27f3f512, org.spark_project.jetty.servlets.gzip.GzipHandler@2ac519dc, org.spark_project.jetty.servlets.gzip.GzipHandler@3cc79c02, org.spark_project.jetty.servlets.gzip.GzipHandler@6b4125ed, org.spark_project.jetty.servlets.gzip.GzipHandler@33d60b7e, o.s.j.s.ServletContextHandler@7254838{/metrics/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@16df9bde{/SQL,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@44aeae34{/SQL/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@5560b64d{/SQL/execution,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@61ad30f6{/SQL/execution/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@76db9048{/static/sql,null,SHUTDOWN}]
17/01/23 13:39:03 43810 DEBUG AbstractHandler: stopping org.spark_project.jetty.server.handler.ContextHandlerCollection@3249e278[org.spark_project.jetty.servlets.gzip.GzipHandler@6fe595dc, org.spark_project.jetty.servlets.gzip.GzipHandler@5d318e91, org.spark_project.jetty.servlets.gzip.GzipHandler@766a52f5, org.spark_project.jetty.servlets.gzip.GzipHandler@7d8cf9ac, org.spark_project.jetty.servlets.gzip.GzipHandler@2921a36a, org.spark_project.jetty.servlets.gzip.GzipHandler@5190010f, org.spark_project.jetty.servlets.gzip.GzipHandler@4998e74b, org.spark_project.jetty.servlets.gzip.GzipHandler@433d9680, org.spark_project.jetty.servlets.gzip.GzipHandler@30bf26df, org.spark_project.jetty.servlets.gzip.GzipHandler@704067c6, org.spark_project.jetty.servlets.gzip.GzipHandler@2b08772d, org.spark_project.jetty.servlets.gzip.GzipHandler@159424e2, org.spark_project.jetty.servlets.gzip.GzipHandler@3b24087d, org.spark_project.jetty.servlets.gzip.GzipHandler@469a7575, org.spark_project.jetty.servlets.gzip.GzipHandler@5042e3d0, org.spark_project.jetty.servlets.gzip.GzipHandler@637791d, org.spark_project.jetty.servlets.gzip.GzipHandler@18b6d3c1, org.spark_project.jetty.servlets.gzip.GzipHandler@11d86b9d, org.spark_project.jetty.servlets.gzip.GzipHandler@800d065, org.spark_project.jetty.servlets.gzip.GzipHandler@5a0e0886, org.spark_project.jetty.servlets.gzip.GzipHandler@27f3f512, org.spark_project.jetty.servlets.gzip.GzipHandler@2ac519dc, org.spark_project.jetty.servlets.gzip.GzipHandler@3cc79c02, org.spark_project.jetty.servlets.gzip.GzipHandler@6b4125ed, org.spark_project.jetty.servlets.gzip.GzipHandler@33d60b7e, o.s.j.s.ServletContextHandler@7254838{/metrics/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@16df9bde{/SQL,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@44aeae34{/SQL/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@5560b64d{/SQL/execution,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@61ad30f6{/SQL/execution/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@76db9048{/static/sql,null,SHUTDOWN}]
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@33d60b7e
17/01/23 13:39:03 43810 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@33d60b7e
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@6f5d0190{/stages/stage/kill,null,SHUTDOWN}
17/01/23 13:39:03 43810 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@6f5d0190{/stages/stage/kill,null,UNAVAILABLE}
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@67332b1e
17/01/23 13:39:03 43810 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@67332b1e
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$4-7e34b127@ff7edc0e==org.apache.spark.ui.JettyUtils$$anon$4,-1,true
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$4-7e34b127@ff7edc0e==org.apache.spark.ui.JettyUtils$$anon$4,-1,true
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: stopping org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter-502a6ab9
17/01/23 13:39:03 43810 DEBUG AbstractLifeCycle: STOPPED org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter-502a6ab9
17/01/23 13:39:03 43811 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@67332b1e
17/01/23 13:39:03 43811 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@6f5d0190{/stages/stage/kill,null,UNAVAILABLE}
17/01/23 13:39:03 43811 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@6f5d0190{/stages/stage/kill,null,UNAVAILABLE}
17/01/23 13:39:03 43811 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@33d60b7e
17/01/23 13:39:03 43811 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@6b4125ed
17/01/23 13:39:03 43811 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@6b4125ed
17/01/23 13:39:03 43811 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@1980a3f{/jobs/job/kill,null,SHUTDOWN}
17/01/23 13:39:03 43811 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@1980a3f{/jobs/job/kill,null,UNAVAILABLE}
17/01/23 13:39:03 43811 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@67f63d26
17/01/23 13:39:03 43811 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@67f63d26
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$4-536b71b4@e2582c81==org.apache.spark.ui.JettyUtils$$anon$4,-1,true
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$4-536b71b4@e2582c81==org.apache.spark.ui.JettyUtils$$anon$4,-1,true
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@67f63d26
17/01/23 13:39:03 43812 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1980a3f{/jobs/job/kill,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@1980a3f{/jobs/job/kill,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@6b4125ed
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@3cc79c02
17/01/23 13:39:03 43812 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@3cc79c02
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@61ff6a49{/api,null,SHUTDOWN}
17/01/23 13:39:03 43812 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@61ff6a49{/api,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@18dd5ed3
17/01/23 13:39:03 43812 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@18dd5ed3
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler$Default404Servlet-5f61371d@87c6fd87==org.spark_project.jetty.servlet.ServletHandler$Default404Servlet,-1,false
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler$Default404Servlet-5f61371d@87c6fd87==org.spark_project.jetty.servlet.ServletHandler$Default404Servlet,-1,false
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.glassfish.jersey.servlet.ServletContainer-5f8d9767@d37fc22a==org.glassfish.jersey.servlet.ServletContainer,-1,false
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.glassfish.jersey.servlet.ServletContainer-5f8d9767@d37fc22a==org.glassfish.jersey.servlet.ServletContainer,-1,false
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@18dd5ed3
17/01/23 13:39:03 43812 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@61ff6a49{/api,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@61ff6a49{/api,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@3cc79c02
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@2ac519dc
17/01/23 13:39:03 43812 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@2ac519dc
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@15d0d6c9{/,null,SHUTDOWN}
17/01/23 13:39:03 43812 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@15d0d6c9{/,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@3ab35b9c
17/01/23 13:39:03 43812 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@3ab35b9c
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$4-7741d346@7fabb8f9==org.apache.spark.ui.JettyUtils$$anon$4,-1,true
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$4-7741d346@7fabb8f9==org.apache.spark.ui.JettyUtils$$anon$4,-1,true
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@3ab35b9c
17/01/23 13:39:03 43812 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@15d0d6c9{/,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@15d0d6c9{/,null,UNAVAILABLE}
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@2ac519dc
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@27f3f512
17/01/23 13:39:03 43812 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@27f3f512
17/01/23 13:39:03 43812 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@61ffd148{/static,null,SHUTDOWN}
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@61ffd148{/static,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@58324c9f
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@58324c9f
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.DefaultServlet-2251b3bc@188bcafd==org.spark_project.jetty.servlet.DefaultServlet,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.DefaultServlet-2251b3bc@188bcafd==org.spark_project.jetty.servlet.DefaultServlet,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@58324c9f
17/01/23 13:39:03 43813 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@61ffd148{/static,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@61ffd148{/static,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@27f3f512
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5a0e0886
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5a0e0886
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@2785db06{/executors/threadDump/json,null,SHUTDOWN}
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@2785db06{/executors/threadDump/json,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@79980d8d
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@79980d8d
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-35d60381@d4288e5c==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-35d60381@d4288e5c==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@79980d8d
17/01/23 13:39:03 43813 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2785db06{/executors/threadDump/json,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@2785db06{/executors/threadDump/json,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@5a0e0886
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@800d065
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@800d065
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@1cc8416a{/executors/threadDump,null,SHUTDOWN}
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@1cc8416a{/executors/threadDump,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@331ff3ac
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@331ff3ac
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-2e5e6fc4@9d88810==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-2e5e6fc4@9d88810==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@331ff3ac
17/01/23 13:39:03 43813 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1cc8416a{/executors/threadDump,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@1cc8416a{/executors/threadDump,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@800d065
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@11d86b9d
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@11d86b9d
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@7741771e{/executors/json,null,SHUTDOWN}
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@7741771e{/executors/json,null,UNAVAILABLE}
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@834e986
17/01/23 13:39:03 43813 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@834e986
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-6cae2e4d@8aa425a0==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-6cae2e4d@8aa425a0==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43813 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@834e986
17/01/23 13:39:03 43814 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7741771e{/executors/json,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@7741771e{/executors/json,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@11d86b9d
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@18b6d3c1
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@18b6d3c1
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@115dcaea{/executors,null,SHUTDOWN}
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@115dcaea{/executors,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@cfd1075
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@cfd1075
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-45117dd@f6dce78a==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-45117dd@f6dce78a==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@cfd1075
17/01/23 13:39:03 43814 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@115dcaea{/executors,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@115dcaea{/executors,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@18b6d3c1
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@637791d
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@637791d
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@996a546{/environment/json,null,SHUTDOWN}
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@996a546{/environment/json,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@4fc165f6
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@4fc165f6
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-545b5ed0@213ff8b4==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-545b5ed0@213ff8b4==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@4fc165f6
17/01/23 13:39:03 43814 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@996a546{/environment/json,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@996a546{/environment/json,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@637791d
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5042e3d0
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5042e3d0
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@5c134052{/environment,null,SHUTDOWN}
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@5c134052{/environment,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@69de5bed
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@69de5bed
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-750f64fe@1d986d7d==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-750f64fe@1d986d7d==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@69de5bed
17/01/23 13:39:03 43814 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5c134052{/environment,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@5c134052{/environment,null,UNAVAILABLE}
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@5042e3d0
17/01/23 13:39:03 43814 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@469a7575
17/01/23 13:39:03 43814 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@469a7575
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@41f4039e{/storage/rdd/json,null,SHUTDOWN}
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@41f4039e{/storage/rdd/json,null,UNAVAILABLE}
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@5ff00507
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@5ff00507
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-603cabc4@b24093c8==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-603cabc4@b24093c8==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@5ff00507
17/01/23 13:39:03 43815 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@41f4039e{/storage/rdd/json,null,UNAVAILABLE}
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@41f4039e{/storage/rdd/json,null,UNAVAILABLE}
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@469a7575
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@3b24087d
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@3b24087d
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@46fb0c33{/storage/rdd,null,SHUTDOWN}
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@46fb0c33{/storage/rdd,null,UNAVAILABLE}
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@3b009e7b
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@3b009e7b
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-270a620@9081d292==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-270a620@9081d292==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@3b009e7b
17/01/23 13:39:03 43815 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@46fb0c33{/storage/rdd,null,UNAVAILABLE}
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@46fb0c33{/storage/rdd,null,UNAVAILABLE}
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@3b24087d
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@159424e2
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@159424e2
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@738d37fc{/storage/json,null,SHUTDOWN}
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@738d37fc{/storage/json,null,UNAVAILABLE}
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@6fa2448b
17/01/23 13:39:03 43815 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@6fa2448b
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-61bb1e4d@3736b0ab==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-61bb1e4d@3736b0ab==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43815 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@6fa2448b
17/01/23 13:39:03 43816 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@738d37fc{/storage/json,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@738d37fc{/storage/json,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@159424e2
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@2b08772d
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@2b08772d
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@34451ed8{/storage,null,SHUTDOWN}
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@34451ed8{/storage,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@c1050f2
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@c1050f2
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-67bb4dcd@749d733e==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-67bb4dcd@749d733e==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@c1050f2
17/01/23 13:39:03 43816 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@34451ed8{/storage,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@34451ed8{/storage,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@2b08772d
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@704067c6
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@704067c6
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@31973858{/stages/pool/json,null,SHUTDOWN}
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@31973858{/stages/pool/json,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@65514add
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@65514add
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-79e7188e@4909a654==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-79e7188e@4909a654==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@65514add
17/01/23 13:39:03 43816 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@31973858{/stages/pool/json,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@31973858{/stages/pool/json,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@704067c6
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@30bf26df
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@30bf26df
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@7364eed1{/stages/pool,null,SHUTDOWN}
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@7364eed1{/stages/pool,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@684e8c9d
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@684e8c9d
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-6ecc02bb@f7bc7bbd==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-6ecc02bb@f7bc7bbd==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@684e8c9d
17/01/23 13:39:03 43816 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7364eed1{/stages/pool,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@7364eed1{/stages/pool,null,UNAVAILABLE}
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@30bf26df
17/01/23 13:39:03 43816 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@433d9680
17/01/23 13:39:03 43816 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@433d9680
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@39dec536{/stages/stage/json,null,SHUTDOWN}
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@39dec536{/stages/stage/json,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@4a1a256d
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@4a1a256d
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-4cb957b8@ba2b502a==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-4cb957b8@ba2b502a==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@4a1a256d
17/01/23 13:39:03 43817 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@39dec536{/stages/stage/json,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@39dec536{/stages/stage/json,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@433d9680
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@4998e74b
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@4998e74b
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@41b64020{/stages/stage,null,SHUTDOWN}
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@41b64020{/stages/stage,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@1a538ed8
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@1a538ed8
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-78910096@c8b91042==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-78910096@c8b91042==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@1a538ed8
17/01/23 13:39:03 43817 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@41b64020{/stages/stage,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@41b64020{/stages/stage,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@4998e74b
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5190010f
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5190010f
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@15fb4566{/stages/json,null,SHUTDOWN}
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@15fb4566{/stages/json,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@25ffd826
17/01/23 13:39:03 43817 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@25ffd826
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-29896529@f4f81aba==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-29896529@f4f81aba==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@25ffd826
17/01/23 13:39:03 43817 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@15fb4566{/stages/json,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@15fb4566{/stages/json,null,UNAVAILABLE}
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@5190010f
17/01/23 13:39:03 43817 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@2921a36a
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@2921a36a
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@606a1bc4{/stages,null,SHUTDOWN}
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@606a1bc4{/stages,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@6a15b73
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@6a15b73
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-44dc7b7d@9a180c3==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-44dc7b7d@9a180c3==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@6a15b73
17/01/23 13:39:03 43818 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@606a1bc4{/stages,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@606a1bc4{/stages,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@2921a36a
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@7d8cf9ac
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@7d8cf9ac
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@65ef48f2{/jobs/job/json,null,SHUTDOWN}
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@65ef48f2{/jobs/job/json,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@36068727
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@36068727
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-72543547@84ac000b==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-72543547@84ac000b==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@36068727
17/01/23 13:39:03 43818 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@65ef48f2{/jobs/job/json,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@65ef48f2{/jobs/job/json,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@7d8cf9ac
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@766a52f5
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@766a52f5
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@6436e181{/jobs/job,null,SHUTDOWN}
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@6436e181{/jobs/job,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@7186b202
17/01/23 13:39:03 43818 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@7186b202
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-6b649efa@9ae3c9d==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-6b649efa@9ae3c9d==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@7186b202
17/01/23 13:39:03 43818 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@6436e181{/jobs/job,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@6436e181{/jobs/job,null,UNAVAILABLE}
17/01/23 13:39:03 43818 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@766a52f5
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5d318e91
17/01/23 13:39:03 43819 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@5d318e91
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@52559a69{/jobs/json,null,SHUTDOWN}
17/01/23 13:39:03 43819 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@52559a69{/jobs/json,null,UNAVAILABLE}
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@285583d4
17/01/23 13:39:03 43819 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@285583d4
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-1039bfc4@a88b5ba6==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-1039bfc4@a88b5ba6==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@285583d4
17/01/23 13:39:03 43819 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@52559a69{/jobs/json,null,UNAVAILABLE}
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@52559a69{/jobs/json,null,UNAVAILABLE}
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@5d318e91
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@6fe595dc
17/01/23 13:39:03 43819 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlets.gzip.GzipHandler@6fe595dc
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping o.s.j.s.ServletContextHandler@c02670f{/jobs,null,SHUTDOWN}
17/01/23 13:39:03 43819 DEBUG AbstractHandler: stopping o.s.j.s.ServletContextHandler@c02670f{/jobs,null,UNAVAILABLE}
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.servlet.ServletHandler@71179b6f
17/01/23 13:39:03 43819 DEBUG AbstractHandler: stopping org.spark_project.jetty.servlet.ServletHandler@71179b6f
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping org.apache.spark.ui.JettyUtils$$anon$3-627ff1b8@230c4118==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.apache.spark.ui.JettyUtils$$anon$3-627ff1b8@230c4118==org.apache.spark.ui.JettyUtils$$anon$3,-1,true
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlet.ServletHandler@71179b6f
17/01/23 13:39:03 43819 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@c02670f{/jobs,null,UNAVAILABLE}
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED o.s.j.s.ServletContextHandler@c02670f{/jobs,null,UNAVAILABLE}
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.servlets.gzip.GzipHandler@6fe595dc
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.server.handler.ContextHandlerCollection@3249e278[org.spark_project.jetty.servlets.gzip.GzipHandler@6fe595dc, org.spark_project.jetty.servlets.gzip.GzipHandler@5d318e91, org.spark_project.jetty.servlets.gzip.GzipHandler@766a52f5, org.spark_project.jetty.servlets.gzip.GzipHandler@7d8cf9ac, org.spark_project.jetty.servlets.gzip.GzipHandler@2921a36a, org.spark_project.jetty.servlets.gzip.GzipHandler@5190010f, org.spark_project.jetty.servlets.gzip.GzipHandler@4998e74b, org.spark_project.jetty.servlets.gzip.GzipHandler@433d9680, org.spark_project.jetty.servlets.gzip.GzipHandler@30bf26df, org.spark_project.jetty.servlets.gzip.GzipHandler@704067c6, org.spark_project.jetty.servlets.gzip.GzipHandler@2b08772d, org.spark_project.jetty.servlets.gzip.GzipHandler@159424e2, org.spark_project.jetty.servlets.gzip.GzipHandler@3b24087d, org.spark_project.jetty.servlets.gzip.GzipHandler@469a7575, org.spark_project.jetty.servlets.gzip.GzipHandler@5042e3d0, org.spark_project.jetty.servlets.gzip.GzipHandler@637791d, org.spark_project.jetty.servlets.gzip.GzipHandler@18b6d3c1, org.spark_project.jetty.servlets.gzip.GzipHandler@11d86b9d, org.spark_project.jetty.servlets.gzip.GzipHandler@800d065, org.spark_project.jetty.servlets.gzip.GzipHandler@5a0e0886, org.spark_project.jetty.servlets.gzip.GzipHandler@27f3f512, org.spark_project.jetty.servlets.gzip.GzipHandler@2ac519dc, org.spark_project.jetty.servlets.gzip.GzipHandler@3cc79c02, org.spark_project.jetty.servlets.gzip.GzipHandler@6b4125ed, org.spark_project.jetty.servlets.gzip.GzipHandler@33d60b7e, o.s.j.s.ServletContextHandler@7254838{/metrics/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@16df9bde{/SQL,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@44aeae34{/SQL/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@5560b64d{/SQL/execution,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@61ad30f6{/SQL/execution/json,null,SHUTDOWN}, o.s.j.s.ServletContextHandler@76db9048{/static/sql,null,SHUTDOWN}]
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping org.spark_project.jetty.server.handler.ErrorHandler@2047981
17/01/23 13:39:03 43819 DEBUG AbstractHandler: stopping org.spark_project.jetty.server.handler.ErrorHandler@2047981
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.server.handler.ErrorHandler@2047981
17/01/23 13:39:03 43819 DEBUG AbstractLifeCycle: stopping SparkUI{STARTED,8<=8<=200,i=8,q=0}
17/01/23 13:39:03 43820 DEBUG AbstractLifeCycle: STOPPED SparkUI{STOPPED,8<=8<=200,i=0,q=0}
17/01/23 13:39:03 43821 DEBUG AbstractLifeCycle: STOPPED org.spark_project.jetty.server.Server@19489b27
17/01/23 13:39:03 43821 INFO SparkUI: Stopped Spark web UI at http://10.40.0.24:4040
17/01/23 13:39:03 43825 DEBUG DFSClient: DFSClient writeChunk allocating new packet seqno=32, src=/shared-spark-logs/application_1483626889488_0446.inprogress, packetSize=65532, chunksPerPacket=127, bytesCurBlock=269312
17/01/23 13:39:03 43826 DEBUG DFSClient: Queued packet 32
17/01/23 13:39:03 43826 DEBUG DFSClient: Queued packet 33
17/01/23 13:39:03 43826 DEBUG DFSClient: Waiting for ack for: 33
17/01/23 13:39:03 43826 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:32 offsetInBlock:269312 lastPacketInBlock:false lastByteOffsetInBlock: 269736
17/01/23 13:39:03 43826 DEBUG DFSClient: DFSClient seqno: 32 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:39:03 43826 DEBUG DFSClient: DataStreamer block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779 sending packet packet seqno:33 offsetInBlock:269736 lastPacketInBlock:true lastByteOffsetInBlock: 269736
17/01/23 13:39:03 43828 DEBUG DFSClient: DFSClient seqno: 33 status: SUCCESS downstreamAckTimeNanos: 0
17/01/23 13:39:03 43828 DEBUG DFSClient: Closing old block BP-2068108911-10.40.0.11-1475495593264:blk_1073963313_222779
17/01/23 13:39:03 43828 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #120
17/01/23 13:39:03 43829 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #120
17/01/23 13:39:03 43829 DEBUG ProtobufRpcEngine: Call: complete took 1ms
17/01/23 13:39:03 43830 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #121
17/01/23 13:39:03 43830 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #121
17/01/23 13:39:03 43830 DEBUG ProtobufRpcEngine: Call: getFileInfo took 0ms
17/01/23 13:39:03 43832 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #122
17/01/23 13:39:03 43833 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #122
17/01/23 13:39:03 43833 DEBUG ProtobufRpcEngine: Call: rename took 1ms
17/01/23 13:39:03 43836 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo sending #123
17/01/23 13:39:03 43836 DEBUG Client: IPC Client (1726490536) connection to flex11-40g0/10.40.0.11:9000 from demo got value #123
17/01/23 13:39:03 43837 DEBUG ProtobufRpcEngine: Call: setTimes took 1ms
17/01/23 13:39:03 43840 INFO YarnClientSchedulerBackend: Interrupting monitor thread
17/01/23 13:39:03 43852 INFO YarnClientSchedulerBackend: Shutting down all executors
17/01/23 13:39:03 43853 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
17/01/23 13:39:03 43856 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
17/01/23 13:39:03 43857 DEBUG AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state STOPPED
17/01/23 13:39:03 43857 DEBUG Client: stopping client from cache: org.apache.hadoop.ipc.Client@7c7e73c5
17/01/23 13:39:03 43858 INFO YarnClientSchedulerBackend: Stopped
17/01/23 13:39:03 43864 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/01/23 13:39:03 43876 INFO MemoryStore: MemoryStore cleared
17/01/23 13:39:03 43877 INFO BlockManager: BlockManager stopped
17/01/23 13:39:03 43877 INFO BlockManagerMaster: BlockManagerMaster stopped
17/01/23 13:39:03 43880 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/01/23 13:39:03 43883 INFO SparkContext: Successfully stopped SparkContext
17/01/23 13:39:03 43884 INFO ShutdownHookManager: Shutdown hook called
17/01/23 13:39:03 43885 INFO ShutdownHookManager: Deleting directory /tmp/spark-c0d2b32c-12b9-4085-9194-58e29b41f592
17/01/23 13:39:03 43914 INFO ShutdownHookManager: Deleting directory /tmp/spark-c0d2b32c-12b9-4085-9194-58e29b41f592/repl-7b5d13cd-e9d8-4a3b-8709-25e4502c8126
17/01/23 13:39:03 43915 DEBUG Client: stopping client from cache: org.apache.hadoop.ipc.Client@7c7e73c5
demo@flex24:~/zac-deployment/spark-2.0.0$ vim conf/log4j.properties
demo@flex24:~/zac-deployment/spark-2.0.0$ 
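The trailing lines above show the driver being torn down by the JVM shutdown hook (web UI stopped, YARN backend stopped, event log flushed to /shared-spark-logs) and the operator then opening conf/log4j.properties, presumably to tune the DEBUG verbosity captured in this log. As a minimal sketch only (assuming the standard `spark` SparkSession provided by spark-shell, not anything specific to this deployment), the same two concerns can also be handled directly from the REPL:

```scala
// Sketch, not part of the captured session: assumes the `spark` SparkSession
// that spark-shell creates by default.

// Lower driver-side log verbosity at runtime instead of editing log4j.properties.
spark.sparkContext.setLogLevel("INFO")

// Stop the SparkContext explicitly so teardown (Spark UI, YARN scheduler backend,
// event-log flush) runs deliberately rather than from the JVM shutdown hook.
spark.stop()
```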