Twitter analysis is broken; see also: https://github.com/json4s/json4s/issues/496 #197

Closed
ruebot opened this issue Apr 16, 2018 · 3 comments
ruebot commented Apr 16, 2018

If you try to use the Twitter analysis functionality, it fails with the error pasted below.

The output looks to be what is described in json4s/json4s#496. At some point while updating our dependencies, this functionality broke.

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.1
      /_/
         
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_161)
Type in expressions to have them evaluated.
Type :help for more information.

scala> :paste
// Entering paste mode (ctrl-D to finish)

import io.archivesunleashed._
import io.archivesunleashed.matchbox._
import io.archivesunleashed.util._

val tweets = RecordLoader.loadTweets("/home/nruest/Dropbox/donald_search_2018_02_01.json.gz", sc)

tweets.count()

// Exiting paste mode, now interpreting.

2018-04-13 09:50:24,559 [Executor task launch worker for task 0] ERROR Executor - Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.parse$default$3()Z
	at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
	at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1760)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2018-04-13 09:50:24,598 [task-result-getter-0] WARN  TaskSetManager - Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.parse$default$3()Z
	at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
	at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1760)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

2018-04-13 09:50:24,602 [task-result-getter-0] ERROR TaskSetManager - Task 0 in stage 0.0 failed 1 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.parse$default$3()Z
	at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
	at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
	at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1760)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
	at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
  at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
  ... 53 elided
Caused by: java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.parse$default$3()Z
  at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
  at io.archivesunleashed.package$RecordLoader$$anonfun$loadTweets$2.apply(package.scala:60)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
  at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1760)
  at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
  at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1158)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1951)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:99)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

scala> 
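The NoSuchMethodError above points at the synthetic method scalac generates for the third default argument of JsonMethods.parse. That third parameter (useBigIntForLong) only exists in newer json4s releases (3.5.x), while Spark 2.1.1 ships json4s 3.2.11 on the runtime classpath, so code compiled against the newer API can't find the method at run time. A minimal sketch of the kind of json4s call loadTweets makes at package.scala:60 (the sample JSON line is made up; this is not the actual implementation):

import org.json4s._
import org.json4s.jackson.JsonMethods._

// Compiled against json4s 3.5.x, parse() has a third default argument,
// so scalac emits a call to parse$default$3(). At run time, Spark's
// bundled json4s 3.2.11 has no such method, hence the NoSuchMethodError.
val line = """{"id_str": "1", "text": "hello"}"""
val json: JValue = parse(line)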
@ruebot ruebot added the bug label Apr 16, 2018
ruebot commented Apr 26, 2018

If you need Twitter analysis, you'll need to use the 0.10.0 release of aut.

Example usage from that release:

import io.archivesunleashed.spark.matchbox._
import io.archivesunleashed.spark.matchbox.TweetUtils._
import io.archivesunleashed.spark.rdd.RecordRDD._

val tweets = RecordLoader.loadTweets("/home/nruest/Dropbox/donald_search_2018_02_01.json.gz", sc)
tweets.count()

// Exiting paste mode, now interpreting.

import io.archivesunleashed.spark.matchbox._
import io.archivesunleashed.spark.matchbox.TweetUtils._
import io.archivesunleashed.spark.rdd.RecordRDD._
tweets: org.apache.spark.rdd.RDD[org.json4s.JValue] = MapPartitionsRDD[4] at filter at RecordLoader.scala:50
res0: Long = 860861
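Since loadTweets in that release hands back an RDD[JValue] (as the tweets: line above shows), you can also pull fields straight out of the json4s AST if the TweetUtils helpers don't cover something. A rough sketch, assuming the records follow the standard Twitter JSON layout with a top-level text field:

import org.json4s._

// tweets is the RDD[org.json4s.JValue] returned by RecordLoader.loadTweets.
// Keep only records that actually carry a string "text" field.
val texts = tweets.flatMap { json =>
  json \ "text" match {
    case JString(t) => Some(t)
    case _          => None
  }
}

texts.take(5).foreach(println)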

lintool commented Apr 26, 2018

Found another potential lead: json4s/json4s#316

tl;dr: try downgrading to json4s 3.2.11
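For anyone hitting this in their own build, the downgrade amounts to forcing the json4s version that Spark 2.1.1 itself ships. A sketch for an sbt build (aut's own build uses Maven, so the real fix lands in the pom; the sbt form is just illustrative):

// build.sbt -- pin json4s-jackson to the version bundled with Spark 2.1.1,
// so the compile-time and run-time signatures of JsonMethods.parse match.
dependencyOverrides += "org.json4s" %% "json4s-jackson" % "3.2.11"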

ruebot commented Apr 27, 2018

Resolved with 44b58a7

@ruebot ruebot closed this as completed Apr 27, 2018