
I am new to Spark, using Scala 2.12.8 with Spark 2.4.0, and I'm trying to use the Random Forest classifier in Spark MLlib. I can build and train the classifier, and it can predict if I call `first()` on the resulting RDD. However, if I call `take(n)` instead, I get a large, ugly stack trace. Does anyone know what I'm doing wrong? The error occurs on the line `.take(3)`; I am aware that this is the first action I'm performing on that RDD. If anyone can explain why it's failing and how to fix it, I would be really grateful.

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object ItsABreeze {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession
      .builder()
      .appName("test")
      .getOrCreate()

    // Load the data file (LIBSVM format) as an RDD[LabeledPoint]
    val data: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(spark.sparkContext, "file.svm")

    // Split the data into training and test sets (30% held out for testing)
    val splits: Array[RDD[LabeledPoint]] = data.randomSplit(Array(0.7, 0.3))
    val (trainingData, testData) = (splits(0), splits(1))

    // Train a RandomForest model.
    // Empty categoricalFeaturesInfo indicates all features are continuous
    val numClasses = 4
    val categoricalFeaturesInfo = Map[Int, Int]()
    val numTrees = 3
    val featureSubsetStrategy = "auto"
    val impurity = "gini"
    val maxDepth = 5
    val maxBins = 32

    val model: RandomForestModel = RandomForest.trainClassifier(
      trainingData,
      numClasses,
      categoricalFeaturesInfo,
      numTrees,
      featureSubsetStrategy,
      impurity,
      maxDepth,
      maxBins
    )

    // Predict on a few test points; take(3) is the call that fails
    testData
      .map((point: LabeledPoint) => model.predict(point.features))
      .take(3)
      .foreach(println)

    spark.stop()
  }
}

The top portion of the stack trace follows:

java.io.IOException: unexpected exception type
    at java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1736)
    at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1266)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2078)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.lang.invoke.SerializedLambda.readResolve(SerializedLambda.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1260)
    ... 25 more
Caused by: java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: scala/runtime/LambdaDeserialize
    at ItsABreeze$.$deserializeLambda$(ItsABreeze.scala)
    ... 35 more
Caused by: java.lang.NoClassDefFoundError: scala/runtime/LambdaDeserialize
    ... 36 more
Caused by: java.lang.ClassNotFoundException: scala.runtime.LambdaDeserialize
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    This is very likely a Scala version issue. See the following - https://stackoverflow.com/questions/47172122/classnotfoundexception-scala-runtime-lambdadeserialize-when-spark-submit – DemetriKots Feb 26 '19 at 21:01
  • Possible duplicate of [Resolving dependency problems in Apache Spark](https://stackoverflow.com/questions/41383460/resolving-dependency-problems-in-apache-spark) – 10465355 Feb 26 '19 at 21:25
  • @DemetriKots was correct; it was a versioning issue. I scrapped the project and rebuilt it using Scala 2.11.12 with the associated resolvers, and the code ran just as it was. Thank you for your help, and my apologies for being slow to respond. This is the first question I've posted to SO, I believe. – The_Mad_Geometer Mar 07 '19 at 23:34

1 Answer


The code I was trying to run was a slightly modified version of the classification example in the Spark Machine Learning Library (MLlib) documentation.

Both commenters on my original question were correct: it was a version mismatch. I changed the Scala version from 2.12.8 to 2.11.12, reverted Spark to 2.2.1, and the code ran just as it was.
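
For reference, here is a minimal sketch of a build configuration matching that fix, assuming sbt as the build tool (the question doesn't show my actual build file, so the project name and the `provided` scoping are illustrative):

// Hypothetical build.sbt matching the working setup described above.
// Spark 2.2.1 has no Scala 2.12 artifacts, so the application must be
// compiled with a 2.11.x compiler to match the Spark runtime.
name := "its-a-breeze"
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // %% appends the Scala binary version, resolving spark-sql_2.11 etc.
  "org.apache.spark" %% "spark-sql"   % "2.2.1" % "provided",
  "org.apache.spark" %% "spark-mllib" % "2.2.1" % "provided"
)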

For anyone watching this question who is qualified to answer, here is a followup: Spark 2.4.0 claims to have new, experimental support for Scala 2.12.x. Are there many known issues with the 2.12.x support?
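
For context on where my original setup likely went wrong (my understanding, not something I have verified exhaustively): Spark 2.4.0 does publish `_2.12` artifacts, but the `spark-submit` runtime on the cluster must itself be a Scala 2.12 build, because `scala.runtime.LambdaDeserialize` (the missing class in the trace above) only exists in the Scala 2.12 library. A hypothetical sbt fragment for trying the 2.12 support would look like:

// Hypothetical build.sbt fragment for Spark 2.4.0 with Scala 2.12.
// This only works if the cluster's Spark distribution is also built for
// Scala 2.12; a 2.11-built runtime cannot deserialize 2.12 lambdas and
// fails with the ClassNotFoundException shown in the question.
scalaVersion := "2.12.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"   % "2.4.0" % "provided",
  "org.apache.spark" %% "spark-mllib" % "2.4.0" % "provided"
)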