
I have written a function in Scala Spark that looks like this:

def prepareSequences(data: RDD[String], splitChar: Char = '\t') = {
  val x = data.map { line =>
    val Array(id, se, offset, hour) = line.split(splitChar)
    (id + "-" + se,
      Step(offset = if (offset == "NULL") -5 else offset.toInt,
           hour = hour.toInt))
  }

  val y = x.groupBy(_._1)
}
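
For reference, the per-line parsing inside `data.map` can be exercised without a cluster. `Step` is assumed here to be a simple case class with `offset` and `hour` fields, inferred from how the question uses it; this is a sketch, not my actual class:

```scala
// Assumed shape of Step, inferred from the question's usage.
case class Step(offset: Int, hour: Int)

// The same parsing logic as inside data.map, extracted for local testing.
def parseLine(line: String, splitChar: Char = '\t'): (String, Step) = {
  val Array(id, se, offset, hour) = line.split(splitChar)
  val off = if (offset == "NULL") -5 else offset.toInt // NULL offsets become -5
  (id + "-" + se, Step(offset = off, hour = hour.toInt))
}
```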

I need the groupBy, but as soon as I add it I get an error asking for LzoCodec:

        Exception in thread "main" java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
    at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:188)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
    at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:687)
    at org.apache.spark.rdd.RDD$$anonfun$groupBy$1.apply(RDD.scala:687)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.groupBy(RDD.scala:686)
    at com.savagebeast.mypackage.DataPreprocessing$.prepareSequences(DataPreprocessing.scala:42)
    at com.savagebeast.mypackage.activity_mapper$.main(activity_mapper.scala:31)
    at com.savagebeast.mypackage.activity_mapper.main(activity_mapper.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
    ... 44 more
    Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:139)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:180)
    at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
    ... 49 more
    Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:132)
    ... 51 more

I installed LZO and the other required pieces following this question: Class com.hadoop.compression.lzo.LzoCodec not found for Spark on CDH 5?

Am I missing something?

UPDATE: Found solution.

Partitioning the RDD like this resolved the problem.

val y = x.groupByKey(50)

50 is the number of partitions I want for the RDD. It can be any number.

However, I am not sure why this worked. Will appreciate if someone could explain.
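
One plausible explanation, hedged (I have not traced this against your exact Spark version): `groupBy` without a partition count goes through `Partitioner.defaultPartitioner`, which, per the stack trace (`Partitioner.scala:66`), inspects the partitions of the input RDDs on the driver and thereby forces Hadoop to configure the input format (and load its compression codecs) immediately. Passing an explicit partition count or partitioner builds a `HashPartitioner` directly, so that eager inspection is skipped. Sketch of equivalent calls, assuming `x: RDD[(String, Step)]` from the function above and a live SparkContext:

```scala
// Sketch only; each of these supplies the partitioner explicitly.
val byKey  = x.groupByKey(50)                  // explicit partition count (the fix above)
val byFun  = x.groupBy(_._1, 50)               // same idea via the groupBy overload
val byPart = x.groupByKey(new org.apache.spark.HashPartitioner(50))
```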

UPDATE-2: The following is a more sensible fix and has been stable so far.

I copied hadoop-lzo-0.4.21-SNAPSHOT.jar from /Users/<username>/hadoop-lzo/target to /usr/local/Cellar/apache-spark/2.1.0/libexec/jars, essentially putting the jar on Spark's classpath.
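
An alternative to copying the jar into the install directory, sketched here (the application jar name is illustrative; the hadoop-lzo path and main class are taken from the question and stack trace), is to pass it at submit time with `--jars`:

```shell
# Sketch: adjust paths to where your hadoop-lzo jar and application jar live.
spark-submit \
  --jars /Users/<username>/hadoop-lzo/target/hadoop-lzo-0.4.21-SNAPSHOT.jar \
  --class com.savagebeast.mypackage.activity_mapper \
  myapp.jar
```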

1 Answer


No, it is not required by groupBy. If you take a look at the stack trace (kudos for posting it), you'll see it fails somewhere in the input format:

at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)

This suggests that your input is compressed. It fails when you call groupBy because this is the point where Spark has to decide on the number of partitions and, to do that, touch the input.

In practice, yes: it seems you need the LZO codec to execute your job.

  • Thanks! Is `data.map` causing the compression? I'm new to Scala. Earlier I did a similar operation on a different problem and it worked fine. I'm not sure which part here is causing the compression, and whether I can fix it so that `groupBy` works. – inferno Jan 31 '18 at 16:11
  • I was able to resolve it (added the fix to the question). Would you know why this worked? Thanks. – inferno Jan 31 '18 at 19:58