
I am trying to compile and run a WordCount program in Scala from the command line, without any Maven or sbt support. The command that I am using to compile the Scala program is

scalac -classpath /spark-2.3.0-bin-hadoop2.7/jars/ Wordcount.scala

import org.apache.spark._
import org.apache.spark.SparkConf

/** Create an RDD of lines from a text file, and keep count of
 *  how often each word appears.
 */
object wordcount {

  def main(args: Array[String]) {
      // Set up a SparkContext named WordCount that runs locally using
      // all available cores.
      val conf = new SparkConf().setAppName("WordCount")
      conf.setMaster("local[*]")
      val sc = new SparkContext(conf)

      // Read the input file (placeholder path), split each line into
      // words, and count how often each word appears.
      val counts = sc.textFile("input.txt")
        .flatMap(_.split("\\s+"))
        .map(word => (word, 1))
        .reduceByKey(_ + _)
      counts.collect().foreach(println)

      sc.stop()
  }
}

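As an aside, the counting logic itself is independent of the Spark classpath problem and can be sketched with plain Scala collections; `WordCountSketch` is a hypothetical name used only for illustration:

```scala
object WordCountSketch {
  // Count word occurrences with plain collections; groupBy + size
  // plays the role of Spark's map(word -> 1) followed by reduceByKey(_ + _).
  def counts(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .groupBy(identity)
      .map { case (word, occurrences) => (word, occurrences.size) }

  def main(args: Array[String]): Unit =
    println(counts(Seq("to be or not to be")))
}
```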
MY RESEARCH: I have referred to the Spark source code and found that the imported classes are present in the required jars.
For example, SparkConf is in package org.apache.spark, which matches the import in the program.

https://github.com/apache/spark/blob/v2.3.1/core/src/main/scala/org/apache/spark/SparkConf.scala

ERRORS I AM FACING:

Wordcount.scala:3: error: object apache is not a member of package org
import org.apache.spark._
           ^

Wordcount.scala:4: error: object apache is not a member of package org
import org.apache.spark.SparkConf
           ^

Wordcount.scala:14: error: not found: type SparkConf
      val conf = new SparkConf().setAppName("WordCount")
                     ^

Wordcount.scala:16: error: not found: type SparkContext
      val sc = new SparkContext(conf)
                   ^

four errors found

  • The path is correct. I have verified this on my side many times using ls -la – LearneriOS Jul 18 '18 at 03:00
  • @RameshMaharjan This is my source code: https://gist.github.com/uutkarshsingh/b6c7694cf09507c3df1ec066093d6687 Is it possible for you to verify on your side whether you get the same error, using the appropriate classpath? – LearneriOS Jul 18 '18 at 03:03
  • I am following a video tutorial for this, and it compiles and executes the code using the Scala IDE itself. Do you mean that the IDE creates and runs a jar rather than only compiling and running? spark-submit may be one way to do it. – LearneriOS Jul 18 '18 at 03:43

1 Answer

Try this:

scalac -classpath "/spark-2.3.0-bin-hadoop2.7/jars/*" Wordcount.scala

There was a problem with the scalac command mentioned in your question. If you want to put all the jars from a directory on the classpath, you need to use the * wildcard character and wrap the path in double quotes, so that the shell does not expand the pattern before the tool sees it.

Please refer to Including all the jars in a directory within the Java classpath for details.
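The quoting matters because an unquoted * is expanded by the shell into individual file paths before scalac runs, whereas the quoted pattern reaches the JVM tool literally and is expanded there as a classpath wildcard. A minimal sketch of the difference, using a throwaway temporary directory rather than your Spark install:

```shell
# Create a throwaway directory with two fake jars
dir=$(mktemp -d)
touch "$dir/a.jar" "$dir/b.jar"

# Unquoted: the shell expands the glob into separate arguments
echo $dir/*        # prints both jar paths

# Quoted: the literal pattern is passed through unchanged,
# leaving wildcard expansion to the JVM tool's classpath handling
echo "$dir/*"      # prints the literal pattern ending in /*
```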

nomadSK25