
I'm new to Spark, and I use the KMeans algorithm to cluster a data set that is 484M in size with 213,104 dimensions. My code is as follows:

import java.io.File
import java.nio.file.{Files, Paths}
import org.apache.commons.io.FileUtils
import org.apache.spark.mllib.clustering.KMeans

val k = args(0).toInt
val maxIter = args(1).toInt
// Train KMeans on the (already loaded) trainingData RDD[Vector]
val model = new KMeans().setK(k).setMaxIterations(maxIter).setEpsilon(1e-1).run(trainingData)
// Save the cluster centers as text, removing any previous output directory first
val modelRDD = sc.makeRDD(model.clusterCenters)
val saveModelPath = "/home/work/kMeansModel_" + args(0)
if (Files.exists(Paths.get(saveModelPath))) {
  FileUtils.deleteDirectory(new File(saveModelPath))
}
modelRDD.saveAsTextFile(saveModelPath)
val loss = model.computeCost(trainingData)
println("Within Set Sum of Squared Errors = " + loss)

When I set K = 150 it works, but when I set K = 300 it throws a java.lang.OutOfMemoryError: Java heap space exception. My configuration:

--executor-memory 30G --driver-memory 4G --conf spark.shuffle.spill=false --conf spark.storage.memoryFraction=0.1
– ifloating

1 Answer


You should tell us more about the environment. Are you running in a real cluster, or in local mode?

Since you said you are new to Spark, I assume you are just playing around on your local machine. In this case, I think this post can help you.

Update

Your error is not really a generic OOM, but a Java heap space exception. Did you cache your RDD?

– David S.
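
To illustrate the caching question above, here is a minimal sketch of persisting the training data before running KMeans. The input path, the parsing, and the parameter values are assumptions, since the question does not show how `trainingData` is built:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.storage.StorageLevel

// Hypothetical load path and parsing; adjust to the real input format.
val trainingData = sc.textFile("/home/work/trainingVectors")
  .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
  .persist(StorageLevel.MEMORY_AND_DISK)  // keep it cached; spill to disk rather than recompute

val model = new KMeans()
  .setK(300)             // the failing case from the question
  .setMaxIterations(20)  // assumed value; the question passes this via args(1)
  .setEpsilon(1e-1)
  .run(trainingData)     // each iteration scans trainingData, so an uncached RDD is re-derived every time

MLlib's KMeans logs a warning when the input is not cached, because every iteration otherwise recomputes the RDD's lineage from scratch.
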
  • I deploy Spark on a single machine, and the worker's configuration is as follows: `export SPARK_WORKER_CORES=11 export SPARK_WORKER_MEMORY=40g`, and I run the application with `bin/spark-submit --class "userClustering" --master spark://md-machinelearning0-bgp0.hy01:7077 --executor-memory 30G --driver-memory 4G --conf spark.shuffle.spill=false --conf spark.storage.memoryFraction=0.1 /home/work/downloadDetail.jar` – ifloating Apr 30 '15 at 06:12
  • Yes, I cached the RDD originally and then tried it without caching, but the issue remains. With `spark.storage.memoryFraction=0.1`, 30G of executor memory, and a data set of 484M, the memory available for caching should be enough – ifloating May 05 '15 at 03:21
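
One thing the memory arithmetic in these comments does not cover (this is a hedged back-of-envelope, not something stated in the thread) is the footprint of the cluster centers themselves, which grows linearly with K: with 213,104 dense double dimensions, the raw center data roughly doubles from K = 150 to K = 300. A quick check in Scala:

// Rough estimate only: dense Vector[Double] centers at 8 bytes per dimension,
// ignoring JVM object overhead and the extra working copies k-means keeps per iteration.
val dims = 213104
def centersMB(k: Int): Double = k.toLong * dims * 8 / (1024.0 * 1024.0)

println(f"K = 150: ~${centersMB(150)}%.0f MB of raw center data")  // ~244 MB
println(f"K = 300: ~${centersMB(300)}%.0f MB of raw center data")  // ~488 MB

Since the centers are held on the driver and broadcast to executors, and the per-iteration sums have the same shape, a 4G driver heap leaves much less headroom at K = 300 than the 484M input size alone suggests; whether this is the actual cause here is not confirmed in the thread.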