I am running one of the built-in sample examples that ships with the Spark installation, on Hadoop 2.7 + Spark with JDK 8. However, it is failing with the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Cannot allocate new DoublePointer(10000000): totalBytes = 363M, physicalBytes = 911M
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.OutOfMemoryError: Physical memory usage is too high: physicalBytes = 911M > maxPhysicalBytes = 911M
    at org.bytedeco.javacpp.Pointer.deallocator(Pointer.java:572)
    at org.bytedeco.javacpp.Pointer.init(Pointer.java:121)
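For context, I am submitting the job roughly like this (the example class, master, and jar path here are placeholders standing in for my exact command, with SparkPi as the stand-in sample):

    # Placeholder submit command; SparkPi stands in for the sample I run.
    $SPARK_HOME/bin/spark-submit \
      --class org.apache.spark.examples.SparkPi \
      --master yarn \
      --deploy-mode client \
      $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.1.jar 100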
I also followed the suggestions in a related SO question and made the configuration changes it describes (a sketch of what I tried is below). In addition, I went through these issue trackers: YARN-4714, HADOOP-11090.
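Roughly, the configuration change I tried looks like the sketch below. The org.bytedeco.javacpp.maxbytes and org.bytedeco.javacpp.maxphysicalbytes properties correspond to the maxPhysicalBytes check in the JavaCPP Pointer class shown in the stack trace; the 4G sizes are guesses for my machine, not verified values, and the trailing dots stand for the class/jar arguments as above:

    # Sketch: raise the JavaCPP allocation limits on the driver and executors.
    # The 4G values are assumed, sized for my machine.
    $SPARK_HOME/bin/spark-submit \
      --driver-memory 4g \
      --conf "spark.driver.extraJavaOptions=-Dorg.bytedeco.javacpp.maxbytes=4G -Dorg.bytedeco.javacpp.maxphysicalbytes=4G" \
      --conf "spark.executor.extraJavaOptions=-Dorg.bytedeco.javacpp.maxbytes=4G -Dorg.bytedeco.javacpp.maxphysicalbytes=4G" \
      ...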
Are there any known issues with running Spark on JDK 8?
These are the versions of the software I am running on my simple cluster:
jdk-8u131-linux-x64
scala-2.12.2
spark-2.1.1-bin-without-hadoop
hadoop-2.7.0
One more thing: when I run the program on JDK 7 it works fine, but it fails on JDK 8.
Has anyone encountered this problem, and if so, what is the fix? Are Hadoop, Spark, and Scala not yet compatible with JDK 8?
Can anyone please help me?