
I am interested in testing Spark running on Mesos. I created a Hadoop 2.6.0 single-node cluster in VirtualBox and installed Spark on it. I can successfully process files in HDFS using Spark.
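For example, a quick check of this sort works in the Spark shell (the HDFS path is just a placeholder for one of my test files):

    // inside spark-shell: read a file from HDFS and count its lines
    val lines = sc.textFile("hdfs://localhost:9000/user/test/input.txt")
    println(lines.count())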

Then I installed the Mesos master and slave on the same node. I tried to run Spark as a framework on Mesos using these instructions, but I get the following error from Spark:

WARN TaskSchedulerImpl: Initial job has not accepted any resources;
check your cluster UI to ensure that workers are registered and have sufficient resources
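For reference, this is roughly how I launch the shell against the Mesos master (the host address, port, and paths are placeholders for my VirtualBox setup):

    # point Spark at the Mesos native library and an executor package it can fetch
    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
    export SPARK_EXECUTOR_URI=hdfs://localhost:9000/spark/spark-1.6.1-bin-hadoop2.6.tgz
    # 5050 is the default Mesos master port
    ./bin/spark-shell --master mesos://192.168.56.101:5050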

The Spark shell is successfully registered as a framework in Mesos. Is there anything wrong with using a single-node setup, or do I need to add more Spark worker nodes?

I am very new to Spark and my aim is to just test Spark, HDFS, and Mesos.

  • Why do you want to use Mesos in the first place for Spark, when Hadoop already comes with YARN? – Abhishek Anand Apr 12 '16 at 17:57
  • I already have an OpenStack cluster with Mesos and different frameworks. I need a shared HDFS file system in that environment, with a framework to process the files in HDFS. Right now I am just testing Spark on Mesos. – vathan Lal Apr 13 '16 at 09:33
  • How many CPUs and how much memory did you give to the VirtualBox instance? – hbogert Jun 15 '16 at 22:13

1 Answer


If you have allocated enough resources for the Spark slaves, the cause might be a firewall blocking the communication. Take a look at my other answer:

Apache Spark on Mesos: Initial job has not accepted any resources
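In short, the executors started by Mesos must be able to connect back to the driver (the Spark shell). One way to make that firewall-friendly, sketched here with arbitrary example values in conf/spark-defaults.conf, is to pin the driver-side ports and open them explicitly:

    # address the executors can reach the driver on
    spark.driver.host        192.168.56.101
    # fixed ports instead of random ones, so the firewall can allow them
    spark.driver.port        51000
    spark.blockManager.port  51001

Those ports, plus the Mesos master and agent ports (5050 and 5051 by default), need to be reachable between the driver and the slave; on a single VirtualBox VM this mainly matters if iptables or the distribution's default firewall is enabled.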

– Fontaine007