
Lately I have been trying out Apache Spark. My question is more specific to triggering Spark jobs. I had posted a question here on understanding Spark jobs. After getting my hands dirty with jobs, I moved on to my requirement.

I have a REST endpoint where I expose an API to trigger jobs; I have used Spring 4.0 for the REST implementation. Going ahead, I thought of implementing Jobs as a Service in Spring, where I would submit the job programmatically: when the endpoint is triggered, I would trigger the job with the given parameters. I now have a few design options.

  • Similar to the job written below, I need to maintain several jobs invoked by an abstract class, maybe a JobScheduler.

     /* Can this code be abstracted out of the application and written as
        a separate job? My understanding is that the application code
        itself has to embed the jars via setJars, which the SparkContext
        internally takes care of. */

     SparkConf sparkConf = new SparkConf()
             .setAppName("MyApp")
             .setJars(new String[] { "/path/to/jar/submit/cluster" })
             .setMaster("/url/of/master/node");
     sparkConf.setSparkHome("/path/to/spark/");
     sparkConf.set("spark.scheduler.mode", "FAIR");

     JavaSparkContext sc = new JavaSparkContext(sparkConf);
     sc.setLocalProperty("spark.scheduler.pool", "test");

     // Application with algorithm, transformations
  • Extending the above point: have multiple versions of the jobs handled by the service.

  • Or else use a Spark Job Server to do this.

Firstly, I would like to know what the best solution is in this case, both execution-wise and scaling-wise.

Note: I am using a standalone cluster from Spark. Kindly help.

chaosguru
  • I added the Spring for Apache Hadoop tag to this question. Spring Batch Admin provides a REST API for managing and launching jobs and I believe Spring for Apache Hadoop provides the ability to launch Spark jobs from Spring Batch... – Michael Minella Mar 11 '15 at 18:25
  • @MichaelMinella : thank you for the suggestion, I will definitely look into it. – chaosguru Mar 12 '15 at 05:08

5 Answers


It turns out Spark has a hidden REST API to submit a job, check its status, and kill it.

Check out the full example here: http://arturmkrtchyan.com/apache-spark-hidden-rest-api
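
For a quick impression before following the link, here is a rough sketch of the three calls (the standalone master's REST submission port defaults to 6066; the host, jar path, class name, and submission id below are placeholders):

    Submit: POST http://<master-host>:6066/v1/submissions/create
    {
      "action" : "CreateSubmissionRequest",
      "appArgs" : [ "arg1" ],
      "appResource" : "file:/path/to/your-app.jar",
      "clientSparkVersion" : "1.5.0",
      "environmentVariables" : { "SPARK_ENV_LOADED" : "1" },
      "mainClass" : "com.example.MyApp",
      "sparkProperties" : {
        "spark.app.name" : "MyApp",
        "spark.jars" : "file:/path/to/your-app.jar",
        "spark.submit.deployMode" : "cluster",
        "spark.master" : "spark://<master-host>:7077"
      }
    }

    Check status: GET http://<master-host>:6066/v1/submissions/status/<submission-id>

    Kill: POST http://<master-host>:6066/v1/submissions/kill/<submission-id>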

Artur Mkrtchyan
  • Sounds really interesting, found this https://issues.apache.org/jira/secure/attachment/12696651/stable-spark-submit-in-standalone-mode-2-4-15.pdf so does that mean Spark itself has now exposed this feature? – chaosguru Oct 12 '15 at 09:09
  • AFAIK they added it in v1.4, but they are not publicly promoting it yet. – Artur Mkrtchyan Oct 30 '15 at 18:27
  • @ArturMkrtchyan really interesting option, thank you! What happens if I submit two applications simultaneously through the Spark REST API? – VB_ May 16 '17 at 09:40
  • The webpage you have linked does not really tell anything because the pictures on the page are dead. – styrofoam fly Mar 19 '18 at 13:59
  • This one might help while the main link provided has broken pictures: https://gist.github.com/arturmkrtchyan/5d8559b2911ac951d34a – Arsinux Jul 16 '18 at 11:26
  • Does it launch a Spark session/context every time there is a call to the REST API, or does it use the same session? – user3123372 Feb 12 '19 at 13:07

Just use the Spark JobServer https://github.com/spark-jobserver/spark-jobserver

There are a lot of things to consider when making such a service, and the Spark JobServer has most of them covered already. If you find things that aren't good enough, it should be easy to make a request and add code to their system rather than reinventing it from scratch.
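
As a sketch of what the interaction looks like (assuming the job server's default port 8090; the app name and class below are placeholders, and the class has to implement the job server's job API rather than expose a plain main method):

    Upload the jar:  POST http://<jobserver-host>:8090/jars/my-app        (request body: the jar's binary contents)
    Trigger a job:   POST http://<jobserver-host>:8090/jobs?appName=my-app&classPath=com.example.MyJob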

David

Livy is an open source REST interface for interacting with Apache Spark from anywhere. It supports executing snippets of code or programs in a Spark context that runs locally or in Apache Hadoop YARN.
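
For a flavor of the snippet-execution side (a sketch assuming Livy's default port 8998; the host, session id, and Scala snippet are illustrative):

    Create a session:     POST http://<livy-host>:8998/sessions
    { "kind" : "spark" }

    Run a snippet in it:  POST http://<livy-host>:8998/sessions/<session-id>/statements
    { "code" : "sc.parallelize(1 to 100).sum()" }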

Josemy
  • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. – AJPerez Mar 19 '18 at 13:41
  • You are right; I have updated my answer with a little more detail. Thanks. – Josemy Mar 25 '18 at 09:41
  • Livy release cycle is weird. They release almost like once a year! – user1870400 Jan 12 '19 at 16:45

Here is a good client that you might find helpful: https://github.com/ywilkof/spark-jobs-rest-client

Edit: this answer was given in 2015. There are options like Livy available now.

Alex Fedulov
  • Do you know whether it's possible to launch two applications simultaneously through that client? – VB_ May 16 '17 at 09:44
  • Yes, it's possible. The client is just a wrapper around HTTP calls to your Spark master. So if your setup can handle that, then it will be possible. – Yonatan Wilkof Aug 24 '17 at 06:17

I had this requirement as well, and I was able to do it using the Livy server, as the contributor Josemy mentioned. Following are the steps I took; I hope they help somebody:

Download the Livy zip from https://livy.apache.org/download/ and follow the instructions at https://livy.apache.org/get-started/:

  • Upload the zip to a client machine and unzip it.

  • Check for the following two parameters; if they don't exist, create them with the right paths:

        export SPARK_HOME=/opt/spark
        export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop

  • Enable port 8998 on the client.

  • Update $LIVY_HOME/conf/livy.conf with the master details and any other stuff needed. Note: templates are in $LIVY_HOME/conf. E.g.:

        livy.file.local-dir-whitelist = /home/folder-where-the-jar-will-be-kept/

  • Run the server:

        $LIVY_HOME/bin/livy-server start

  • Stop the server:

        $LIVY_HOME/bin/livy-server stop

  • UI: <client-ip>:8998/ui/

Submitting a job: POST http://<your client ip goes here>:8998/batches

    {
      "className" : "<your class name goes here, with package name>",
      "file" : "your jar location",
      "args" : ["arg1", "arg2", "arg3"]
    }
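
The response to the POST above includes an id for the batch; the same batches API can then be polled for its state (a small addition, using the same placeholder host):

    Checking status: GET http://<your client ip goes here>:8998/batches/<batch-id>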