20

I have launched an AWS EMR cluster, and in a PySpark 3 Jupyter notebook I run this code:

"..
textRdd = sparkDF.select(textColName).rdd.flatMap(lambda x: x)
textRdd.collect().show()
.."

I get this error:

An error was encountered:
Invalid status code '400' from http://..../sessions/4/statements/7 with error payload: {"msg":"requirement failed: Session isn't active."}

Running the line:

sparkDF.show()

works!

I also created a small subset of the file and all my code runs fine.

What is the problem?

anat
  • Wait a while, since the notebook has to create a session on the EMR cluster, or restart the kernel. Probably just a timeout, I think – Lamanus Sep 23 '19 at 12:53
  • The cluster has been up for two hours now; how long do I need to wait? And why don't I need to wait for the small subset? – anat Sep 23 '19 at 12:58
  • Not the cluster, but your notebook. Check the application log for your EMR cluster to confirm that the Livy session created by the notebook is working well. – Lamanus Sep 23 '19 at 13:01
  • How do I check that? – anat Sep 23 '19 at 13:08
  • Your EMR console > Application history; find livy-session-xx, where xx is a number like 1, 2, ... – Lamanus Sep 23 '19 at 13:09
  • I see an incomplete Livy session; what do I need to do? – anat Sep 23 '19 at 13:14
  • Check the applications from the EMR CLI; I mean ssh to the master node and run `yarn application -list`. If there is a Livy session (matching the application ID with the EMR console), kill it with `yarn application -kill application_id`. – Lamanus Sep 23 '19 at 13:29
  • Did that, ran my code, got the same error. What can I do? – anat Sep 23 '19 at 13:44

6 Answers

21

I had the same issue, and the reason for the timeout is the driver running out of memory. Since you call collect(), all the data is sent to the driver. By default, the driver memory is 1000M when creating a Spark application through JupyterHub, even if you set a higher value through config.json. You can see this by executing the following code from within a Jupyter notebook:

spark.sparkContext.getConf().get('spark.driver.memory')
1000M

To increase the driver memory, run:

%%configure -f 
{"driverMemory": "6000M"}

This will restart the application with increased driver memory. You might need to use higher values for your data. Hope it helps.
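Since collect() ships the entire RDD to the driver, it is worth checking whether the data plausibly fits in the configured driver memory before collecting. A small pure-Python sketch (the helper is hypothetical, not part of any Spark API) for turning Spark-style memory strings like the ones above into byte counts:

```python
# Hypothetical helper (not a Spark API): convert Spark memory strings
# such as "1000M" or "6G" to bytes, to sanity-check driver sizing
# before calling collect() on a large RDD.
UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def spark_mem_to_bytes(mem: str) -> int:
    """Parse strings such as '1000M', '6G', '6GB', or a bare byte count."""
    mem = mem.strip().upper().rstrip("B")  # tolerate '6GB' as well as '6G'
    if mem and mem[-1] in UNITS:
        return int(mem[:-1]) * UNITS[mem[-1]]
    return int(mem)

# The JupyterHub default driver size versus the increased one:
assert spark_mem_to_bytes("1000M") == 1000 * 1024**2
assert spark_mem_to_bytes("6000M") > spark_mem_to_bytes("1000M")
```

If the collected data is anywhere near the driver's size, prefer take(n) or a sample over collect().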

Koba
    IMO the driver could have died for any number of reasons. However, the `%%configure -f` command will restart it, regardless. – ijoseph Jan 24 '20 at 22:18
  • For me, what worked was %%configure -f {"spark.driver.memory": "6000M"}, not "driverMemory" – Conso May 05 '23 at 10:52
8

From this Stack Overflow question's answer, which worked for me:

Judging by the output, if your application is not finishing with a FAILED status, this sounds like a Livy timeout error: your application is likely taking longer than the defined timeout for a Livy session (which defaults to 1 hour), so even though the Spark app succeeds, your notebook will receive this error if the app runs longer than the Livy session's timeout.

If that's the case, here's how to address it:

1. Edit the /etc/livy/conf/livy.conf file (on the cluster's master node)
2. Set livy.server.session.timeout to a higher value, like 8h (or larger, depending on your app)
3. Restart Livy to apply the setting: sudo restart livy-server on the cluster's master node
4. Test your code again

An alternative way to change this setting: https://allinonescript.com/questions/54220381/how-to-set-livy-server-session-timeout-on-emr-cluster-boostrap
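For reference, the resulting line in the Livy config would look like this (path and property name from the steps above; the 8h value is illustrative):

```
# /etc/livy/conf/livy.conf (on the EMR master node)
livy.server.session.timeout = 8h
```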

    Thanks for the suggestion. I found the config for Livy timeout that can be passed to the EMR cluster as a JSON file here: https://stackoverflow.com/a/54240619/4306852 – Shashank Dec 01 '20 at 10:39
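Following that approach, the Livy timeout can also be supplied at cluster creation as a configuration classification; a sketch of the JSON (the `livy-conf` classification name is from the linked answer; the 8h value is illustrative):

```json
[
  {
    "Classification": "livy-conf",
    "Properties": {
      "livy.server.session.timeout": "8h"
    }
  }
]
```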
1

Just a restart solved this problem for me. In your Jupyter notebook, go to Kernel -> Restart. Once done, if you run a cell with the `spark` command, you will see that a new Spark session gets established.

0

You might get some insights from this similar Stack Overflow thread: Timeout error: Error with 400 StatusCode: "requirement failed: Session isn't active."

A solution might be to increase spark.executor.heartbeatInterval. The default is 10 seconds.

See EMR's official documentation on how to change Spark defaults:

You change the defaults in spark-defaults.conf using the spark-defaults configuration classification or the maximizeResourceAllocation setting in the spark configuration classification.
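If the heartbeat interval is the culprit, it can be raised through the spark-defaults classification mentioned above; a sketch of that JSON (the 60s value is an illustrative assumption, not a recommended setting):

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.executor.heartbeatInterval": "60s"
    }
  }
]
```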

Fabio Manzano
    Thank you, I'll try it and let you know. – anat Oct 03 '19 at 07:45
  • I tried increasing the heartbeatInterval up to 110s and it didn't solve the issue for me. The Livy session would always get disconnected. I am running the code in a Jupyter notebook; however, if I run the same code with `spark-submit`, it works with no problem. – Koba Oct 10 '19 at 05:33
0

Insufficient reputation to comment.

I tried increasing the heartbeat interval to a much higher value (100 seconds), with the same result. FWIW, the error shows up in under 9 seconds.

0

What worked for me is adding the following to the EMR configuration:

{
  "Classification": "spark-defaults",
  "Properties": {
    "spark.driver.memory": "20G"
  }
}

Elhanan Mishraky