
I am trying out PySpark on Windows by following this article. The article asks to run `pyspark` from the command prompt.

When I run `pyspark` in the command prompt, it starts a Jupyter notebook. But in the article's example, running `pyspark` on the command line starts an interactive Python shell. Why is this so?
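
Is the `pyspark` launcher picking something up from my environment? A quick check from a plain Python prompt (a minimal sketch; `PYSPARK_DRIVER_PYTHON` and `PYSPARK_DRIVER_PYTHON_OPTS` are the variables the launcher consults):

    import os

    # if these are set, the pyspark launcher starts the driver through
    # Jupyter instead of the default interactive Python shell
    print(os.environ.get("PYSPARK_DRIVER_PYTHON"))       # e.g. 'jupyter'
    print(os.environ.get("PYSPARK_DRIVER_PYTHON_OPTS"))  # e.g. 'notebook'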

The article next asks to read the README file:

>>> textFile = spark.read.text("README.md")

I tried running that in the Jupyter notebook, but I got the following error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
D:\Sanjeev\Programs\spark\spark-2.3.0-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:

D:\Sanjeev\Programs\spark\spark-2.3.0-bin-hadoop2.7\python\lib\py4j-0.10.6-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    319                     "An error occurred while calling {0}{1}{2}.\n".
--> 320                     format(target_id, ".", name), value)
    321             else:

Py4JJavaError: An error occurred while calling o30.text.
: org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: ExitCodeException exitCode=-1073741515: ;
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
    at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
    at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
    at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
    at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
    at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
    at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
    at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:428)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:233)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
    at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:691)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:180)
    at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:114)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:385)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
    at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
    ... 31 more
Caused by: ExitCodeException exitCode=-1073741515: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:491)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:532)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:509)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
    at org.apache.hadoop.hive.ql.exec.Utilities.createDirsWithPermission(Utilities.java:3679)
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:597)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
    ... 46 more


During handling of the above exception, another exception occurred:

AnalysisException                         Traceback (most recent call last)
<ipython-input-1-9f78f301aea9> in <module>()
----> 1 textFile = spark.read.text("README.md")

D:\Sanjeev\Programs\spark\spark-2.3.0-bin-hadoop2.7\python\pyspark\sql\readwriter.py in text(self, paths, wholetext)
    326         if isinstance(paths, basestring):
    327             paths = [paths]
--> 328         return self._df(self._jreader.text(self._spark._sc._jvm.PythonUtils.toSeq(paths)))
    329 
    330     @since(2.0)

D:\Sanjeev\Programs\spark\spark-2.3.0-bin-hadoop2.7\python\lib\py4j-0.10.6-src.zip\py4j\java_gateway.py in __call__(self, *args)
   1158         answer = self.gateway_client.send_command(command)
   1159         return_value = get_return_value(
-> 1160             answer, self.gateway_client, self.target_id, self.name)
   1161 
   1162         for temp_arg in temp_args:

D:\Sanjeev\Programs\spark\spark-2.3.0-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: 'java.lang.RuntimeException: ExitCodeException exitCode=-1073741515: ;'

I find absolutely no explanation in that stack trace. Earlier I got a stack trace whose last line was `AnalysisException: 'Path does not exist: file:/C:/Users/Mahesha999/README.md;'`, which hinted that I should be running `pyspark` from %SPARK_HOME% instead of my user directory. But above I am not getting any such hint from the stack trace. What am I missing?
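
The only concrete clue I can extract is the exit code itself. Decoding -1073741515 as an unsigned 32-bit value (plain arithmetic, nothing Spark-specific) gives a standard Windows NT status code:

    # -1073741515 is the signed 32-bit view of an NTSTATUS value; masking
    # to unsigned gives 0xC0000135, i.e. STATUS_DLL_NOT_FOUND on Windows
    print(hex(-1073741515 & 0xFFFFFFFF))  # prints 0xc0000135

So some process launched via Hadoop's Shell (presumably winutils.exe, given the `org.apache.hadoop.util.Shell.runCommand` frames) is failing because a DLL cannot be found, but the trace does not say which one.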

Mahesha999
  • you must have an environment variable pointing to your jupyter installation – Ramesh Maharjan Mar 21 '18 at 11:35
  • Please go through this link and check if it helps you: https://stackoverflow.com/questions/45947375/why-does-starting-a-streaming-query-lead-to-exitcodeexception-exitcode-1073741 – Shrinivas Deshmukh Mar 21 '18 at 11:45
  • @RameshMaharjan I have `PYSPARK_DRIVER_PYTHON` set to `jupyter` and `PYSPARK_DRIVER_PYTHON_OPTS` set to `notebook`. I had earlier tried a pyspark installation by following some other online tutorial, which went wrong, and I forgot to clear these environment variables. Is it because of these environment variables? – Mahesha999 Mar 21 '18 at 12:10
  • yes, that's right. Just change them, or disable them by commenting them out, and you should be fine – Ramesh Maharjan Mar 21 '18 at 12:57
  • @RameshMaharjan Yeah, that worked, but the actual issue remained. Now I am getting the error in the Python shell. I have asked [another question](https://stackoverflow.com/questions/49406977) explaining the issue. Can you have a look at it? That question shows the error I get when I run Scala code in the Scala shell, but I have checked that I am getting the same error with the pyspark Python shell too. – Mahesha999 Mar 22 '18 at 07:30
  • you should then answer it below with the steps that you followed to solve this issue – Ramesh Maharjan Mar 22 '18 at 07:34
  • I didn't get you. I removed those environment variables to make the `pyspark` command open a Python shell in the command prompt instead of opening a Jupyter notebook. But now I am getting the same error (that I was getting in the notebook) in the Python shell. – Mahesha999 Mar 22 '18 at 07:38
  • Installing the Microsoft Visual C++ 2010 x64 Redistributable (vcredist_x64.exe from [here](https://www.microsoft.com/en-us/download/details.aspx?id=14632)) resolved the issue, as specified [here](https://stackoverflow.com/a/47992503/1317018). A quick way to verify the runtime is now visible is sketched below. – Mahesha999 Mar 23 '18 at 10:27
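
Following up on the resolution above: since exit code 0xC0000135 means a required DLL could not be found, here is a minimal sketch to confirm the fix, assuming the missing library was the VC++ 2010 runtime (msvcr100.dll) that winutils.exe links against:

    import ctypes.util

    # after installing the VC++ 2010 x64 Redistributable, the runtime DLL
    # should resolve via the Windows search path; None means still missing
    print(ctypes.util.find_library("msvcr100"))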

0 Answers