
When I ran `pyspark.SparkContext('loc', 'pyspark_rec')`, an error was raised saying it could not parse the master URL. As a beginner in Spark programming, I am not quite sure what that means. My code does not use any deployment modules (YARN, Hadoop, etc.); I am just testing it in standalone mode, so I believed assigning `'loc'` as the URL was fine. Can somebody explain how I should fix this issue? Thank you.

Below is the error traceback.

  File "recommender.py", line 112, in spark_recommendations
    sc = pyspark.SparkContext('loc', 'pyspark_rec')
  File "/Users/chlee021690/Desktop/Programming/spark/python/pyspark/context.py", line 134, in __init__
    self._jsc = self._initialize_context(self._conf._jconf)
  File "/Users/chlee021690/Desktop/Programming/spark/python/pyspark/context.py", line 180, in _initialize_context
    return self._jvm.JavaSparkContext(jconf)
  File "/Users/chlee021690/anaconda/lib/python2.7/site-packages/py4j/java_gateway.py", line 701, in __call__
    self._fqn)
  File "/Users/chlee021690/anaconda/lib/python2.7/site-packages/py4j/protocol.py", line 300, in get_return_value
    format(target_id, '.', name), value)
Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Could not parse Master URL: 'loc'
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1564)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:307)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:53)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:214)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:744)

2 Answers


`'loc'` is not a valid master URL; for local testing Spark expects `local`, `local[N]`, or `local[*]`. You would launch the shell with something like

./bin/pyspark --master local[8]

or create the context in your script:

from pyspark import SparkContext
sc = SparkContext("local", "context")
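
To clarify `local[N]`: it runs Spark locally with N worker threads, while `local[*]` uses one thread per logical core. A minimal sketch of this fix applied to the question's code (the app name `pyspark_rec` is taken from the question; the sanity check is illustrative):

from pyspark import SparkContext

# 'local[*]' = run Spark in-process with one worker thread per logical core
sc = SparkContext("local[*]", "pyspark_rec")

# quick sanity check that the context works
total = sc.parallelize(range(10)).sum()
print(total)  # 45

sc.stop()  # shut down the local JVM when finished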
    What does `[N]` do in `MASTER_URL`? `--help` does not say: `spark://host:port, mesos://host:port, yarn, k8s://https://host:port, or local (Default: local[*]).` – ijoseph Oct 11 '20 at 20:21
  • This is incidentally [an abuse of URIs...](https://stackoverflow.com/a/1016737/588437) – ijoseph Oct 11 '20 at 20:25

The master URL is typically the IP address of the master node (when running on a server) or `localhost` for a standalone setup on your own machine.

Standalone mode: spark://localhost:7077

Server mode: spark://your-master-server-ip-address:7077
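
For example, assuming you have already started a standalone master on the default port 7077 (e.g. with `./sbin/start-master.sh`), connecting from PySpark might look like this; the app name is illustrative:

from pyspark import SparkContext

# assumes a standalone master is running on this machine at the
# default port 7077 (started with ./sbin/start-master.sh)
sc = SparkContext("spark://localhost:7077", "pyspark_rec")
print(sc.master)  # spark://localhost:7077
sc.stop()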
