
I'm trying to use the graphframes package in PySpark in a Jupyter notebook (using SageMaker and sparkmagic) on AWS EMR. I've tried adding a configuration option when creating the EMR cluster in the AWS console:

[{"classification":"spark-defaults", "properties":{"spark.jars.packages":"graphframes:graphframes:0.7.0-spark2.4-s_2.11"}, "configurations":[]}]

But I still got an error when trying to use the graphframes package in my PySpark code in the Jupyter notebook.

Here's my code (it's from the graphframes example):

# Create a Vertex DataFrame with unique ID column "id"
v = spark.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])
# Create an Edge DataFrame with "src" and "dst" columns
e = spark.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
from graphframes import *
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()

And here's the output/error:

ImportError: No module named graphframes

I read through this GitHub issue thread, but all the potential workarounds seem very complicated and require SSHing into the master node of the EMR cluster.
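
For what it's worth, the two halves of this setup can be checked separately from a notebook cell. A minimal diagnostic sketch, assuming a live `spark` session (the "not set" fallback string is just for illustration):

# Check whether the JVM-side package setting from spark-defaults was picked up
print(spark.sparkContext.getConf().get("spark.jars.packages", "not set"))

# The ImportError above is a Python-side failure: the driver's Python
# interpreter cannot find the graphframes module, regardless of the JAR
try:
    import graphframes
    print("graphframes Python module is importable")
except ImportError as err:
    print("Python module missing:", err)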


3 Answers


I finally figured out that there is a PyPI package for graphframes. I used it to create a bootstrap action as detailed here, although I changed things a little bit.

Here's what I did to get graphframes working on EMR:

  1. First, I created a shell script named "install_jupyter_libraries_emr.sh" and saved it to S3:
#!/bin/bash
# Install the graphframes Python package on every node at bootstrap time
sudo pip install graphframes
  2. I then went through the advanced options EMR creation process in the AWS console:
    • During Step 1, I added the Maven coordinates of the graphframes package in the "Edit software settings" text box:
    [{"classification":"spark-defaults","properties":{"spark.jars.packages":"graphframes:graphframes:0.7.0-spark2.4-s_2.11"}}]
    • During Step 3: General Cluster Settings, I went into the bootstrap actions section.
    • Within the bootstrap actions section, I added a new custom bootstrap action with:
      • an arbitrary name
      • the S3 location of my "install_jupyter_libraries_emr.sh" script
      • no optional arguments
    • I then started the cluster creation. (The same settings can also be supplied programmatically; see the boto3 sketch after the output below.)
  3. Once my cluster was up, I got into Jupyter and ran my code:
# Create a Vertex DataFrame with unique ID column "id"
v = spark.createDataFrame([
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
], ["id", "name", "age"])
# Create an Edge DataFrame with "src" and "dst" columns
e = spark.createDataFrame([
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
], ["src", "dst", "relationship"])
# Create a GraphFrame
from graphframes import *
g = GraphFrame(v, e)

# Query: Get in-degree of each vertex.
g.inDegrees.show()

# Query: Count the number of "follow" connections in the graph.
g.edges.filter("relationship = 'follow'").count()

# Run PageRank algorithm, and show results.
results = g.pageRank(resetProbability=0.01, maxIter=20)
results.vertices.select("id", "pagerank").show()

And this time, finally, I got the correct output:

+---+--------+
| id|inDegree|
+---+--------+
|  c|       1|
|  b|       2|
+---+--------+

+---+------------------+
| id|          pagerank|
+---+------------------+
|  b|1.0905890109440908|
|  a|              0.01|
|  c|1.8994109890559092|
+---+------------------+
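
For completeness, the same two pieces (the spark-defaults classification and the bootstrap action) can also be supplied programmatically when launching the cluster. A minimal sketch using boto3's EMR client; the region, release label, instance types, S3 bucket, and role names below are assumptions, not part of the original answer:

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

response = emr.run_job_flow(
    Name="graphframes-cluster",   # arbitrary name
    ReleaseLabel="emr-5.26.0",    # assumed release; use whatever your notebook targets
    Applications=[{"Name": "Spark"}, {"Name": "Livy"}],
    # Same classification as in the console's "Edit software settings" box
    Configurations=[{
        "Classification": "spark-defaults",
        "Properties": {
            "spark.jars.packages": "graphframes:graphframes:0.7.0-spark2.4-s_2.11"
        },
    }],
    # Same custom bootstrap action that pip-installs graphframes on each node
    BootstrapActions=[{
        "Name": "install-jupyter-libraries",
        "ScriptBootstrapAction": {
            "Path": "s3://my-bucket/install_jupyter_libraries_emr.sh"  # assumed bucket
        },
    }],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # default EMR roles
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])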
  • Wonderful answer, I appreciate that you came back with your solution. If it were up to me I would give you all the fake internet points. Thanks so much – Joe S Aug 23 '19 at 16:41
  • With the latest AWS EMR cluster I had to use "sudo pip-3.6 install graphframes" to make it work (instead of simply pip) – ddegtyarev Sep 20 '19 at 11:14
  • Is STEP 1 really needed? The [graphframes documentation](https://graphframes.github.io/graphframes/docs/_site/quick-start.html) says "We use the --packages argument to download the graphframes package and any dependencies automatically." – panc Jul 17 '21 at 01:56
  • @panc It may not be needed any more, I haven't tested. At the time, if I just added the Spark package but not the pip install, importing graphframes in the Jupyter notebook would fail with an error that it couldn't find the graphframes package. – Bob Swain Jul 18 '21 at 02:35

@Bob Swain's answer works fine, but the repositories for graphframes have since moved to https://repos.spark-packages.org/. So, to make it work, the classification should be changed to:

[
 {
  "classification":"spark-defaults",
  "properties":{
    "spark.jars.packages":"graphframes:graphframes:0.8.0-spark2.4-s_2.11",
    "spark.jars.repositories":"https://repos.spark-packages.org/"
  }
 }
]
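
Since the question goes through sparkmagic, the same two properties can also be set per session from the notebook itself, without recreating the cluster. A sketch of such a cell, in the spirit of panc's comment below (the coordinates are taken from the JSON above; whether your Livy setup resolves them is worth verifying):

%%configure -f
{
  "conf": {
    "spark.jars.packages": "graphframes:graphframes:0.8.0-spark2.4-s_2.11",
    "spark.jars.repositories": "https://repos.spark-packages.org/"
  }
}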
  • I am using SageMaker's sparkmagic kernel (PySpark) and EMR with Livy to run PySpark code. By adding those two configurations in sparkmagic's `%%config`, I can successfully run the PySpark graphframes examples. – panc Jul 17 '21 at 01:58
  • @panc Could you please share more details of your solution? – Shweta Apr 12 '23 at 17:35
  • @Shweta I have a setup with a SageMaker instance backed by EMR, similar to that created by following [this guide](https://aws.amazon.com/blogs/machine-learning/build-amazon-sagemaker-notebooks-backed-by-spark-in-amazon-emr/). Then, in the Jupyter notebook, I use sparkmagic's PySpark kernel; [this](https://github.com/jupyter-incubator/sparkmagic/blob/master/examples/Pyspark%20Kernel.ipynb) is an example. Additional configuration examples can be found in the [AWS docs](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-studio-magics.html). – panc Apr 12 '23 at 23:36

My requirement: to map the property graph model in Hadoop (AWS EMR) and store it somewhere so we can query it later.

Is there any option to get the result as a graph as well? I have read a few documents on GraphX, which is a graph library supported by Apache Spark, but I don't have a clear idea.
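
Not a direct answer, but one common pattern for "store it somewhere so we can query later" is simply persisting the vertex and edge DataFrames, since a GraphFrame is just two DataFrames. A small sketch, assuming the `g` GraphFrame from the examples above and a hypothetical S3 path:

# Persist the graph as two Parquet datasets (the path is an assumption)
g.vertices.write.mode("overwrite").parquet("s3://my-bucket/graph/vertices")
g.edges.write.mode("overwrite").parquet("s3://my-bucket/graph/edges")

# Later, rebuild the GraphFrame from storage and query it again
from graphframes import GraphFrame
v = spark.read.parquet("s3://my-bucket/graph/vertices")
e = spark.read.parquet("s3://my-bucket/graph/edges")
g2 = GraphFrame(v, e)
g2.inDegrees.show()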