
I have the following test code:

from pyspark import SparkContext, SQLContext
sc = SparkContext('local')
sqlContext = SQLContext(sc)
print('Created spark context!')


if __name__ == '__main__':
    df = sqlContext.read.format("jdbc").options(
        url="jdbc:mysql://localhost/mysql",
        driver="com.mysql.jdbc.Driver",
        dbtable="users",
        user="user",
        password="****",
        properties={"driver": 'com.mysql.jdbc.Driver'}
    ).load()

    print(df)

When I run it, I get the following error:

java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

In Scala, this is solved by adding the mysql-connector-java .jar to the project.

However, in Python I have no idea how to tell the pyspark module to link against the mysql-connector jar.

I have seen this solved with examples like

spark-submit --packages mysql-connector-java testfile.py

But I don't want that, since it forces me to run my script in an awkward way. I would prefer an all-Python solution, or copying a file somewhere, or adding something to the path.

Santi Peñate-Vera

3 Answers


You can pass spark-submit arguments through the PYSPARK_SUBMIT_ARGS environment variable when creating your SparkContext, as long as it is set before the SparkConf is initialized:

import os
from pyspark import SparkConf, SparkContext

SUBMIT_ARGS = "--packages mysql:mysql-connector-java:5.1.39 pyspark-shell"
# must be set before the JVM backing the SparkContext is launched
os.environ["PYSPARK_SUBMIT_ARGS"] = SUBMIT_ARGS
conf = SparkConf()
sc = SparkContext(conf=conf)

Alternatively, you can add the equivalent setting to your $SPARK_HOME/conf/spark-defaults.conf.
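
For example, a minimal sketch of the spark-defaults.conf entry, assuming the same connector version as above (spark.jars.packages is the property that --packages maps to):

spark.jars.packages    mysql:mysql-connector-java:5.1.39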

MaFF
  • Hi, I get this error: `requirement failed: Provided Maven Coordinates must be in the form 'groupId:artifactId:version'. The coordinate provided is: mysql-connector-java`, So I guess the arguments are expected in another format – Santi Peñate-Vera Sep 03 '17 at 22:36
  • Changing the package to `mysql:mysql-connector-java:5.1.39` makes it work – Santi Peñate-Vera Sep 03 '17 at 22:44
  • You are right. You can also load it as a jar with `--jars path_to/mysql-connector-java.jar`, but that won't install its dependencies, if it has any. I'll modify the answer so that it's correct – MaFF Sep 04 '17 at 05:28
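
For completeness, a sketch of the `--jars` variant mentioned in the last comment, passed the same way through PYSPARK_SUBMIT_ARGS (the jar path is a placeholder, not a real location):

import os
from pyspark import SparkConf, SparkContext

# point --jars at a locally downloaded connector jar instead of pulling it from Maven
SUBMIT_ARGS = "--jars /path/to/mysql-connector-java-5.1.39.jar pyspark-shell"
os.environ["PYSPARK_SUBMIT_ARGS"] = SUBMIT_ARGS
sc = SparkContext(conf=SparkConf())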
Another option is to add the connector jar to the driver's classpath when building the SparkSession:

from pyspark.sql import SparkSession

spark = SparkSession\
    .builder\
    .appName("Word Count")\
    .config("spark.driver.extraClassPath", "/home/tuhin/mysql.jar")\
    .getOrCreate()

dataframe_mysql = spark.read\
    .format("jdbc")\
    .option("url", "jdbc:mysql://localhost/database_name")\
    .option("driver", "com.mysql.jdbc.Driver")\
    .option("dbtable", "employees").option("user", "root")\
    .option("password", "12345678").load()

print(dataframe_mysql.columns)

"/home/tuhin/mysql.jar" is the location of mysql jar file


If you are using PyCharm and want to run your code line by line instead of submitting the .py file through spark-submit, you can copy the .jar to C:\spark\jars\ and your code could look like this:

from pyspark.sql import SparkSession

# the SparkSession's own reader replaces the old SQLContext
spark = SparkSession.builder.getOrCreate()
source_df = spark.read.format('jdbc').options(
    url='jdbc:mysql://localhost:3306/database1',
    driver='com.mysql.cj.jdbc.Driver',  # use com.mysql.jdbc.Driver for Connector/J 5.x
    dbtable='table1',
    user='root',
    password='****').load()
print(source_df)
source_df.show()