I'm trying to use PySpark, but I'm facing the following issue. I don't really understand the problem, but it seems that my computer can't find the "com.mysql.jdbc.Driver" driver. I'm on macOS, so I downloaded the '.pkg' MySQL connector from https://dev.mysql.com/downloads/connector/python/ and installed it. However, it still doesn't work and I get the following error.

Py4JJavaError: An error occurred while calling o36.load.
: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
    at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:466)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:563)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
    at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$5.apply(JDBCOptions.scala:99)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$5.apply(JDBCOptions.scala:99)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:99)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:564)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:844)

My Python code is the following:

dataframe_mysql = sqlContext.read.format("jdbc").options(
    url="jdbc:mysql://XXXXXXXXX:PWDDDDDD@XXXXXXXXJDDD/" + newDatabaseName,
    driver="com.mysql.jdbc.Driver",
    dbtable="Scores",
    user="XXXXXXXXX",
    password="PWDDDDDD"
).load()

1 Answer

Download the MySQL JDBC driver (Connector/J) and pass it with --jars in the spark-submit command, for example:

spark-submit --jars /path/to/mysql-connector-java.jar spark_code.py
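If you would rather configure this in code than on the command line, the jar can also be registered when the SparkSession is built. Below is a minimal sketch, not the answerer's exact setup: the jar path, host, database, and credentials are placeholder assumptions. Note that with Connector/J 8.x the driver class is com.mysql.cj.jdbc.Driver; com.mysql.jdbc.Driver is the legacy 5.x name.

from pyspark.sql import SparkSession

# Make the MySQL JDBC jar visible to the driver and executors.
# The path below is an assumption -- point it at your downloaded Connector/J jar.
spark = (
    SparkSession.builder
    .appName("mysql-read")
    .config("spark.jars", "/path/to/mysql-connector-java.jar")
    .getOrCreate()
)

newDatabaseName = "mydb"  # placeholder database name

# Standard JDBC URL form: host and port, credentials passed as options
# rather than embedded in the URL.
dataframe_mysql = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://HOST:3306/" + newDatabaseName)
    .option("driver", "com.mysql.jdbc.Driver")
    .option("dbtable", "Scores")
    .option("user", "XXXXXXXXX")
    .option("password", "PWDDDDDD")
    .load()
)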

Hassan Ali
  • But I want to run a script using Spyder. – mike.depetriconi Oct 26 '19 at 16:07
  • Set 'PYTHONSTARTUP=/path/to/script/below' inside the project's properties section, and build the Spark submit options in that startup script: DRIVER_JAVA_OPTIONS = "'" + DRIVER_JAVA_OPTIONS + "'"; PYSPARK_SUBMIT_ARGS = ' --master ' + MASTER (remember to set MASTER on the UNIX CLI or in the IDE!); PYSPARK_SUBMIT_ARGS += ' --driver-java-options ' + DRIVER_JAVA_OPTIONS (built above). – Hassan Ali Oct 26 '19 at 18:33
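Putting that comment into a runnable form: when launching from an IDE such as Spyder, PYSPARK_SUBMIT_ARGS has to be set before the first SparkSession starts the JVM. A minimal sketch, with the jar path assumed rather than taken from the comment:

import os
from pyspark.sql import SparkSession

# Must be set before the JVM is launched, i.e. before any SparkSession exists.
# The trailing 'pyspark-shell' token is required by PySpark;
# the jar path is an assumption -- adjust it to your Connector/J location.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--jars /path/to/mysql-connector-java.jar pyspark-shell"
)

spark = SparkSession.builder.appName("mysql-read").getOrCreate()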