
I am using a standalone cluster on my local Windows machine and trying to load data from one of our servers using the following code -

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="jdbc", url="jdbc:postgresql://host/dbname", dbtable="schema.tablename")

I have set SPARK_CLASSPATH as -

os.environ['SPARK_CLASSPATH'] = "C:\Users\ACERNEW3\Desktop\Spark\spark-1.3.0-bin-hadoop2.4\postgresql-9.2-1002.jdbc3.jar"

While executing sqlContext.load, it throws an error saying "No suitable driver found for jdbc:postgresql". I have tried searching the web, but have not been able to find a solution.

Soni Shashank
  • Its "No suitable driver found for jdbc:postgresql" only.. updated question. – Soni Shashank Apr 16 '15 at 09:22
  • 1
    Well in that case the required jar file with the driver is not available. –  Apr 16 '15 at 09:23
  • The required jar file is present, but somehow Spark is not able to recognize it. There is some issue regarding SPARK_CLASSPATH; I am not sure how to set it. – Soni Shashank Apr 16 '15 at 09:26
  • _"..\postgresql-9.2-1002.jdbc3"_ doesn't sound like the name of a jar file as they usually end in `.jar`. You need to add the jar file to the classpath, not the folder containing the jar file. – Mark Rotteveel Apr 18 '15 at 16:04
  • added that Mark but still not working... – Soni Shashank Apr 20 '15 at 06:15
  • how are you running your script? – eliasah Apr 23 '15 at 11:52
  • I am not running any script; I am simply using the pyspark shell. Please refer to the detailed question here: http://stackoverflow.com/questions/29821518/apache-spark-jdbc-connection-not-working – Soni Shashank Apr 23 '15 at 12:00

2 Answers


Maybe this will be helpful.

In my environment, SPARK_CLASSPATH contains the path to the PostgreSQL connector:

from pyspark import SparkContext, SparkConf
from pyspark.sql import DataFrameReader, SQLContext
import os

sparkClassPath = os.getenv('SPARK_CLASSPATH', '/path/to/connector/postgresql-42.1.4.jar')

# Populate configuration
conf = SparkConf()
conf.setAppName('application')
conf.set('spark.jars', 'file:%s' % sparkClassPath)
conf.set('spark.executor.extraClassPath', sparkClassPath)
conf.set('spark.driver.extraClassPath', sparkClassPath)
# Uncomment line below and modify ip address if you need to use cluster on different IP address
#conf.set('spark.master', 'spark://127.0.0.1:7077')

sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

url = 'postgresql://127.0.0.1:5432/postgresql'
properties = {'user':'username', 'password':'password'}

df = DataFrameReader(sqlContext).jdbc(url='jdbc:%s' % url, table='tablename', properties=properties)

df.printSchema()
df.show()

This piece of code allows you to use PySpark wherever you need it. For example, I've used it in a Django project.
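If you are on Spark 1.4 or later, the same read can also be expressed through sqlContext.read instead of constructing a DataFrameReader directly. This is only a sketch reusing the placeholder URL, table, and credentials from the code above:

# Sketch only: sqlContext.read.jdbc is available from Spark 1.4 onwards.
# The URL, table name, and credentials are the same placeholders as above.
df = sqlContext.read.jdbc(
    url='jdbc:postgresql://127.0.0.1:5432/postgresql',
    table='tablename',
    properties={'user': 'username', 'password': 'password'}
)

df.printSchema()
df.show()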

avkghost

I had the same problem with MySQL, and was never able to get it to work with the SPARK_CLASSPATH approach. However, I did get it to work with extra command-line arguments; see the answer to this question.

To avoid having to click through to get it working, here's what you have to do:

pyspark --conf spark.executor.extraClassPath=<jdbc.jar> --driver-class-path <jdbc.jar> --jars <jdbc.jar> --master <master-URL>
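For illustration, here is a rough sketch of what that launch and a follow-up read could look like with the PostgreSQL driver jar from the question; the jar path, host, database, master URL, and credentials are all assumptions, not tested values.

# At the OS shell (single line; the jar path and master are assumptions):
# pyspark --conf spark.executor.extraClassPath=postgresql-9.2-1002.jdbc3.jar --driver-class-path postgresql-9.2-1002.jdbc3.jar --jars postgresql-9.2-1002.jdbc3.jar --master local[*]

# Then inside the PySpark shell, using the Spark 1.3 load API from the question.
# The explicit 'driver' option is an extra hint to the JDBC source; it may not be
# strictly required once the jar is on both the driver and executor classpaths.
df = sqlContext.load(source="jdbc",
                     url="jdbc:postgresql://host/dbname?user=username&password=password",
                     dbtable="schema.tablename",
                     driver="org.postgresql.Driver")

df.printSchema()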
8forty