
Using pyspark:

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("spark play") \
    .getOrCreate()

# Read an entire table over JDBC; the connection details are placeholders.
df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:mysql://localhost:port") \
    .option("dbtable", "schema.tablename") \
    .option("user", "username") \
    .option("password", "password") \
    .load()

Rather than fetch "schema.tablename", I would prefer to grab the result set of a query.

– PBL

1 Answer


Same as in 1.x, you can pass any valid subquery as the dbtable argument, for example:

...
.option("dbtable", "(SELECT foo, bar FROM schema.tablename) AS tmp")
...
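
Spelled out against the reader from the question, a fuller sketch might look like this (the connection details remain the question's placeholders, and the alias tmp is arbitrary; Spark just needs something it can treat as a table name):

# Same JDBC reader as in the question, but with a subquery in place of
# the table name. The subquery must be parenthesized and aliased.
df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:mysql://localhost:port") \
    .option("dbtable", "(SELECT foo, bar FROM schema.tablename) AS tmp") \
    .option("user", "username") \
    .option("password", "password") \
    .load()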
– zero323
  • At least in `Scala` (not sure of `Python` / `R`), you can give `spark.read.jdbc(url, s"($sql) ql", properties)` where `sql` is a `String` containing your actual `SQL` *query* [`Spark 2.2.0`] – y2k-shubham Mar 01 '18 at 06:54
  • I get a null pointer exception when I try to use a custom sql, but table names work just fine :( This is using pyspark2, with teradata jdbc – suhprano Apr 09 '18 at 21:26
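
The first comment above shows the Scala shortcut; pyspark exposes the same reader as DataFrameReader.jdbc. A minimal sketch, assuming the same placeholder connection details and an already-built spark session:

sql = "SELECT foo, bar FROM schema.tablename"

# The query must be parenthesized and aliased so Spark can use it as a table.
df = spark.read.jdbc(
    url="jdbc:mysql://localhost:port",
    table="({}) AS tmp".format(sql),
    properties={"user": "username", "password": "password"},
)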