
I have a larger query with several window functions, structured via a WITH clause. The query runs fine against Amazon RDS and Amazon Redshift databases when executed from a Python script with the Pandas SQL connector or from any SQL browser. But it fails when I run it via the Spark (PySpark) JDBC connector, and I can't find any hint as to why Spark won't accept it. Any hint welcome. Thanks, Alex

I tried the SQL from Pandas and several SQL browsers -> it works well. I tried the Spark SQL connector with other SQL statements without the WITH clause syntax -> it works well.

Below is a reduced code example.

The simplified SQL query:

mysql_test="""
WITH my_raw_table AS

(
    SELECT 
        created_utc || '@' || sub_order_nr AS order_column, 
        operation_type, 
        id_in,
        id_type_in,
        created_utc
    FROM sample.table
)

SELECT DISTINCT 
    operation_type
    ,ROW_NUMBER() OVER window_desc AS row_number
    ,FIRST_VALUE(created_utc) OVER window_desc AS created_utc_first
    ,FIRST_VALUE(created_utc) OVER window_asc AS created_utc_last
    ,FIRST_VALUE(order_column) OVER window_desc AS order_column_first
    ,FIRST_VALUE(order_column) OVER window_asc AS order_column_last
FROM my_raw_table
WINDOW
    window_desc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column DESC
        ),
    window_asc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column ASC
        )
ORDER BY 
    operation_type
    ,order_column_last
"""

Code that works with Pandas:

import pandas as pd

conn = my_modul.get_my_connection()

my_result = pd.read_sql(mysql_test, conn)

conn.close()

my_result.head()

Code that fails with the Spark JDBC reader:

conn = my_modul.get_my_connection()

my_result = spark.read.jdbc(url=conn['url'], table=mysql_test, properties=conn['properties'])

my_result.show()

The main issue is that it reports a syntax error at the WITH keyword:

Py4JJavaError: An error occurred while calling o551.jdbc.
: org.postgresql.util.PSQLException: ERROR: syntax error at or near "WITH"

and I do not understand why.

The full error message is:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-40-353e32a024e8> in <module>
     11 
     12 
---> 13 verbauwege_spark_sql = spark.read.jdbc(url=conn['url'], table=mysql_test, properties= conn['properties'])
     14 
     15 row_count=verbauwege_spark_sql.count()

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/pyspark/sql/readwriter.py in jdbc(self, url, table, column, lowerBound, upperBound, numPartitions, predicates, properties)
    554             jpredicates = utils.toJArray(gateway, gateway.jvm.java.lang.String, predicates)
    555             return self._df(self._jreader.jdbc(url, table, jpredicates, jprop))
--> 556         return self._df(self._jreader.jdbc(url, table, jprop))
    557 
    558 

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

~/anaconda3/envs/Spark_Python3/lib/python3.7/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o551.jdbc.
: org.postgresql.util.PSQLException: ERROR: syntax error at or near "WITH"
  Position: 15
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2468)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2211)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:309)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:446)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:370)
    at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:149)
    at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:108)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:61)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:238)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:745)

  • Spark puts the table name literally in a `SELECT ... FROM $table ...` statement and therefore you have to enclose the query in braces and turn it into a valid subquery that can go in place of the table name. – Hristo Iliev Aug 30 '19 at 12:03
  • @HristoIliev Ahhh thanks, this hint solved the problem – Alex Ortner Aug 30 '19 at 12:20

1 Answer


The solution is to enclose the full SQL in parentheses and give it an alias, so that Spark JDBC can handle it. As the comment above explains, Spark substitutes the table argument literally into a generated statement of the form `SELECT * FROM $table` (with a `WHERE 1=0` predicate for schema resolution), so the argument must be valid in place of a table name. A bare WITH therefore lands directly after `SELECT * FROM `, which is exactly the "Position: 15" in the error.

mysql_test="""
(
WITH my_raw_table AS

(
    SELECT 
        created_utc || '@' || sub_order_nr AS order_column, 
        operation_type, 
        id_in,
        id_type_in,
        created_utc
    FROM sample.table
)

SELECT DISTINCT 
    operation_type
    ,ROW_NUMBER() OVER window_desc AS row_number
    ,FIRST_VALUE(created_utc) OVER window_desc AS created_utc_first
    ,FIRST_VALUE(created_utc) OVER window_asc AS created_utc_last
    ,FIRST_VALUE(order_column) OVER window_desc AS order_column_first
    ,FIRST_VALUE(order_column) OVER window_asc AS order_column_last
FROM my_raw_table
WINDOW
    window_desc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column DESC
        ),
    window_asc AS (
        PARTITION BY operation_type,id_type_in,id_in
        ORDER BY order_column ASC
        )
ORDER BY 
    operation_type
    ,order_column_last
) as my_redshift_result
"""