
(Using Apache Spark version 1.6) I referred to the link below to attempt the unpivot feature: unpivot in spark-sql/pyspark

The issue is that I'm getting a runtime exception when executing:

df.select($"A", expr("stack(2, 'large', large, 'small', small) as (c, d)")).where("c is not null or b is not null")

Exception:

User class threw exception: java.lang.Exception: Application failed with 1 errors: Action UnPivot3: java.lang.RuntimeException: [1.10] failure: identifier expected
stack(2, 'large', large,'small', small) as (c, d)
^
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.SqlParser$.parseExpression(SqlParser.scala:49)
at org.apache.spark.sql.functions$.expr(functions.scala:1076)

1) Any idea how to resolve this? 2) Any pointers to documentation on stack() would be great.

Shabeel

1 Answer


The stack function was added in this commit: https://github.com/apache/spark/commit/d0d28507cacfca5919dbfb4269892d58b62e8662 for the Jira ticket: https://issues.apache.org/jira/browse/SPARK-16286

Its fix version is Spark 2.0, so you must upgrade to at least Spark 2.0 to use the stack function.
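For reference, once on Spark 2.0+ the expression from the question parses and runs as written; a minimal sketch, assuming a DataFrame df with columns A, large, and small:

import org.apache.spark.sql.functions.expr
import spark.implicits._ // for the $"..." syntax; assumes a SparkSession named spark

// stack(2, ...) turns the two (label, value) pairs into two rows per input row,
// aliased here as columns c (the label) and d (the value).
val unpivoted = df
  .select($"A", expr("stack(2, 'large', large, 'small', small) as (c, d)"))
  .where("c is not null or d is not null")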

T. Gawęda
  • Is there any alternative in 1.6 to achieve the same functionality? I can't change the Spark version that is deployed on the server. – Shabeel Jul 19 '17 at 11:05
  • @Shabeel I don't think so, but it's out of the question's scope ;) You can try the explode function; see the sketch below. – T. Gawęda Jul 19 '17 at 11:07
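A minimal sketch of the explode-based unpivot on Spark 1.6, assuming a DataFrame df with columns A, large, and small (the output names c and d are carried over from the question; large and small must have the same type):

import org.apache.spark.sql.functions.{array, explode, lit, struct}
import sqlContext.implicits._ // for the $"..." syntax; assumes a SQLContext named sqlContext

// Build one (label, value) struct per source column, explode the array so each
// struct becomes its own row, then flatten the struct fields back into columns.
val unpivoted = df
  .select($"A", explode(array(
    struct(lit("large").as("c"), $"large".as("d")),
    struct(lit("small").as("c"), $"small".as("d"))
  )).as("cd"))
  .select($"A", $"cd.c", $"cd.d")
  .where("d is not null")

An equivalent 1.6 approach is to select each (label, value) pair separately and combine the results with unionAll, which avoids the array/struct construction at the cost of scanning df once per unpivoted column.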