(Using Apache Spark version 1.6) I referred to the link below to attempt the unpivot feature: unpivot in spark-sql/pyspark
The issue is that I'm getting a runtime exception when executing:
df.select($"A", expr("stack(2, 'large', large, 'small', small) as (c, d)")).where("c is not null or b is not null")
Exception:
User class threw exception: java.lang.Exception: Application failed with 1 errors: Action UnPivot3: java.lang.RuntimeException: [1.10] failure: identifier expected
stack(2, 'large', large,'small', small) as (c, d)
^
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.SqlParser$.parseExpression(SqlParser.scala:49)
at org.apache.spark.sql.functions$.expr(functions.scala:1076)
1) Any idea how to resolve this? 2) Any pointers to documentation for stack() would be great.
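For context, the error above comes from the expression parser, which suggests the 1.6 parser behind expr() does not recognize stack(). As a workaround I can fall back to a union-based unpivot; here is a minimal sketch of what I mean, assuming df has columns A, large, and small as in the query above (and import sqlContext.implicits._ for the $ syntax):

// Union-based unpivot sketch that avoids stack() entirely,
// assuming df has columns A, large, and small.
import org.apache.spark.sql.functions.lit

val unpivoted = df
  .select($"A", lit("large").as("c"), $"large".as("d"))
  .unionAll(df.select($"A", lit("small").as("c"), $"small".as("d")))
  .where("d is not null")

This parses on 1.6, but I'd still prefer the stack() approach if it's supported somehow.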