Title says it all:
Is there an equivalent to the Spark SQL LATERAL VIEW clause in the Spark DataFrame API? I want to generate a column from a UDF that returns a struct holding several columns' worth of data, and then laterally spread the struct's fields into the parent DataFrame as individual columns.
Something equivalent to df.select(expr("LATERAL VIEW udf(col1,col2...coln)"))
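For concreteness, here is a minimal sketch of the kind of thing I mean (the case class Parsed, the combine UDF, and the column names are just made-up placeholders): the UDF returns a struct, and the final select shows the flattened shape I'm hoping a LATERAL-VIEW-style operation could produce directly.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

// Placeholder struct type returned by the UDF.
case class Parsed(sum: Int, diff: Int, product: Int)

object LateralViewSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("lateral-view-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, 2), (3, 4)).toDF("col1", "col2")

    // A UDF that returns a case class yields a single struct column.
    val combine = udf((a: Int, b: Int) => Parsed(a + b, a - b, a * b))

    // Desired end state: the struct's fields spread out as top-level columns
    // alongside the originals, i.e. col1, col2, sum, diff, product.
    val result = df
      .withColumn("parsed", combine(col("col1"), col("col2")))
      .select(col("*"), col("parsed.*"))
      .drop("parsed")

    result.show()
    spark.stop()
  }
}
```

I realize the withColumn-plus-parsed.* expansion above may effectively be the workaround; the question is whether the API exposes anything closer to LATERAL VIEW itself.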