I can read a file into a Spark DataFrame and convert it to a Dataset of my custom case class like this:
import spark.implicits._ // provides the Encoder[Gizmo] that .as needs
spark.read.csv("path/to/file").as[Gizmo]
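(Here Gizmo is just an ordinary case class whose fields mirror the file's columns; the fields below are invented purely for illustration.)

case class Gizmo(name: String, price: Double)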
But how can I convert a single Spark Row object to its equivalent case class? (If you're wondering why I want to do this, please consult this question.) Clearly Spark knows how to do this, but I don't see any straightforward way of accomplishing it, short of wrapping the Row in an RDD of length 1 and then converting back.
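For reference, that roundabout detour looks roughly like this (a sketch; it assumes the Row still carries its schema, and rowToGizmo is just a name I made up):

import org.apache.spark.sql.{Row, SparkSession}

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Wrap the single Row in a one-element RDD, re-attach its schema to get
// a DataFrame, then decode that back into the case class.
def rowToGizmo(row: Row): Gizmo =
  spark.createDataFrame(spark.sparkContext.parallelize(Seq(row)), row.schema)
    .as[Gizmo]
    .head()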
What I want is a one-liner in the spirit of:

row.as[Gizmo] // doesn't compile: Row has no .as method. What goes here?