
I am trying to create a DataFrame from a list of data and also want to apply a schema to it. From the Spark Scala docs I am trying to use this createDataFrame signature, which accepts a list of Rows and a schema as a StructType:

def createDataFrame(rows: List[Row], schema: StructType): DataFrame

Sample code I am trying is below:

import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
val simpleData = List(
  Row("James", "Sales", 3000),
  Row("Michael", "Sales", 4600),
  Row("Robert", "Sales", 4100),
  Row("Maria", "Finance", 3000)
)

val schema = StructType(Array(
  StructField("name", StringType, false),
  StructField("department", StringType, false),
  StructField("salary", IntegerType, false)
))

val df = spark.createDataFrame(simpleData, schema)

But I am getting the below error:

command-3391230614683259:15: error: overloaded method value createDataFrame with alternatives:
  (data: java.util.List[_],beanClass: Class[_])org.apache.spark.sql.DataFrame <and>
  (rdd: org.apache.spark.api.java.JavaRDD[_],beanClass: Class[_])org.apache.spark.sql.DataFrame <and>
  (rdd: org.apache.spark.rdd.RDD[_],beanClass: Class[_])org.apache.spark.sql.DataFrame <and>
  (rows: java.util.List[org.apache.spark.sql.Row],schema: org.apache.spark.sql.types.StructType)org.apache.spark.sql.DataFrame <and>
  (rowRDD: org.apache.spark.api.java.JavaRDD[org.apache.spark.sql.Row],schema: org.apache.spark.sql.types.StructType)org.apache.spark.sql.DataFrame <and>
  (rowRDD: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row],schema: org.apache.spark.sql.types.StructType)org.apache.spark.sql.DataFrame
 cannot be applied to (List[org.apache.spark.sql.Row], org.apache.spark.sql.types.StructType)
val df = spark.createDataFrame(simpleData,schema)

Please suggest what I am doing wrong.

1 Answer


The error is telling you that it needs a Java List, not a Scala List: the List in that createDataFrame signature is actually java.util.List. You can convert with asJava:

import scala.jdk.CollectionConverters._

val df = spark.createDataFrame(simpleData.asJava, schema)
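
As a quick sanity check (assuming the sample data above), you can verify that the schema was applied:

df.printSchema() // name, department, salary with the declared types
df.show()        // displays the four sample rows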

See this question for alternatives to CollectionConverters if you are using a Scala version earlier than 2.13.
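
For example, on Scala 2.12 and earlier the equivalent import is JavaConverters, which provides the same asJava conversion (a minimal sketch; everything else stays the same):

import scala.collection.JavaConverters._

val df = spark.createDataFrame(simpleData.asJava, schema)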

Another option is to pass an RDD:

val df = spark.createDataFrame(sc.parallelize(simpleData), schema)

where sc is the SparkContext object.
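
Outside an interactive shell where sc is not predefined, you can get it from the SparkSession instead (a minimal sketch, reusing the simpleData and schema definitions above):

val rowRDD = spark.sparkContext.parallelize(simpleData)
val df = spark.createDataFrame(rowRDD, schema)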

jrook