
I want to change the schema of an existing dataframe, but while changing the schema I'm experiencing an error. Is it possible to change the existing schema of a dataframe?

val customSchema = StructType(
  Array(
    StructField("data_typ_cd", StringType, nullable = false),
    StructField("data_typ_desc", StringType, nullable = false),
    StructField("proc_dt", IntegerType, nullable = false),
    StructField("cyc_dt", DateType, nullable = false)
  )
)

val readDF=
+------------+--------------------+-----------+--------------------+
|DatatypeCode|         Description|monthColNam|     timeStampColNam|
+------------+--------------------+-----------+--------------------+
|       03099|Volumetric/Expand...|     201867|2018-05-31 18:25:...|
|       03307|  Elapsed Day Factor|     201867|2018-05-31 18:25:...|
+------------+--------------------+-----------+--------------------+

val rows= readDF.rdd
val readDF1 = sparkSession.createDataFrame(rows,customSchema)

expected result

val newdf=
    +------------+--------------------+-----------+--------------------+
    |data_typ_cd |       data_typ_desc|proc_dt    |     cyc_dt         |
    +------------+--------------------+-----------+--------------------+
    |       03099|Volumetric/Expand...|     201867|2018-05-31 18:25:...|
    |       03307|  Elapsed Day Factor|     201867|2018-05-31 18:25:...|
    +------------+--------------------+-----------+--------------------+

Any help will be appreciated.

user9318576

2 Answers


You cannot change the schema like this. The schema object passed to createDataFrame has to match the data, not the other way around.
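One way to reconcile the two (a sketch added here, not code from the original answer) is to make the data match the target schema by casting the existing columns, and to apply the new names with aliases at the same time. The target column names below follow the asker's expected output:

```scala
// Sketch: cast each column to its target type and alias it, so the
// resulting dataframe's schema matches what was wanted, without
// passing a mismatched StructType to createDataFrame.
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{DateType, IntegerType}

val newdf = readDF.select(
  col("DatatypeCode").alias("data_typ_cd"),
  col("Description").alias("data_typ_desc"),
  col("monthColNam").cast(IntegerType).alias("proc_dt"),
  col("timeStampColNam").cast(DateType).alias("cyc_dt")
)
```

Note that `cast` produces null (rather than failing) for values that cannot be converted, so it is worth checking the result for unexpected nulls.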

user9876218

You can do something like this to change the datatype from one type to another.

I have created a dataframe similar to yours, as below:

import sparkSession.sqlContext.implicits._
import org.apache.spark.sql.types._

var df = Seq(("03099","Volumetric/Expand...", "201867", "2018-05-31 18:25:00"),("03307","Elapsed Day Factor", "201867", "2018-05-31 18:25:00"))
  .toDF("DatatypeCode","data_typ", "proc_date", "cyc_dt")

df.printSchema()
df.show()

This gives me the following output:

root
 |-- DatatypeCode: string (nullable = true)
 |-- data_typ: string (nullable = true)
 |-- proc_date: string (nullable = true)
 |-- cyc_dt: string (nullable = true)

+------------+--------------------+---------+-------------------+
|DatatypeCode|            data_typ|proc_date|             cyc_dt|
+------------+--------------------+---------+-------------------+
|       03099|Volumetric/Expand...|   201867|2018-05-31 18:25:00|
|       03307|  Elapsed Day Factor|   201867|2018-05-31 18:25:00|
+------------+--------------------+---------+-------------------+

As you can see in the schema above, all the columns are of type String. Now, to rename the column DatatypeCode, change proc_date to Integer type, and change cyc_dt to Date type, I do the following:

df = df.withColumnRenamed("DatatypeCode", "data_type_code")

df = df.withColumn("proc_date_new", df("proc_date").cast(IntegerType)).drop("proc_date")

df = df.withColumn("cyc_dt_new", df("cyc_dt").cast(DateType)).drop("cyc_dt")

and when you check the schema of this dataframe

df.printSchema()

then it gives the following output with the new column names:

root
 |-- data_type_code: string (nullable = true)
 |-- data_typ: string (nullable = true)
 |-- proc_date_new: integer (nullable = true)
 |-- cyc_dt_new: date (nullable = true)
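If you prefer, the three separate `withColumnRenamed`/`withColumn`/`drop` steps above can be collapsed into a single `select` (a sketch, using the same column names as above):

```scala
// One-pass equivalent of the rename + cast + drop steps:
// each output column is named and typed in one expression.
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{DateType, IntegerType}

val df2 = df.select(
  col("DatatypeCode").alias("data_type_code"),
  col("data_typ"),
  col("proc_date").cast(IntegerType).alias("proc_date_new"),
  col("cyc_dt").cast(DateType).alias("cyc_dt_new")
)
```

This avoids creating an intermediate dataframe per step and keeps the whole transformation as one expression.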
Prasad Khode
    Prefer `col("columnName")` to `df("columnName")`, so you can avoid internal references to your dataframe's name, and you can write code in a fluent fashion as a single expression. – Michele Piccolini Jun 28 '21 at 09:10
    yes, this was answered a very long time back, currently, I am using the same way that you are referring to... – Prasad Khode Jun 29 '21 at 04:27