
I have a DataFrame and I want to add a new column, but not one based on an existing column. What should I do?

This is my dataframe:

+----+
|time|
+----+
|   1|
|   4|
|   3|
|   2|
|   5|
|   7|
|   3|
|   5|
+----+

This is my expected result:

+----+-----+  
|time|index|  
+----+-----+  
|   1|    1|  
|   4|    2|  
|   3|    3|  
|   2|    4|  
|   5|    5|  
|   7|    6|  
|   3|    7|  
|   5|    8|  
+----+-----+  
zero323
mentongwu

2 Answers


Using the RDD method zipWithIndex may be what you want.

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val newRdd = yourDF.rdd.zipWithIndex.map { case (r: Row, id: Long) => Row.fromSeq(r.toSeq :+ id) }
val schema = StructType(Array(StructField("time", IntegerType, nullable = true), StructField("index", LongType, nullable = true)))
val newDF = spark.createDataFrame(newRdd, schema)
newDF.show
+----+-----+
|time|index|
+----+-----+
|   1|    0|
|   4|    1|
|   3|    2|
|   2|    3|
|   5|    4|
|   7|    5|
|   3|    6|
|   5|    7|
+----+-----+

I assume your time column is IntegerType here. Note that zipWithIndex starts at 0; append id + 1 instead of id if you want a 1-based index as in your expected result.
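The indexing semantics are the same as plain Scala's zipWithIndex; a minimal sketch (on an ordinary Seq, outside Spark) showing the 0-based pairing and the + 1 shift to reach the 1-based index in the question:

```scala
// zipWithIndex pairs each element with its 0-based position;
// adding 1 yields the 1-based index from the expected output.
val times = Seq(1, 4, 3, 2, 5, 7, 3, 5)
val indexed = times.zipWithIndex.map { case (t, i) => (t, i + 1) }
// indexed: Seq((1,1), (4,2), (3,3), (2,4), (5,5), (7,6), (3,7), (5,8))
```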

neilron

Rather than using a Window function or converting to an RDD and using zipWithIndex, both of which are slower, you can use the built-in function monotonically_increasing_id:

import org.apache.spark.sql.functions._
df.withColumn("index", monotonically_increasing_id())

Note that the generated IDs are guaranteed to be monotonically increasing and unique, but not consecutive, so this won't give you the exact 1, 2, 3, ... sequence from the question when the DataFrame has more than one partition. Hope this helps!
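To see why the IDs are not consecutive: per the Spark docs, the current implementation puts the partition ID in the upper 31 bits and the record number within each partition in the lower 33 bits. A plain-Scala sketch of that scheme (monotonicId is a hypothetical helper for illustration, not a Spark API):

```scala
// Sketch of how monotonically_increasing_id composes an ID:
// upper 31 bits = partition ID, lower 33 bits = row number within the partition.
def monotonicId(partitionId: Long, rowInPartition: Long): Long =
  (partitionId << 33) | rowInPartition

// Partition 0 yields 0, 1, 2, ...; partition 1 starts at 2^33 = 8589934592,
// so IDs jump at every partition boundary.
```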

koiralo