8

Hi, I have a DataFrame as shown below:

ID       X        Y
1      1234      284
1      1396      179
2      8620      178
3      1620      191
3      8820      828

I want to split this DataFrame into multiple DataFrames based on ID. So for this example there will be 3 DataFrames. One way to achieve this is to run a filter operation in a loop. However, I would like to know if it can be done in a much more efficient way.
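
For reference, here is a rough sketch of the filter-in-a-loop approach (assuming an existing SparkSession named spark and the sample data above):

# Rough sketch of the filter-in-a-loop approach; assumes an existing SparkSession named `spark`.
df = spark.createDataFrame(
    [(1, 1234, 284), (1, 1396, 179), (2, 8620, 178), (3, 1620, 191), (3, 8820, 828)],
    ["ID", "X", "Y"])

# Collect the distinct IDs to the driver, then build one filtered DataFrame per ID.
ids = [row["ID"] for row in df.select("ID").distinct().collect()]
dfs = {i: df.filter(df["ID"] == i) for i in ids}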

sjishan
  • Possible duplicate of [How can I split a dataframe into dataframes with same column values in SCALA and SPARK](http://stackoverflow.com/questions/31669308/how-can-i-split-a-dataframe-into-dataframes-with-same-column-values-in-scala-and) – James Tobin May 09 '17 at 18:57
  • Yes. But I am looking for a pyspark version. – sjishan May 09 '17 at 19:06
  • A more optimal solution is possible if the data is stored partitioned by that column; then the calculation can be performed in parallel across the cluster. – Ankit Kumar Namdeo May 10 '17 at 08:27
  • This is exactly what I am trying to do. So far I am using partitionBy to store the data and load it. I would like to know whether, after doing partitionBy, I can split the DataFrame into multiple DataFrames based on the partitions (see the sketch below). – sjishan May 10 '17 at 23:54
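
Roughly, the partitionBy route from the last comment could look like the sketch below (the path /tmp/by_id is only an illustrative placeholder; assumes an existing SparkSession named spark):

# Write the data partitioned by ID; each ID value gets its own sub-directory.
df.write.partitionBy("ID").mode("overwrite").parquet("/tmp/by_id")

# A single ID can then be loaded on its own, without scanning the other groups.
# The basePath option keeps the ID column in the result even though it only
# appears in the directory name.
df_id1 = spark.read.option("basePath", "/tmp/by_id").parquet("/tmp/by_id/ID=1")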

2 Answers

9
#initialize spark dataframe
df = sc.parallelize([ (1,1234,282),(1,1396,179),(2,8620,178),(3,1620,191),(3,8820,828) ] ).toDF(["ID","X","Y"])

#get the list of unique ID values ; there's probably a better way to do this, but this was quick and easy
listids = [x.asDict().values()[0] for x in df.select("ID").distinct().collect()]
#create list of dataframes by IDs
dfArray = [df.where(df.ID == x) for x in listids]

dfArray[0].show()
+---+----+---+
| ID|   X|  Y|
+---+----+---+
|  1|1234|282|
|  1|1396|179|
+---+----+---+
dfArray[1].show()
+---+----+---+
| ID|   X|  Y|
+---+----+---+
|  2|8620|178|
+---+----+---+

dfArray[2].show()
+---+----+---+
| ID|   X|  Y|
+---+----+---+
|  3|1620|191|
|  3|8820|828|
+---+----+---+
James Tobin
  • You are looping. I think this is the closest to what I am seeking. http://stackoverflow.com/questions/41663985/spark-dataframe-how-to-efficiently-split-dataframe-for-each-group-based-on-same But there was I/O time associated with it. – sjishan May 09 '17 at 19:26
  • if you want to "get something 'for' each something", there is going to be an inherent loop Somewhere – James Tobin May 09 '17 at 19:29
  • True, but you can map the task to different partitions and get a list of DFs. That is what I am trying to do. – sjishan May 09 '17 at 20:23
5

The answer of @James Tobin needs to be altered a tiny bit if you are working with Python 3.x, as dict.values returns a dict_values object instead of a list. A quick workaround is to wrap it in the list function:

listids = [list(x.asDict().values())[0] 
           for x in df.select("ID").distinct().collect()]
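
As an alternative sketch, each collected Row can also be indexed by column name directly, which avoids the asDict()/values() step entirely:

# Alternative: index each Row by column name instead of converting it to a dict.
listids = [row["ID"] for row in df.select("ID").distinct().collect()]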

Posting as a separate answer as I do not have the reputation required to put a comment on his answer.

rpanai
Bebeerna