
I am stuck on a data transformation task in PySpark. I want to replace all values of one column in a DataFrame with the key-value pairs specified in a dictionary.

dict = {'A':1, 'B':2, 'C':3}

My df looks like this:

+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

Now I want to replace all values of col1 with the key-value pairs defined in dict.

Desired Output:

+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

I tried

df.na.replace(dict, 1).show()

but that also replaces the values in col2, which should stay untouched.

Thank you for your help. Greetings :)

getaway22
  • I believe that your problem is a use case for Spark broadcast variables. Check out https://spark.apache.org/docs/2.4.0/rdd-programming-guide.html#broadcast-variables – jonathan Dec 11 '18 at 12:03
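
A minimal sketch of the broadcast-variable approach suggested in the comment above, assuming an existing SparkSession named spark and the question's DataFrame df (the names mapping and b_map are illustrative):

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

mapping = {'A': 1, 'B': 2, 'C': 3}
b_map = spark.sparkContext.broadcast(mapping)  # shipped to each executor once

# Look up each value through the broadcast copy; missing keys become null
lookup = udf(lambda x: b_map.value.get(x), IntegerType())
df2 = df.withColumn('col1', lookup(df['col1']))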

3 Answers


Your data:

print(df)
DataFrame[col1: string, col2: string]

df.show()
+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

diz = {"A":1, "B":2, "C":3}

Convert the values of your dictionary from integer to string, so that you don't get errors from replacing values of a different type (col1 is a string column):

diz = {k:str(v) for k,v in diz.items()}

print(diz)
{'A': '1', 'C': '3', 'B': '2'}

Replace the values of col1:

df2 = df.na.replace(diz, 1, "col1")  # when to_replace is a dict, the value argument is ignored
print(df2)
DataFrame[col1: string, col2: string]

df2.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

If you need to cast the values from string to integer:

from pyspark.sql.types import IntegerType

df3 = df2.select(df2["col1"].cast(IntegerType()), df2["col2"])
print(df3)
DataFrame[col1: int, col2: string]

df3.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+
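
Equivalently, the cast can be done with withColumn, which keeps the other columns without listing them explicitly (a small sketch of the same step):

from pyspark.sql.types import IntegerType

# Same result as the select above: cast col1 in place, col2 is carried over
df3 = df2.withColumn("col1", df2["col1"].cast(IntegerType()))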
titiro89
  • What if there is a list of values against each key? How would I achieve that? – Aditya Aug 29 '18 at 09:59
  • I think that the question in your comment should be a separate Stack Overflow question, so you could provide specific examples of what you mean and receive a more accurate and complete answer – titiro89 Sep 03 '18 at 15:32

You can also use a simple UDF (a lambda wrapped in udf) to look up the dictionary values and update your DataFrame column:

+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

mapping = {'A': 1, 'B': 2, 'C': 3}  # renamed from dict to avoid shadowing the builtin
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

# UDF that looks up each value in the dictionary (missing keys become null)
user_func = udf(lambda x: mapping.get(x), IntegerType())
newdf = df.withColumn('col1', user_func(df.col1))

newdf.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

I hope this also works!
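
As a side note, the same lookup can be written without a Python UDF by building a literal map column with create_map, which lets Spark evaluate the mapping natively (a sketch using the same mapping dictionary):

from itertools import chain
from pyspark.sql.functions import create_map, lit

mapping = {'A': 1, 'B': 2, 'C': 3}
# Flatten the dict into alternating key/value literals: 'A', 1, 'B', 2, 'C', 3
mapping_expr = create_map([lit(x) for x in chain(*mapping.items())])

newdf = df.withColumn('col1', mapping_expr[df['col1']])

Keys missing from the dictionary come out as null here, matching the .get behaviour of the UDF.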

vikrant rana

Before replacing the values of column 1 in my df, I had to automate the generation of my dictionary (given the many keys). This was done as follows:

keys = sorted(df.select('col1').rdd.flatMap(lambda x: x).distinct().collect())

keys
['A', 'B', 'C']

import numpy

maxval = len(keys)
values = list(numpy.array(list(range(maxval))) + 1)

values
[1, 2, 3]

Making sure (as titiro89 mentions above) that the type of the 'new' values is the same as the type of the 'old' values (string in this case):

dct = {k:str(v) for k,v in zip(keys,values)}
print(dct)

{'A': '1', 'B': '2', 'C': '3'}

df2 = df.replace(dct, 1, "col1")
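
For reference, the same dictionary can also be built without numpy by numbering the sorted keys with enumerate (a sketch equivalent to the keys/values construction above):

# Values 1..len(keys) as strings, keyed by the sorted keys
dct = {k: str(v) for v, k in enumerate(keys, start=1)}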
Grant Shannon