38

How do I change the special characters to the usual alphabet letters? This is my dataframe:

In [56]: cities
Out[56]:

Table Code  Country         Year        City        Value       
240         Åland Islands   2014.0      MARIEHAMN   11437.0 1
240         Åland Islands   2010.0      MARIEHAMN   5829.5  1
240         Albania         2011.0      Durrës      113249.0
240         Albania         2011.0      TIRANA      418495.0
240         Albania         2011.0      Durrës      56511.0 

I want it to look like this:

In [56]: cities
Out[56]:

Table Code  Country         Year        City        Value       
240         Aland Islands   2014.0      MARIEHAMN   11437.0 1
240         Aland Islands   2010.0      MARIEHAMN   5829.5  1
240         Albania         2011.0      Durres      113249.0
240         Albania         2011.0      TIRANA      418495.0
240         Albania         2011.0      Durres      56511.0 
Marius
  • Related? http://stackoverflow.com/q/517923/1639625 – tobias_k Jun 20 '16 at 15:28
  • Possible duplicate of [What is the best way to remove accents in a Python unicode string?](https://stackoverflow.com/questions/517923/what-is-the-best-way-to-remove-accents-in-a-python-unicode-string) – phuclv Apr 04 '18 at 13:50

6 Answers

94

The pandas approach is to use the vectorised str.normalize combined with str.encode and str.decode:

In [60]:
df['Country'].str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8')

Out[60]:
0    Aland Islands
1    Aland Islands
2          Albania
3          Albania
4          Albania
Name: Country, dtype: object

So to do this for all str dtypes:

In [64]:
cols = df.select_dtypes(include=['object']).columns
df[cols] = df[cols].apply(lambda x: x.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8'))
df

Out[64]:
   Table Code        Country    Year       City      Value
0         240  Aland Islands  2014.0  MARIEHAMN  11437.0 1
1         240  Aland Islands  2010.0  MARIEHAMN  5829.5  1
2         240        Albania  2011.0     Durres   113249.0
3         240        Albania  2011.0     TIRANA   418495.0
4         240        Albania  2011.0     Durres    56511.0
EdChum
8

Example with a pandas Series:

import unidecode

def remove_accents(a):
    # each cell value is already a str in Python 3; unidecode transliterates it to ASCII
    return unidecode.unidecode(a)

df['column'] = df['column'].apply(remove_accents)

Here unidecode transliterates each value to its closest ASCII representation.
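
For example, a minimal self-contained sketch (the Series values are just the country names from the question):

import pandas as pd
import unidecode

s = pd.Series(['Åland Islands', 'Durrës'])
print(s.apply(unidecode.unidecode).tolist())
# ['Aland Islands', 'Durres']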

Caio Andrian
4

This is for Python 2.7. For converting to ASCII you might want to try:

import unicodedata

unicodedata.normalize('NFKD', u"Durrës Åland Islands").encode('ascii','ignore')
'Durres Aland Islands'
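
On Python 3, encode() returns bytes, so a minimal sketch would decode back to str at the end:

import unicodedata

unicodedata.normalize('NFKD', "Durrës Åland Islands").encode('ascii', 'ignore').decode('ascii')
# 'Durres Aland Islands'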
advance512
1

I wanted to remove all the accents from the column names, so I used:

df.columns = df.columns.str.normalize('NFKD').str.encode('ascii',errors='ignore').str.decode('utf-8')
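
For example, a small sketch with made-up accented column names:

import pandas as pd

df = pd.DataFrame(columns=['País', 'Año', 'Región'])  # hypothetical column names
df.columns = df.columns.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8')
print(list(df.columns))
# ['Pais', 'Ano', 'Region']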
Joselin Ceron
0

This is a comparison of different methods, to help choose one depending on the use case. I personally tend to prefer unidecode, because it neither keeps non-ASCII characters unchanged nor silently drops them; it transliterates them to the closest ASCII equivalent.

from unidecode import unidecode
import unicodedata
import pandas as pd

def unicodedata_1(s):
    nfkd_form = unicodedata.normalize('NFKD', s)
    return ''.join([c for c in nfkd_form if not unicodedata.combining(c)])
def unicodedata_2(s):
    nfd_form = unicodedata.normalize('NFD', s)
    return ''.join(c for c in nfd_form if unicodedata.category(c) != 'Mn')


df = pd.DataFrame({'original': ['|ẞŁł|', '|ĄąčÖ|', '|_x-2|', '|©α|', '|值|']})

df['unidecode'] = df['original'].apply(unidecode)
df['str.normalize'] = df['original'].str.normalize('NFKD').str.encode('ascii', 'ignore').str.decode('utf-8')
df['unicodedata_1'] = df['original'].apply(unicodedata_1)
df['unicodedata_2'] = df['original'].apply(unicodedata_2)

print(df)
#   original unidecode str.normalize unicodedata_1 unicodedata_2
# 0    |ẞŁł|    |SsLl|            ||         |ẞŁł|         |ẞŁł|
# 1   |ĄąčÖ|    |AacO|        |AacO|        |AacO|        |AacO|
# 2   |_x-2|    |_x-2|        |_x-2|        |_x-2|        |_x-2|
# 3     |©α|    |(c)a|            ||          |©α|          |©α|
# 4      |值|    |Zhi |            ||           |值|           |值|
ZygD
-10

Use this code:

df['Country'] = df['Country'].str.replace(u"Å", "A")
df['City'] = df['City'].str.replace(u"ë", "e")

Of course, you would then have to repeat this for every special character and every column.
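
If you do go this route, a hedged sketch is to collect the replacements in a dict and loop over the columns (the mapping and column names below are only illustrative):

replacements = {'Å': 'A', 'ë': 'e'}
for col in ['Country', 'City']:
    for src, dst in replacements.items():
        df[col] = df[col].str.replace(src, dst, regex=False)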

Blind0ne