
Is there any lib that can replace special characters to ASCII equivalents, like:

"Cześć"

to:

"Czesc"

I can, of course, create a map:

{'ś':'s', 'ć': 'c'}

and use some replace function. But I don't want to hardcode all the equivalents into my program if there is already a function that does this.

MERose
Tomasz Wysocki

6 Answers

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import unicodedata
text = u'Cześć'
print unicodedata.normalize('NFD', text).encode('ascii', 'ignore')
nosklo
  • 'NFKD' would give you ASCII output more often than 'NFD' would. – dan04 Jul 12 '10 at 06:11
  • It doesn't work for all cases, e.g. `(VW Polo) - Zapłon Jak sprawdzić czy działa pompa wspomagania?` converts to `(VW Polo) - Zapon jak sprawdzic czy dziaa pompa wspomagania?` – Szymon Roziewski Apr 30 '15 at 15:46
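For Python 3, the same approach needs an extra decode() to get a str back; a sketch using 'NFKD' as the first comment suggests:

```python
import unicodedata

text = 'Cześć'
# NFKD splits each accented letter into a base letter plus combining marks,
# which encode('ascii', 'ignore') then drops.
ascii_text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')
print(ascii_text)  # Czesc
```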

You can get most of the way by doing:

import unicodedata

def strip_accents(text):
    return ''.join(c for c in unicodedata.normalize('NFKD', text) if unicodedata.category(c) != 'Mn')

Unfortunately, there exist accented Latin letters that cannot be decomposed into an ASCII letter + combining marks. You'll have to handle them manually. These include:

  • Æ → AE
  • Ð → D
  • Ø → O
  • Þ → TH
  • ß → ss
  • æ → ae
  • ð → d
  • ø → o
  • þ → th
  • Œ → OE
  • œ → oe
  • ƒ → f
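The two steps can be combined in one helper; this is a sketch that builds a str.translate table from the list above (the name asciify is mine):

```python
import unicodedata

# Letters that NFKD cannot decompose, taken from the list above.
LATIN_MAP = str.maketrans({
    'Æ': 'AE', 'Ð': 'D', 'Ø': 'O', 'Þ': 'TH', 'ß': 'ss',
    'æ': 'ae', 'ð': 'd', 'ø': 'o', 'þ': 'th',
    'Œ': 'OE', 'œ': 'oe', 'ƒ': 'f',
})

def asciify(text):
    # Replace the non-decomposable letters first, then strip combining marks.
    text = text.translate(LATIN_MAP)
    decomposed = unicodedata.normalize('NFKD', text)
    return ''.join(c for c in decomposed if unicodedata.category(c) != 'Mn')

print(asciify('Ærøskøbing straße'))  # AEroskobing strasse
```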
dan04

The package unidecode worked best for me:

from unidecode import unidecode
text = "Björn, Łukasz and Σωκράτης."
print(unidecode(text))
# ==> Bjorn, Lukasz and Sokrates.

You might need to install the package:

pip install unidecode

The above solution is easier and more robust than encoding (and decoding) the output of unicodedata.normalize(), as suggested by other answers.

# This doesn't work as expected:
ret = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore')
print(ret)
# ==> b'Bjorn, ukasz and .'
# Besides not supporting all characters, the returned value is a
# bytes object in python3. To yield a str type:
ret = ret.decode("utf8") # (not required in python2)
normanius
  • It translates "ß" into "ss", but "ä" into "a", not "ae". – Robin Dinse Jun 05 '20 at 18:09
  • @RobinDinse This is intentional; see the documentation of [unidecode](https://pypi.org/project/Unidecode/) for the reasoning behind this. You can always replace the three umlauts äöü yourself prior to passing a string to unidecode. – normanius Jun 05 '20 at 18:37
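The pre-replacement suggested in the comment above can be done with a translation table before calling unidecode; a stdlib-only sketch (the helper name is mine):

```python
# Expand German umlauts before handing the text to unidecode, which would
# otherwise map 'ä' to 'a' rather than the conventional 'ae'.
UMLAUT_MAP = str.maketrans({
    'ä': 'ae', 'ö': 'oe', 'ü': 'ue',
    'Ä': 'Ae', 'Ö': 'Oe', 'Ü': 'Ue',
})

def expand_umlauts(text):
    return text.translate(UMLAUT_MAP)

print(expand_umlauts('Grüße aus Köln'))  # Grueße aus Koeln
```

The result can then be passed to unidecode for the remaining characters.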

Try the trans package. It looks very promising and supports Polish.

Marcin Wojnarski

I did it this way:

import unicodedata

# Map each Polish diacritic letter (as a single NFC code point) to its ASCII equivalent.
POLISH_CHARACTERS = {
    'ą': 'a', 'ć': 'c', 'ę': 'e', 'ł': 'l', 'ń': 'n', 'ó': 'o', 'ś': 's', 'ź': 'z', 'ż': 'z',
    'Ą': 'A', 'Ć': 'C', 'Ę': 'E', 'Ł': 'L', 'Ń': 'N', 'Ó': 'O', 'Ś': 'S', 'Ź': 'Z', 'Ż': 'Z',
}

def encodePL(text):
    # NFC normalization composes each letter into a single code point,
    # so a straight character-by-character lookup works.
    nrmtxt = unicodedata.normalize('NFC', text)
    return ''.join(POLISH_CHARACTERS.get(c, c) for c in nrmtxt)

when executed:

encodePL(u'ąćęłńóśźż ĄĆĘŁŃÓŚŹŻ')

it will produce output like this:

u'acelnoszz ACELNOSZZ'

This works fine for me - ;D


The unicodedata.normalize gimmick can best be described as half-assci. Here is a robust approach which includes a map for letters with no decomposition. Note the additional map entries in the comments.

John Machin