
Are there any standalone-ish solutions for normalizing international Unicode text to safe IDs and filenames in Python?

E.g. turn "My International Text: åäö" into "my-international-text-aao".

plone.i18n does a really good job, but unfortunately it depends on zope.security, zope.publisher, and some other packages, making it a fragile dependency.

Some operations that plone.i18n applies

– Mikko Ohtamaa
  • "My International Text: åäö" is a perfectly valid filename on all of the systems I use, so you might want to be a bit more specific. For example, exactly what characters do you want to (dis)allow? – Laurence Gonsalves Jan 28 '12 at 03:00
  • @LaurenceGonsalves It might be perfectly valid, but that doesn't mean it will necessarily survive a particular web server / web browser / web OS combo when downloading. When that bug report arrives it's usually faster to just strip the accents than try to figure out where the problem lies. – millimoose Jan 28 '12 at 03:09
  • possible duplicate of [What is the best way to remove accents in a python unicode string?](http://stackoverflow.com/questions/517923/what-is-the-best-way-to-remove-accents-in-a-python-unicode-string) – johnsyweb Jan 28 '12 at 03:11
  • Look at how [`unidecode`](http://pypi.python.org/pypi/Unidecode)`(`[`slugify`](https://github.com/mozilla/unicode-slugify/blob/master/slugify/__init__.py)`(u'My International Text: åäö'))` is implemented [ignore the Django dependency; it is not necessary for Unicode input]. – jfs Jan 28 '12 at 03:25
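A quick sketch of the approach that last comment points at, assuming the third-party Unidecode package is installed; the regex cleanup and the slugify_ascii name are my own:

import re
from unidecode import unidecode

def slugify_ascii(text, delim='-'):
    # Transliterate to ASCII, lowercase, then collapse runs of anything
    # that is not a letter or digit into the delimiter.
    text = unidecode(text).lower()
    return re.sub(r'[^a-z0-9]+', delim, text).strip(delim)

>>> slugify_ascii(u'My International Text: åäö')
'my-international-text-aao'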

5 Answers


What you want to do is also known as "slugifying" a string. Here's a possible solution:

import re
from unicodedata import normalize

_punct_re = re.compile(r'[\t !"#$%&\'()*\-/<=>?@\[\\\]^_`{|},.:]+')

def slugify(text, delim=u'-'):
    """Generates a slightly worse ASCII-only slug (Python 2)."""
    result = []
    for word in _punct_re.split(text.lower()):
        word = normalize('NFKD', word).encode('ascii', 'ignore')
        if word:
            result.append(word)
    return unicode(delim.join(result))

Usage:

>>> slugify(u'My International Text: åäö')
u'my-international-text-aao'

You can also change the delimiter:

>>> slugify(u'My International Text: åäö', delim='_')
u'my_international_text_aao'

Source: Generating Slugs

For Python 3: pastebin.com/ft7Yb3KS (thanks @MrPoxipol).
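In case the paste disappears, a Python 3 adaptation along the same lines could look like this (a sketch, not necessarily identical to the linked paste):

import re
from unicodedata import normalize

_punct_re = re.compile(r'[\t !"#$%&\'()*\-/<=>?@\[\\\]^_`{|},.:]+')

def slugify(text, delim='-'):
    """Generates a slightly worse ASCII-only slug (Python 3)."""
    result = []
    for word in _punct_re.split(text.lower()):
        # encode/ignore drops anything NFKD could not map to ASCII
        word = normalize('NFKD', word).encode('ascii', 'ignore').decode('ascii')
        if word:
            result.append(word)
    return delim.join(result)

>>> slugify('My International Text: åäö')
'my-international-text-aao'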

– juliomalegria

The way to solve this problem is to make a decision on which characters are allowed (different systems have different rules for valid identifiers).

Once you decide on which characters are allowed, write an allowed() predicate and a dict subclass for use with str.translate:

def makesafe(text, allowed, substitute=None):
    ''' Remove unallowed characters from text.
        If *substitute* is defined, then replace
        the character with the given substitute.
    '''
    class D(dict):
        def __getitem__(self, key):
            return key if allowed(chr(key)) else substitute
    return text.translate(D())

This function is very flexible. It lets you easily specify rules for deciding which text is kept and which text is either replaced or removed.

Here's a simple example using the rule "only allow characters that are in the Unicode category L":

import unicodedata

def allowed(character):
    return unicodedata.category(character).startswith('L')

print(makesafe('the*ides&of*march', allowed, '_'))
print(makesafe('the*ides&of*march', allowed))

That code produces safe output as follows:

the_ides_of_march
theidesofmarch
– Raymond Hettinger
  • Having substitute be a function of the non-allowed character would make this rather more flexible. Consider for example a perfectly valid Finnish word hääyöaie, and how it would be molested to something like hyaie or h--y-aie with your current substitution mechanism. – Tuure Laurinolli Jan 28 '12 at 10:27
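A small variation along the lines of the comment above, where substitute may also be a callable deciding what to do with each disallowed character (the ascii_letter and keep_base_letter helpers are my own; str.isascii needs Python 3.7+):

import unicodedata

def makesafe(text, allowed, substitute=None):
    ''' Remove unallowed characters from text.
        *substitute* may be a plain replacement, or a callable that
        receives the disallowed character and returns its replacement
        (a string, or None to drop it).
    '''
    class D(dict):
        def __getitem__(self, key):
            ch = chr(key)
            if allowed(ch):
                return key
            return substitute(ch) if callable(substitute) else substitute
    return text.translate(D())

def ascii_letter(ch):
    return ch.isascii() and ch.isalpha()

def keep_base_letter(ch):
    # Decompose the character and keep only its ASCII base letters, so
    # accented letters survive as their unaccented counterparts.
    base = ''.join(c for c in unicodedata.normalize('NFKD', ch) if ascii_letter(c))
    return base or None

print(makesafe('hääyöaie', ascii_letter, keep_base_letter))
# hääyöaie -> haayoaie, instead of dropping the accented vowels entirely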

The following will remove accents from whatever characters Unicode can decompose into a base character plus combining marks, discard any remaining non-ASCII characters it can't decompose, and replace whitespace with underscores:

# encoding: utf-8
from unicodedata import normalize
import re

original = u'ľ š č ť ž ý á í é'
decomposed = normalize("NFKD", original)
no_accent = ''.join(c for c in decomposed if ord(c)<0x7f)
no_spaces = re.sub(r'\s', '_', no_accent)

print(no_spaces)
# output: l_s_c_t_z_y_a_i_e

It doesn't try to get rid of characters disallowed on filesystems, but you can steal DANGEROUS_CHARS_REGEX from the file you linked for that.

– millimoose

I'll throw my own (partial) solution here too:

import unicodedata

def deaccent(some_unicode_string):
    return u''.join(c for c in unicodedata.normalize('NFD', some_unicode_string)
               if unicodedata.category(c) != 'Mn')

This does not do all you want, but gives a few nice tricks wrapped up in a convenience method: unicodedata.normalize('NFD', some_unicode_string) does a canonical decomposition of Unicode characters; for example, it breaks 'ä' into the two codepoints U+0061 and U+0308.

The other function, unicodedata.category(char), returns the Unicode character category for that particular char. Category Mn contains all combining accents, thus deaccent removes all accents from the words.

But note that this is just a partial solution: it only gets rid of accents. You still need some sort of whitelist of characters you want to allow after this.
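A minimal sketch of such a whitelist pass on top of deaccent, keeping only ASCII letters and digits (the to_slug name and the regex are my own):

import re

def to_slug(text, delim='-'):
    # Deaccent first, then collapse anything outside [a-z0-9] into the
    # delimiter and trim it from both ends.
    return re.sub(r'[^a-z0-9]+', delim, deaccent(text).lower()).strip(delim)

print(to_slug(u'My International Text: åäö'))
# my-international-text-aao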

– Nailor

I'd go with

https://pypi.python.org/pypi?%3Aaction=search&term=slug

It's hard to come up with a scenario where one of these does not fit your needs.
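For example, with the third-party python-slugify package (pip install python-slugify), assuming its default settings:

>>> from slugify import slugify
>>> slugify(u'My International Text: åäö')
'my-international-text-aao'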

– kert