
Using Python 3.3. I want to do the following:

  • replace special alphabetical characters such as e acute (é) and o circumflex (ô) with the base character (ô to o, for example)
  • remove all characters except alphanumeric and spaces in between alphanumeric characters
  • convert to lowercase

This is what I have so far:

import re

mystring_modified = mystring.replace('\u00E9', 'e').replace('\u00F4', 'o').lower()
alphnumspace = re.compile(r"[^a-zA-Z\d\s]")
mystring_modified = alphnumspace.sub('', mystring_modified)

How can I improve this? Efficiency is a big concern, especially since I am currently performing the operations inside a loop:

# Pseudocode
for mystring in myfile:
    mystring_modified = # operations described above
    mylist.append(mystring_modified)

The files in question are about 200,000 characters each.

oyra
  • I cannot post an answer because this question is wrongly marked as a duplicate, which it absolutely isn't, but maybe I'll manage to put my answer in a comment. Provided `from unidecode import unidecode`, the job will be accomplished by `''.join(c for c in unidecode(mystring).lower() if ord(c) in range(97,123) or ord(c)==32).lstrip().rstrip()`. No regex needed. – mmj Jun 14 '16 at 08:15

2 Answers

>>> import unicodedata
>>> s='éô'
>>> ''.join((c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn'))
'eo'
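
To cover the rest of the question (lowercasing and keeping only alphanumerics and spaces), a minimal sketch built around this normalization step might look like the following; the regex is the one from the question, and the sample string is made up:

import re
import unicodedata

def clean(text):
    # Decompose accented characters, then drop the combining marks (category 'Mn')
    stripped = ''.join(c for c in unicodedata.normalize('NFD', text)
                       if unicodedata.category(c) != 'Mn')
    # Keep only ASCII letters, digits and whitespace, then lowercase
    return re.sub(r'[^a-zA-Z\d\s]', '', stripped).lower()

print(clean('Crème BRÛLÉE #42'))  # prints: creme brulee 42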

Also check out unidecode:

What Unidecode provides is a middle road: function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose.

The quality of resulting ASCII representation varies. For languages of western origin it should be between perfect and good. On the other hand transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. So a good rule of thumb is that the further the script you are transliterating is from Latin alphabet, the worse the transliteration will be.

Note that this module generally produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that for example also contain ASCII approximations for symbols and non-Latin alphabets.
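
As a rough illustration (assuming the unidecode package is installed from PyPI):

from unidecode import unidecode

print(unidecode('éô'))        # eo
print(unidecode('kožušček'))  # kozuscek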

John La Rooy
  • This works nicely for removing accents but unless I did something wrong it doesn't seem to address the other aspects of the question. Appreciate the introduction to Unidecode. An interesting read, though it wouldn't work in my case. – oyra Mar 07 '13 at 03:52
  • This also works: `def remove_accents(data): return unicodedata.normalize('NFKD', data).encode('ASCII', 'ignore')` – Ranvijay Sachan Aug 21 '14 at 04:35
  • @RanvijaySachan What's the difference? – PascalVKooten Nov 12 '17 at 22:08
  • I don't understand why the `if unicodedata.category(c) != 'Mn'` condition changes the output from accented characters to unaccented ones? – user1561108 Aug 23 '18 at 17:56
  • I don't think this is completely correct. It does normalize accents, but it also removes all non-English characters entirely. If a sentence contains Greek letters like `θ`, they get removed. – sparkonhdfs Feb 14 '20 at 15:08

You could use str.translate:

import collections
import string

# Characters missing from the table map to None; str.translate deletes those
table = collections.defaultdict(lambda: None)
table.update({
    ord('é'):'e',
    ord('ô'):'o',
    ord(' '):' ',
    ord('\N{NO-BREAK SPACE}'): ' ',
    ord('\N{EN SPACE}'): ' ',
    ord('\N{EM SPACE}'): ' ',
    ord('\N{THREE-PER-EM SPACE}'): ' ',
    ord('\N{FOUR-PER-EM SPACE}'): ' ',
    ord('\N{SIX-PER-EM SPACE}'): ' ',
    ord('\N{FIGURE SPACE}'): ' ',
    ord('\N{PUNCTUATION SPACE}'): ' ',
    ord('\N{THIN SPACE}'): ' ',
    ord('\N{HAIR SPACE}'): ' ',
    ord('\N{ZERO WIDTH SPACE}'): ' ',
    ord('\N{NARROW NO-BREAK SPACE}'): ' ',
    ord('\N{MEDIUM MATHEMATICAL SPACE}'): ' ',
    ord('\N{IDEOGRAPHIC SPACE}'): ' ',
    ord('\N{IDEOGRAPHIC HALF FILL SPACE}'): ' ',
    ord('\N{ZERO WIDTH NO-BREAK SPACE}'): ' ',
    ord('\N{TAG SPACE}'): ' ',
    })
# Map uppercase ASCII to lowercase; keep lowercase letters and digits as-is
table.update(dict(zip(map(ord, string.ascii_uppercase), string.ascii_lowercase)))
table.update(dict(zip(map(ord, string.ascii_lowercase), string.ascii_lowercase)))
table.update(dict(zip(map(ord, string.digits), string.digits)))

print('123 fôé BAR҉'.translate(table))

yields

123 foe bar

On the down-side, you'll have to list all the special accented characters that you wish to translate. @gnibbler's method requires less coding.

On the up-side, the str.translate method should be fairly fast, and it can handle all your requirements (lowercasing, removing accents, and deleting everything else) in one function call once the table is set up.
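
Dropped into the loop from the question, that is one call per line (a sketch, assuming myfile is an iterable of lines as in the pseudocode and table is the mapping built above):

mylist = []
for mystring in myfile:
    # translate() lowercases, strips accents and drops unmapped characters in one pass
    mylist.append(mystring.translate(table))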


By the way, a file with 200K characters is not very large. So it would be more efficient to read the entire file into a single str, then translate it in one function call.
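
A sketch of that idea (the file name is hypothetical; note that '\n' has to be added to the table first, otherwise translate() deletes the line breaks along with every other unmapped character):

table[ord('\n')] = '\n'  # preserve line breaks when translating the whole file

with open('myfile.txt', encoding='utf-8') as f:
    text = f.read()  # ~200K characters fits comfortably in memory

cleaned = text.translate(table)  # one translate() call for the whole file
mylist = cleaned.splitlines()    # recover the per-line list if it is still needed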

unutbu
  • Performance seems identical to my approach (0.96875 seconds in both cases), but this is much less hackish. Thanks. With respect to translating the entire file at once, I need to preserve the text formatting because I'm working with data files such as CSV. – oyra Mar 07 '13 at 03:54