I am trying to efficiently strip punctuation from a unicode string. With a plain (byte) string, using mystring.translate(None, string.punctuation)
is clearly the fastest approach. However, this call breaks on a unicode string in Python 2.7, because unicode.translate takes a single mapping argument rather than a deletechars string. As the comments to this answer explain, the translate method can still be used, but it must be given a dictionary mapping punctuation code points to None. When I use this implementation, though, I find that translate's performance is dramatically reduced. Here is my timing code (copied primarily from this answer):
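To make the two calling conventions concrete, here is a minimal sketch (the toy strings and the ascii_tbl name are just for illustration, and this small table only covers ASCII punctuation, unlike the full Unicode table built in the timing code below):

import string

s = "Hello, world!"
print s.translate(None, string.punctuation)              # byte str: second argument lists characters to delete

su = u"Hello, world!"
ascii_tbl = {ord(c): None for c in string.punctuation}   # unicode str: single dict mapping code points to None
print su.translate(ascii_tbl)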
import re, string, timeit
import unicodedata
import sys
#String from this article www.wired.com/design/2013/12/find-the-best-of-reddit-with-this-interactive-map/
s = "For me, Reddit brings to mind Obi Wan’s enduring description of the Mos Eisley cantina: a wretched hive of scum and villainy. But, you know, one you still kinda want to hang out in occasionally. The thing is, though, Reddit isn’t some obscure dive bar in a remote corner of the universe—it’s a huge watering hole at the very center of it. The site had some 400 million unique visitors in 2012. They can’t all be Greedos. So maybe my problem is just that I’ve never been able to find the places where the decent people hang out."
su = u"For me, Reddit brings to mind Obi Wan’s enduring description of the Mos Eisley cantina: a wretched hive of scum and villainy. But, you know, one you still kinda want to hang out in occasionally. The thing is, though, Reddit isn’t some obscure dive bar in a remote corner of the universe—it’s a huge watering hole at the very center of it. The site had some 400 million unique visitors in 2012. They can’t all be Greedos. So maybe my problem is just that I’ve never been able to find the places where the decent people hang out."
exclude = set(string.punctuation)
regex = re.compile('[%s]' % re.escape(string.punctuation))
def test_set(s):
    return ''.join(ch for ch in s if ch not in exclude)

def test_re(s):  # From Vinko's solution, with fix.
    return regex.sub('', s)

def test_trans(s):
    return s.translate(None, string.punctuation)
# Table mapping every Unicode punctuation code point to None.
tbl = dict.fromkeys(i for i in xrange(sys.maxunicode)
                    if unicodedata.category(unichr(i)).startswith('P'))

def test_trans_unicode(su):
    return su.translate(tbl)
def test_repl(s):  # From S.Lott's solution
    for c in string.punctuation:
        s = s.replace(c, "")
    return s
print "sets :",timeit.Timer('f(s)', 'from __main__ import s,test_set as f').timeit(1000000)
print "regex :",timeit.Timer('f(s)', 'from __main__ import s,test_re as f').timeit(1000000)
print "translate :",timeit.Timer('f(s)', 'from __main__ import s,test_trans as f').timeit(1000000)
print "replace :",timeit.Timer('f(s)', 'from __main__ import s,test_repl as f').timeit(1000000)
print "sets (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_set as f').timeit(1000000)
print "regex (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_re as f').timeit(1000000)
print "translate (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_trans_unicode as f').timeit(1000000)
print "replace (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_repl as f').timeit(1000000)
As my results show, the unicode implementation of translate performs horribly:
sets : 38.323941946
regex : 6.7729549408
translate : 1.27428412437
replace : 5.54967689514
sets (unicode) : 43.6268708706
regex (unicode) : 7.32343912125
translate (unicode) : 54.0041439533
replace (unicode) : 17.4450061321
My question is whether there is a faster way to implement translate for unicode strings, or some other method entirely, that would outperform the regex approach.