
For the sentence:

"I am very hungry,    so mum brings me a cake!"

I want it split by delimiters, and I want all the delimiters except spaces to be kept as well. So the expected output is:

"I"  "am"  "very"  "hungry"   ","   "so"  "mum"  "brings"  "me"   "a"   "cake"    "!"    "\n"

What I am currently doing is re.split(r'([!:''".,(\s+)\n])', text), which splits the whole sentence but also keeps a lot of space characters that I don't want. I've also tried the regular expression \s|([!:''".,(\s+)\n]), which somehow gives me a lot of None values.

curlpipesudobash
Jack2019
    Almost(?) a duplicate of [In Python, how do I split a string and keep the separators?](https://stackoverflow.com/questions/2136556/in-python-how-do-i-split-a-string-and-keep-the-separators). – timgeb Nov 10 '18 at 12:03

3 Answers


That is because your regular expression contains a capture group; because of that capture group, re.split will also include the matched delimiters in the result. But this is likely what you want.

The only challenge is to filter out the Nones (and other falsy values, such as empty strings) that appear when an alternative does not match; we can do this with:

import re

def tokenize(text):
    return filter(None, re.split(r'[ ]+|([!:''".,\s\n])', text))

For your given sample text, this produces:

>>> list(tokenize("I am very hungry,    so mum brings me a cake!\n"))
['I', 'am', 'very', 'hungry', ',', 'so', 'mum', 'brings', 'me', 'a', 'cake', '!', '\n']
Willem Van Onsem
  • why does adding the [ ]+| into the regular expression lead to generating a lot of Nones? – Jack2019 Nov 10 '18 at 14:01
  • @SoManyProblems: because if the capture group (the part in the parentheses) does not match anything, it still introduces a `None` for "empty" capture groups. If you use multiple parentheses, this can even result in a lot of extra elements. – Willem Van Onsem Nov 10 '18 at 14:02
  • thanks a lot for the reply. Just to confirm that I understand you correctly: do you mean that [ ]+ matches the space, so it did the split work, and because it doesn't have a (), it returns None back? – Jack2019 Nov 10 '18 at 14:42
  • @SoManyProblems: the regex itself has a `(...)`, a capture group. But since that capture group is *not* "activated" (it does not match anything), it captures `None`. – Willem Van Onsem Nov 10 '18 at 14:43
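The behaviour discussed in these comments can be sketched with a small example (the string and pattern here are illustrative, not from the question):

```python
import re

# `[ ]+` splits on runs of spaces without capturing, while `([!,])`
# both splits and captures. When the non-capturing alternative is the
# one that matches, the capture group contributes None to the result.
parts = re.split(r'[ ]+|([!,])', "a b!c")
print(parts)                      # ['a', None, 'b', '!', 'c']
print(list(filter(None, parts)))  # ['a', 'b', '!', 'c']
```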

One approach is to surround the special characters (`,`, `!`, `.`, `\n`) with spaces and then split on space:

import re


def tokenize(t, pattern="([,!.\n])"):
    return [e for e in re.sub(pattern, r" \1 ", t).split(' ') if e]


s = "I am very hungry,    so mum brings me a cake!\n"

print(tokenize(s))

Output

['I', 'am', 'very', 'hungry', ',', 'so', 'mum', 'brings', 'me', 'a', 'cake', '!', '\n']
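As a side note on why the `if e` filter is needed: after the padding step, runs of spaces in the input turn into empty strings when split. A small illustrative check:

```python
import re

# Padding each delimiter with spaces, then splitting on ' ', leaves
# empty strings behind wherever spaces end up adjacent; the list
# comprehension's `if e` drops them.
padded = re.sub("([,!.\n])", r" \1 ", "hungry,    so!")
print(padded.split(' '))                   # contains '' entries
print([e for e in padded.split(' ') if e]) # ['hungry', ',', 'so', '!']
```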
Dani Mesejo

search or findall might be more appropriate here than split:

import re

s = "I am very hungry,    so mum brings me a !#$#@  cake!"

print(re.findall(r'[^\w\s]+|\w+', s))

# ['I', 'am', 'very', 'hungry', ',', 'so', 'mum', 'brings', 'me', 'a', '!#$#@', 'cake', '!']

The pattern [^\w\s]+|\w+ means: a sequence of characters that are neither alphanumeric nor whitespace, OR a sequence of alphanumerics (that is, a word).
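One caveat: since `\n` counts as whitespace, this pattern drops the trailing newline that the question's expected output keeps. A possible extension (the extra `|\n` alternative is an addition for illustration, not part of the answer above):

```python
import re

s = "I am very hungry,    so mum brings me a cake!\n"
# Adding `|\n` keeps newlines as their own tokens while other
# whitespace is still discarded.
print(re.findall(r'[^\w\s]+|\w+|\n', s))
# ['I', 'am', 'very', 'hungry', ',', 'so', 'mum', 'brings', 'me', 'a', 'cake', '!', '\n']
```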

georg
  • could you explain a little how the pattern is constructed this way? why does [^\w\s]+ give all the words but not words with space characters (as \s suggests)? Also, why did you put the |\w+ pattern there? – Jack2019 Nov 10 '18 at 14:09
  • @SoManyProblems: added an explanation – georg Nov 10 '18 at 17:58