I'm trying to remove punctuation while tokenizing a sentence in Python, but there are several "conditions" under which I want the tokenizer to leave the punctuation in place. Some examples are when I see a URL, an email address, or certain symbols with no spaces around them. Example:
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r"[\w']+")
tokenizer.tokenize("please help me ignore punctuation like . or , but at the same time don't ignore if it looks like a url i.e. google.com or google.co.uk. Sometimes I also want conditions where I see an equals sign between words such as myname=shecode")
Right now the output looks like
['please', 'help', 'me', 'ignore', 'punctuation', 'like', 'or', 'but', 'at', 'the', 'same', 'time', "don't", 'ignore', 'if', 'it', 'looks', 'like', 'a', 'url', 'i', 'e', 'google', 'com', 'or', 'google', 'co', 'uk', 'Sometimes', 'I', 'also', 'want', 'conditions', 'where', 'I', 'see', 'an', 'equals', 'sign', 'between', 'words', 'such', 'as', 'myname', 'shecode']
But what I really want it to look like is
['please', 'help', 'me', 'ignore', 'punctuation', 'like', 'or', 'but', 'at', 'the', 'same', 'time', "don't", 'ignore', 'if', 'it', 'looks', 'like', 'a', 'url', 'i', 'e', 'google.com', 'or', 'google.co.uk', 'Sometimes', 'I', 'also', 'want', 'conditions', 'where', 'I', 'see', 'an', 'equals', 'sign', 'between', 'words', 'such', 'as', 'myname=shecode']
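One way to get this output is to give RegexpTokenizer an alternation that tries the "special" shapes before falling back to plain words. The sketch below is one possible pattern, not the only way: it assumes a URL-like token is a word followed by one or more dot-separated runs of at least two word characters (so `google.co.uk` stays whole but the trailing sentence period is dropped, and `i.e.` still splits into `i` and `e`), and that a `key=value` token is two word runs joined by `=`.

```python
from nltk.tokenize import RegexpTokenizer

# Alternation, tried left to right at each position:
#   1. URL-like: word + one or more ".xx" parts (each part >= 2 word chars),
#      so "google.com" and "google.co.uk" match but "i.e." does not
#   2. key=value: word runs joined by "=", e.g. "myname=shecode"
#   3. fallback: a plain word (apostrophes allowed, so "don't" survives)
pattern = r"\w+(?:\.\w{2,})+|[\w']+=[\w']+|[\w']+"
tokenizer = RegexpTokenizer(pattern)

text = ("please help me ignore punctuation like . or , but at the same time "
        "don't ignore if it looks like a url i.e. google.com or google.co.uk. "
        "Sometimes I also want conditions where I see an equals sign between "
        "words such as myname=shecode")
tokens = tokenizer.tokenize(text)
print(tokens)
```

The `\w{2,}` in the URL branch is the heuristic doing the work here: it is what keeps single-letter abbreviations like `i.e.` from being glued together. If your real data contains one-letter domain labels, that assumption would need adjusting.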