
I have two sets of data.

The first, which serves as a dictionary, has two columns, keyword and id, and 180,000 rows. Below is some sample data.

Also note that the keywords vary in length from 2 to 700 characters, with no fixed length, whereas id follows a fixed pattern: a 3-digit number with a hash symbol before and after it.

keyword         id
salesman        #123#
painter         #486#
senior painter  #215#

The second file has one column, corpus; it runs to 22 million records, each between 10 and 1,000 characters long. Below is sample data, which can be considered the input.

corpus
I am working as a salesman. salesmanship is not my forte, however i have become a good at it
I have been a painter since i was 19
are you the salesman?

Output

corpus
I am working as a #123#. salesmanship is not my forte, however i have become a good at it
I have been a #486# since i was 19
are you the #123#?

Please note that I want to replace complete words only, not overlapping matches. So in the first sentence salesman was replaced with #123#, whereas salesmanship was not replaced with #123#ship. This requires adding the regular expression '\b' before and after each keyword, which is why regex is essential for the search.
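The word-boundary behaviour can be checked with a tiny snippet (illustrative only, using the sample data above):

```python
import re

# \b anchors the match to whole words only
pattern = re.compile(r'\bsalesman\b')
text = "I am working as a salesman. salesmanship is not my forte"
print(pattern.sub('#123#', text))
# "salesmanship" is left untouched because the "salesman" inside it
# is followed by a word character, so there is no \b boundary there
```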

So this is a regex search-and-replace operation over multi-million rows. I have read Mass string replace in python? and Speed up millions of regex replacements in Python 3, but it is taking me days to do this find-and-replace, which I can't afford since it is a weekly task. I want to be able to do this much faster. Below is my code:

import re

# list of replacement ids
Id = df_dict.Id.tolist()
# wrap each keyword in word boundaries so only whole words match;
# re.escape guards against keywords containing regex metacharacters
keyword = [r'\b' + re.escape(x) + r'\b' for x in df_dict.keyword]
# drop the dictionary to be light on memory
del df_dict
# regex=True is required here, otherwise the \b patterns are matched literally
df_corpus["corpus_text"].replace(keyword, Id, regex=True, inplace=True)
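For reference, one common speed-up (a sketch only, not benchmarked at this scale, and `replace_all` is a name I made up): compile all keywords into a single alternation and make one pass per row with a dict lookup, instead of 180,000 separate replace passes over the whole column.

```python
import re

# sample dictionary from the question
mapping = {'salesman': '#123#', 'painter': '#486#', 'senior painter': '#215#'}

# longer keywords first, so 'senior painter' wins over 'painter'
alternation = '|'.join(
    re.escape(k) for k in sorted(mapping, key=len, reverse=True)
)
pattern = re.compile(r'\b(?:' + alternation + r')\b')

def replace_all(text):
    # one dict lookup per match instead of one regex pass per keyword
    return pattern.sub(lambda m: mapping[m.group(0)], text)

print(replace_all("are you the salesman?"))  # are you the #123#?
```

This could then be applied column-wise with something like `df_corpus["corpus_text"].map(replace_all)` (assuming the column name from the code above).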
StatguyUser
  • Parallelize it using `multiprocessing`. If you have an 8-core machine you should be able to split the job into 8 or more parts and do them concurrently. – John Zwinck Jun 03 '17 at 12:32
  • I have 4 core machine with 8 GB RAM, hence not the best option – StatguyUser Jun 03 '17 at 12:34
  • Is writing C, C++, Java, etc an option? – John Zwinck Jun 03 '17 at 12:35
  • I only know R, Python – StatguyUser Jun 03 '17 at 12:35
  • If you check closely df_dict.Id.tolist() is pandas code df_dict is dataframe Id is name of column and tolist() is used like data_frame.column_name.tolist() – StatguyUser Jun 03 '17 at 13:42
  • Out of curiosity, how are you mapping 180,000 rows to no more than 1000 IDs (because each ID is only 3 digits)? – supersam654 Jun 03 '17 at 15:28
  • This is just an example, real data has 9 numbers. – StatguyUser Jun 03 '17 at 15:30
  • I've never used the algorithm or the library, but you should look into the [Aho-Corasick algorithm](https://en.wikipedia.org/wiki/Aho-Corasick_algorithm) and the appropriately named pyahocorasick library. – supersam654 Jun 03 '17 at 15:43
  • @supersam654 there isn't any documentation on find and replace, although there is documentation for find operation https://pypi.python.org/pypi/pyahocorasick/1.0.0#example – StatguyUser Jun 03 '17 at 16:46

0 Answers