
I want to match strings from a pattern file against the lines of a Source.txt file.

pattern_list.txt has 139k lines

Source.txt has more than 5 million lines

If I use grep like this, it takes 2 seconds to get the output.

grep -F -f pattern_list.txt Source.txt > Output.txt

But if I try this AWK script, it gets stuck; after 10 minutes I have to stop it because nothing happens.

awk 'NR==FNR {a[$1]; next} {
    for (i in a) if ($0 ~ i) print $0
}' FS=, OFS=, pattern_list.txt Source.txt > Output.txt

pattern_list.txt looks like this:

21051
99888
95746

and Source.txt like this:

72300,2,694
21051,1,694
63143,3,694
25223,2,694
99888,8,694
53919,2,694
51059,2,694

What is wrong with my AWK script?

I'm running under Cygwin on Windows.

  • Another approach: `join -t "," <(sort pattern_list) <(sort source.txt)` – Cyrus Nov 10 '18 at 21:10
  • Possible duplicate of [Fastest way to find lines of a file from another larger file in Bash](https://stackoverflow.com/questions/42239179/fastest-way-to-find-lines-of-a-file-from-another-larger-file-in-bash) – codeforester Nov 11 '18 at 07:06
  • @codeforester Hi, I was asking more about why my awk script was so slow than about the fastest way to do it in perl, grep, bash or other tools. – Ger Cas Nov 11 '18 at 12:44
  • Since your `awk` code is trying to do exactly what the accepted answer in the linked post is doing, I considered it a duplicate, or at least related. – codeforester Nov 11 '18 at 17:23

2 Answers


If increasing performance is your goal, you'll need to parallelize this (AWK is unlikely to be faster, perhaps even slower).

If I were you, I'd partition the source file, then search each part:

$ split -l 100000 src.txt src_part
$ ls src_part* | xargs -n1 -P4 fgrep -f pat.txt > matches.txt
$ rm src_part*
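
One caveat with this sketch: with `-P4`, the four `fgrep` processes all write to `matches.txt` through a single shared redirection, so their output lines can interleave. A minimal variant that gives each part its own output file and concatenates afterwards might look like this (the `sh -c` wrapper and the `.out` suffix are just illustrative choices):

$ split -l 100000 src.txt src_part
$ ls src_part* | xargs -P4 -I{} sh -c 'fgrep -f pat.txt "$1" > "$1.out"' sh {}
$ cat src_part*.out > matches.txt
$ rm src_part*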
  • Thanks for the answer, but what I know is that awk is faster than grep. So I don't know what's happening here. – Ger Cas Nov 10 '18 at 21:06
  • @GerCas I doubt that is true, as AWK has to parse the script and then run it; grep, on the other hand, is heavily optimized for its purpose. – Rafael Nov 10 '18 at 21:09
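(For what it's worth: if this is GNU grep, that optimization is concrete. With `-F` and a pattern file, grep compiles all of the patterns into a single multi-string matcher in the Aho-Corasick family, so each line of Source.txt is scanned once no matter how many patterns there are, while the awk loop in the question attempts a separate match for every pattern on every line.)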

If you are doing a literal match, this should be faster than your approach:

$ awk -F, 'NR==FNR{a[$0]; next} $1 in a{print $1,$3,$8,$20}' pattern_list source > output
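
The key difference is that `$1 in a` is a single hash lookup per input line, while the original script's `for (i in a)` loop attempts a regex match against every pattern for every line: roughly 139,000 × 5,000,000 ≈ 7 × 10^11 pattern tests in total. Note also that this version matches the whole first field exactly, whereas `grep -F` (and `$0 ~ i`) match the pattern anywhere in the line, so a pattern like 21051 would also hit a line starting with 121051; the two outputs are therefore not guaranteed to be identical.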

However, I think sort/join will still be faster than grep and awk.
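
For reference, a minimal sketch of that sort/join approach, assuming (as in the samples above) that the match key is the first comma-separated field; `join` requires both inputs to be sorted on the join field:

$ join -t, <(sort pattern_list.txt) <(sort -t, -k1,1 Source.txt) > Output.txt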

  • Excellent. Now the execution time with your awk script is less than 4 seconds. But since my original source file has several fields, how do I tell your script to print only $3, $8 and $20 for the matched strings? – Ger Cas Nov 10 '18 at 23:10
  • Can't improve on karakfa's answer, but for grep vs awk performance tests see https://www.polydesmida.info/BASHing/2018-10-24.html – user2138595 Nov 11 '18 at 06:57
  • @karakfa Thanks a lot. It works exactly as I wanted. – Ger Cas Nov 11 '18 at 12:36
  • @user2138595 Thanks for sharing the info. Yes, that's what I understood in theory and practice: awk is the champion in speed. – Ger Cas Nov 11 '18 at 12:37