
How can I remove all lines from a text file (main.txt) that appear in a second text file (removethese.txt)? What is an efficient approach if the files are large (10-100 MB)? [Using a Mac]

Example:

main.txt
3
1
2
5

Remove these lines

removethese.txt
3
2
9

Output:

output.txt
1
5

Example Lines (these are the actual lines I'm working with - order does not matter):

ChIJW3p7Xz8YyIkRBD_TjKGJRS0
ChIJ08x-0kMayIkR5CcrF-xT6ZA
ChIJIxbjOykFyIkRzugZZ6tio1U
ChIJiaF4aOoEyIkR2c9WYapWDxM
ChIJ39HoPKDix4kRcfdIrxIVrqs
ChIJk5nEV8cHyIkRIhmxieR5ak8
ChIJs9INbrcfyIkRf0zLkA1NJEg
ChIJRycysg0cyIkRArqaCTwZ-E8
ChIJC8haxlUDyIkRfSfJOqwe698
ChIJxRVp80zpcEARAVmzvlCwA24
ChIJw8_LAaEEyIkR68nb8cpalSU
ChIJs35yqObit4kR05F4CXSHd_8
ChIJoRmgSdwGyIkRvLbhOE7xAHQ
ChIJaTtWBAWyVogRcpPDYK42-Nc
ChIJTUjGAqunVogR90Kc8hriW8c
ChIJN7P2NF8eVIgRwXdZeCjL5EQ
ChIJizGc0lsbVIgRDlIs85M5dBs
ChIJc8h6ZqccVIgR7u5aefJxjjc
ChIJ6YMOvOeYVogRjjCMCL6oQco
ChIJ54HcCsaeVogRIy9___RGZ6o
ChIJif92qn2YVogR87n0-9R5tLA
ChIJ0T5e1YaYVogRifrl7S_oeM8
ChIJwWGce4eYVogRcrfC5pvzNd4
Onichan
  • The goal is that you add some code of your own to show at least the research effort you made to solve this yourself. – Norbert Jun 13 '15 at 16:41
  • Take a look at `grep`'s man page. – Cyrus Jun 13 '15 at 16:56
  • @NorbertvanNobelen I tried several solutions including this one: http://stackoverflow.com/questions/25954013/how-to-remove-both-matching-lines-while-removing-duplicates, but my lists are large and unsorted, and that solution doesn't filter the lists. – Onichan Jun 14 '15 at 05:05
  • With text editor UltraEdit available for Windows, Linux and Mac it can be done using an UE script, see [Delete lines in the active document containing strings listed in another file](https://www.ultraedit.com/forums/viewtopic.php?f=52&t=7549). – Mofi Jun 15 '15 at 17:45
  • @mofi Does UltraEdit handle large files (10-100 MB)? – Onichan Jun 16 '15 at 03:28
  • @Emma UltraEdit can edit files of any size, even files with more than 4 GB on an x86 machine having just 2 GB RAM. But UltraEdit is by default configured for small file editing. For editing large and huge files some features should be disabled, see IDM power tip [Large file text editor](http://www.ultraedit.com/support/tutorials_power_tips/ultraedit/large_file_handling.html). – Mofi Jun 16 '15 at 05:49
  • Fast version with `grep`: `LC_ALL=C fgrep -vxf removethese.txt main.txt >output.txt`. The main points being: (1) use `fgrep` instead of plain `grep`; (2) use `-x` to match entire lines; (3) disable locales. – lcd047 Jun 16 '15 at 07:29
  • @lcd047 I'll give this a try. Will using `fgrep` make this significantly faster? – Onichan Jun 16 '15 at 15:53
  • `fgrep` is the same as `grep -F` – Jahid Jun 16 '15 at 16:36
  • @mofi I have a list of solutions to try (UltraEdit included). I added sample lines to the question. To answer your question, the lines removed should be 100% identical in case and length. Would you mind adding an answer with your UltraEdit solution? – Onichan Jun 16 '15 at 17:58
  • You say order does not matter. You could start by eliminating duplicates in both removethese.txt and main.txt using `sort -u`. – Happy Green Kid Naps Jun 19 '15 at 02:40
  • If 'removethese.txt' is fixed, or it's a one-off thing, I would break apart the find file into 2 MB chunks, create a multi-level trie regex for each chunk using this tool http://www.regexformat.com and run each one against 'main.txt'. I would bet it's the fastest way to do it. –  Jun 19 '15 at 19:49
  • Ex: `ChIJ(?:0(?:8x-0kMayIkR5CcrF-xT6ZA|T5e1YaYVogRifrl7S_oeM8)|39HoPKDix4kRcfdIrxIVrqs|54HcCsaeVogRIy9___RGZ6o|6YMOvOeYVogRjjCMCL6oQco|C8haxlUDyIkRfSfJOqwe698|IxbjOykFyIkRzugZZ6tio1U|N7P2NF8eVIgRwXdZeCjL5EQ|Rycysg0cyIkRArqaCTwZ-E8|TUjGAqunVogR90Kc8hriW8c|W3p7Xz8YyIkRBD_TjKGJRS0|aTtWBAWyVogRcpPDYK42-Nc|c8h6ZqccVIgR7u5aefJxjjc|i(?:aF4aOoEyIkR2c9WYapWDxM|f92qn2YVogR87n0-9R5tLA|zGc0lsbVIgRDlIs85M5dBs)|k5nEV8cHyIkRIhmxieR5ak8|oRmgSdwGyIkRvLbhOE7xAHQ|s(?:35yqObit4kR05F4CXSHd_8|9INbrcfyIkRf0zLkA1NJEg)|w(?:8_LAaEEyIkR68nb8cpalSU|WGce4eYVogRcrfC5pvzNd4)|xRVp80zpcEARAVmzvlCwA24)` –  Jun 19 '15 at 19:49
  • The other option is code. And probably the better option if this is something you will do a lot. For that you'd probably need C++. Generally, dynamically create a ternary tree from the removethese file. Read in the main.txt file line by line, search tree. Simple, you're done !! –  Jun 19 '15 at 19:59

4 Answers


There are two standard ways to do this:

With grep:

grep -vxFf removethese main

This uses:

  • -v to invert the match.
  • -x to match the whole line, preventing, for example, he from matching lines like hello or highway to hell.
  • -F to use fixed strings, so that the parameter is taken as it is, not interpreted as a regular expression.
  • -f to get the patterns from another file. In this case, from removethese.
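
For the question's sample files (removethese containing 3, 2, 9 and main containing 3, 1, 2, 5) this gives:

$ grep -vxFf removethese main
1
5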

With awk:

$ awk 'FNR==NR {a[$0];next} !($0 in a)' removethese main
1
5

This way we store every line of removethese as a key in the array a[]. Then, while reading the main file, we print only those lines that are not present in the array.
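
The same program spread over several lines with comments (functionally identical, only the layout differs):

awk '
  FNR == NR { a[$0]; next }   # first file (removethese): store each line as an array key
  !($0 in a)                  # second file (main): print only lines not stored as keys
' removethese main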

fedorqui
  • I previously tried `grep -vFf removethese.txt main.txt >output.txt` from the other answer and it took 4-6 hours to process a 7mb file. Do you think `awk` would yield a significantly faster result? – Onichan Jun 16 '15 at 15:50
  • @Emma it is difficult to say because I don't have any sample data, nor an idea of which file is bigger. You can give it a try with a small sample and compare the two approaches. Note I am also using `-x` in the `grep` approach. – fedorqui Jun 16 '15 at 15:52
  • I added some sample lines to the question. I will give this method a try once my process completes for one of the other answers. Would it help for me to add a sample text file? – Onichan Jun 16 '15 at 17:55
  • @Emma What is the proportion of size between the two files? `awk` can be extremely fast if the `removethese` file is not very big, meaning you don't have to keep in memory a lot of stuff. – fedorqui Jun 17 '15 at 08:19
  • The `removethese` file is fairly large, often half the size of the `main` file. It can range from 5 MB to 50 MB. – Onichan Jun 17 '15 at 15:54
  • @Emma If you have enough RAM to store all this info in memory (which should be the case, according to what you comment in the other answer), this `awk` would be the best. I was recently astonished at how fast it processes data in [How to get the biggest number in a file?](http://stackoverflow.com/q/30592249/1983854) – fedorqui Jun 18 '15 at 07:35
  • awk is quicker than grep! – m3asmi Jul 02 '15 at 10:36

With grep:

grep -vxFf removethese.txt main.txt >output.txt

With fgrep:

fgrep -vxf removethese.txt main.txt >output.txt

fgrep is deprecated. fgrep --help says:

Invocation as 'fgrep' is deprecated; use 'grep -F' instead.

With awk (from @fedorqui):

awk 'FNR==NR {a[$0];next} !($0 in a)' removethese.txt main.txt >output.txt

With sed:

sed "s=^=/^=;s=$=$/d=" removethese.txt | sed -f- main.txt >output.txt

This will fail if removethese.txt contains characters that are special in regular expressions. To handle that you can do:

sed 's/[^^]/[&]/g; s/\^/\\^/g' removethese.txt >newremovethese.txt

and use this newremovethese.txt in the sed command. But this is not worth the effort, it's too slow compared to the other methods.


Test performed on the above methods:

The sed method takes too much time and is not worth testing.

Files Used:

removethese.txt : Size: 15191908 (15MB)     Blocks: 29672   Lines: 100233
main.txt : Size: 27640864 (27.6MB)      Blocks: 53992   Lines: 180034

Commands:
grep -vxFf | fgrep -vxf | awk

Time taken:
0m7.966s | 0m7.823s | 0m0.237s
0m7.877s | 0m7.889s | 0m0.241s
0m7.971s | 0m7.844s | 0m0.234s
0m7.864s | 0m7.840s | 0m0.251s
0m7.798s | 0m7.672s | 0m0.238s
0m7.793s | 0m8.013s | 0m0.241s

Average:
0m7.8782s | 0m7.8468s | 0m0.2403s

This test result implies that fgrep is a little bit faster than grep.

The awk method (from @fedorqui) passes the test with flying colors (0.2403 seconds only !!!).

Test Environment:

HP ProBook 440 G1 Laptop
8GB RAM
2.5 GHz processor with turbo boost up to 3.1 GHz
RAM being used: 2.1GB
Swap being used: 588MB
RAM being used when the grep/fgrep command is run: 3.5GB
RAM being used when the awk command is run: 2.2GB or less
Swap being used when the commands are run: 588MB (No change)

Test Result:

Use the awk method.

Jahid
  • @shellter would you mind suggesting a simpler way to do this? – Onichan Jun 14 '15 at 05:05
  • @shellter yap I tested it (and it works)... But your suggestion works great, it can be done with a single line command..awesome.. thanks... – Jahid Jun 14 '15 at 13:48
  • If your original solution worked, you should leave it in your answer. I'm skeptical that `echo $(grep something output.txt) > output.txt` won't overwrite the existing content of output.txt, but it was late, maybe it would work. In any case, I would recommend the `grep -vFf ...` solution. Good luck to all! – shellter Jun 14 '15 at 13:55
  • @shellter even if it worked, the latter is better. And for clarification, the previous grep was overwriting the contents and that was intentional... – Jahid Jun 14 '15 at 14:00
  • @shellter No, the output.txt is modified (`sed -i`). It won't print any output to screen, it will delete those lines in place in `output.txt`. – Jahid Jun 14 '15 at 14:03
  • @shellter I've tried the `grep` method on an 8 MB file and it took over 5 hours. Is there a chance that this is inefficient for large files? – Onichan Jun 16 '15 at 00:19
  • @Emma what about the sed solution with a while loop? – Jahid Jun 16 '15 at 00:25
  • @jahid I was about to give that a try. But each trial takes up several hours. Do you think it will be drastically more efficient? – Onichan Jun 16 '15 at 00:36
  • @Emma I can't say without testing.. unfortunately I don't have a text file that large at hand...:D – Jahid Jun 16 '15 at 00:40
  • @Emma I created a removethese.txt (15MB) and main.txt (27MB). The grep method took 7.525s. Don't use the sed method, it takes way more time. – Jahid Jun 16 '15 at 15:54
  • @jahid interesting how quickly yours finished. I'm using 10mb ram macbook pro (mid 2012) and process took over 5 hours. Any idea why there's such a big difference? I updated the question with the typical line. – Onichan Jun 16 '15 at 17:52
  • @Emma added test results for various methods and found the `awk` method to be the fastest... – Jahid Jun 16 '15 at 18:40

There are a lot of simple and effective solutions collected here: http://www.catonmat.net/blog/set-operations-in-unix-shell-simplified/

You need one of the Set Complement commands. 100 MB files can be processed within seconds or minutes.

Set Membership

$ grep -xc 'element' set    # outputs 1 if element is in set
                            # outputs >1 if set is a multi-set
                            # outputs 0 if element is not in set

$ grep -xq 'element' set    # returns 0 (true)  if element is in set
                            # returns 1 (false) if element is not in set

$ awk '$0 == "element" { s=1; exit } END { exit !s }' set
# returns 0 if element is in set, 1 otherwise.

$ awk -v e='element' '$0 == e { s=1; exit } END { exit !s }' set

Set Equality

$ diff -q <(sort set1) <(sort set2) # returns 0 if set1 is equal to set2
                                    # returns 1 if set1 != set2

$ diff -q <(sort set1 | uniq) <(sort set2 | uniq)
# collapses multi-sets into sets and does the same as previous

$ awk '{ if (!($0 in a)) c++; a[$0] } END{ exit !(c==NR/2) }' set1 set2
# returns 0 if set1 == set2
# returns 1 if set1 != set2

$ awk '{ a[$0] } END{ exit !(length(a)==NR/2) }' set1 set2
# same as previous, requires >= gnu awk 3.1.5

Set Cardinality

$ wc -l set | cut -d' ' -f1    # outputs number of elements in set

$ wc -l < set

$ awk 'END { print NR }' set

Subset Test

$ comm -23 <(sort subset | uniq) <(sort set | uniq) | head -1
# outputs something if subset is not a subset of set
# does not output anything if subset is a subset of set

$ awk 'NR==FNR { a[$0]; next } { if (!($0 in a)) exit 1 }' set subset
# returns 0 if subset is a subset of set
# returns 1 if subset is not a subset of set

Set Union

$ cat set1 set2     # outputs union of set1 and set2
                    # assumes they are disjoint

$ awk 1 set1 set2   # ditto

$ cat set1 set2 ... setn   # union over n sets

$ cat set1 set2 | sort -u  # same, but assumes they are not disjoint

$ sort set1 set2 | uniq

$ sort -u set1 set2

$ awk '!a[$0]++' set1 set2   # ditto

Set Intersection

$ comm -12 <(sort set1) <(sort set2)  # outputs intersection of set1 and set2

$ grep -xF -f set1 set2

$ sort set1 set2 | uniq -d

$ join <(sort set1) <(sort set2)

$ awk 'NR==FNR { a[$0]; next } $0 in a' set1 set2

Set Complement

$ comm -23 <(sort set1) <(sort set2)
# outputs elements in set1 that are not in set2

$ grep -vxF -f set2 set1           # ditto

$ sort set2 set2 set1 | uniq -u    # ditto

$ awk 'NR==FNR { a[$0]; next } !($0 in a)' set2 set1
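
Applied to the question (main.txt in the role of set1, removethese.txt in the role of set2), the complement commands above become, for example:

$ grep -vxF -f removethese.txt main.txt > output.txt
$ awk 'NR==FNR { a[$0]; next } !($0 in a)' removethese.txt main.txt > output.txt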

Set Symmetric Difference

$ comm -3 <(sort set1) <(sort set2) | sed 's/\t//g'
# outputs elements that are in set1 or in set2 but not both

$ comm -3 <(sort set1) <(sort set2) | tr -d '\t'

$ sort set1 set2 | uniq -u

$ cat <(grep -vxF -f set1 set2) <(grep -vxF -f set2 set1)

$ grep -vxF -f set1 set2; grep -vxF -f set2 set1

$ awk 'NR==FNR { a[$0]; next } $0 in a { delete a[$0]; next } 1;
       END { for (b in a) print b }' set1 set2

Power Set

$ p() { [ $# -eq 0 ] && echo || (shift; p "$@") |
        while read r ; do echo -e "$1 $r\n$r"; done }
$ p `cat set`

# no nice awk solution, you are welcome to email me one:
# peter@catonmat.net

Set Cartesian Product

$ while read a; do while read b; do echo "$a, $b"; done < set1; done < set2

$ awk 'NR==FNR { a[$0]; next } { for (i in a) print i, $0 }' set1 set2

Disjoint Set Test

$ comm -12 <(sort set1) <(sort set2)  # does not output anything if disjoint

$ awk '++seen[$0] == 2 { exit 1 }' set1 set2 # returns 0 if disjoint
                                         # returns 1 if not

Empty Set Test

$ wc -l < set            # outputs 0  if the set is empty
                         # outputs >0 if the set is not empty

$ awk '{ exit 1 }' set   # returns 0 if set is empty, 1 otherwise

Minimum

$ head -1 <(sort set)    # outputs the minimum element in the set

$ awk 'NR == 1 { min = $0 } $0 < min { min = $0 } END { print min }' set

Maximum

$ tail -1 <(sort set)    # outputs the maximum element in the set

$ awk '$0 > max { max = $0 } END { print max }' set
k06a

I like @fedorqui's use of awk for setups where one has enough memory to fit all the "remove these" lines: a concise expression of an in-memory approach.

But for a scenario where the size of the lines to remove is large relative to current memory, and reading that data into an in-memory data structure is an invitation to fail or thrash, consider an ancient approach: sort/join

sort main.txt > main_sorted.txt
sort removethese.txt > removethese_sorted.txt

join -t '' -v 1 main_sorted.txt removethese_sorted.txt > output.txt

Notes:

  • this does not preserve the order from main.txt: lines in output.txt will be sorted
  • it requires enough disk space for sort to do its thing (temp files), and to store same-size sorted versions of the input files
  • having join's -v option do just what we want here - print "unpairable" from file 1, drop matches - is a bit of serendipity
  • it does not directly address locales, collating, keys, etc. - it relies on defaults of sort and join (-t with an empty argument) to match sort order, which happen to work on my current machine
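
If the intermediate sorted files are not wanted, the same steps can be combined with process substitution (a sketch assuming bash or zsh; the join flags are unchanged from above):

join -t '' -v 1 <(sort main.txt) <(sort removethese.txt) > output.txt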
sjnarv