
I sometimes need to compare two text files. Obviously, diff shows the differences, but it also hides the similarities, which is kind of the point.

Suppose I want to do other comparisons on these files: set union, intersection, and subtraction, treating each line as an element in the set.

Are there similarly simple common utilities or one-liners which can do this?


Examples:

a.txt

john
mary

b.txt

adam
john

$> set_union a.txt b.txt
john
mary
adam

$> set_intersection a.txt b.txt
john

$> set_difference a.txt b.txt
mary
spraff

4 Answers


Union: sort -u files...

Intersection: sort files... | uniq -d

Overall difference (elements which appear in only one of the files):
sort files... | uniq -u
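A quick check of these with the question's sample files (note that `sort files... | uniq -d` only reports the intersection correctly when each input file is itself free of duplicate lines):

```shell
# Recreate the question's sample files.
printf 'john\nmary\n' > a.txt
printf 'adam\njohn\n' > b.txt

# Union: every distinct line from either file.
sort -u a.txt b.txt               # adam, john, mary

# Intersection: lines occurring in more than one file
# (assumes no duplicates within a single file).
sort a.txt b.txt | uniq -d        # john

# Overall difference: lines occurring in exactly one file.
sort a.txt b.txt | uniq -u        # adam, mary
```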

Mathematical difference (elements which appear only in one particular file, fileX):
sort files... | uniq -u | sort - <(sort -u fileX ) | uniq -d

The first part of the pipeline (sort files... | uniq -u) gets all elements which appear in exactly one file. Then we merge this with the file we're interested in. Command breakdown for sort - <(sort -u fileX ):

The - will process stdin (i.e. the list of all unique elements).

<(...) runs a command, writes the output in a temporary file and passes the path to the file to the command.

So this gives us a mix of all unique elements plus all unique elements in fileX. The duplicates are then the unique elements which are only in fileX.
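Running that pipeline against the question's a.txt and b.txt makes the steps concrete (here fileX is a.txt, so the result is the element found only in a.txt):

```shell
# Recreate the question's sample files.
printf 'john\nmary\n' > a.txt
printf 'adam\njohn\n' > b.txt

# Step 1: elements that occur in exactly one of the files.
sort a.txt b.txt | uniq -u        # adam, mary

# Step 2: of those, keep the ones that also occur in a.txt.
# (<(...) is process substitution, so this needs bash or zsh.)
sort a.txt b.txt | uniq -u | sort - <(sort -u a.txt) | uniq -d   # mary
```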

Aaron Digulla
  • The last answer does not compute the set difference; it gives the entries unique to *either* file rather than unique to the *first* file (a true set difference). To get the true set difference for file1: `sort | uniq -u | sort - | uniq -d`. Note that the `<>` are not actually part of the code and that the `-` is needed for the second sort, as it tells it to include the output of stdin (in this case the result of uniq -u). Also, file1 needs to be a part of `` – Cole May 27 '20 at 20:01
  • I think we disagree what "difference" means. And there is a problem with your code: If file1 contains `a,a,x`, that would print `a` even though it's not unique. The underlying problem is that the file isn't really a set. You probably need to resort to BASH magic like `sort - <(sort -u )` – Aaron Digulla Jun 19 '20 at 09:28
  • Ah, good catch there. You are right, the contents of `file1` need to be made unique when passed in the second time, otherwise you will get anything duplicated in `file1` and unique with respect to the larger comparison. – Cole Jun 23 '20 at 22:39

If you want to get the common lines between two files, you can use the comm utility.

A.txt :

A
B
C

B.txt

A
B
D

and then, using comm will give you:

$ comm <(sort A.txt) <(sort B.txt)
        A
        B
C
    D

In the first column, you have what is in the first file and not in the second.

In the second column, you have what is in the second file and not in the first.

In the third column, you have what is in both files.
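comm can also produce each set operation on its own by suppressing columns (-1, -2 and -3 suppress the corresponding column). A quick check with the files above, which are already sorted, so no process substitution is needed:

```shell
# Recreate the answer's sample files (already in sorted order).
printf 'A\nB\nC\n' > A.txt
printf 'A\nB\nD\n' > B.txt

comm -12 A.txt B.txt   # intersection: A, B
comm -23 A.txt B.txt   # only in A.txt: C
comm -13 A.txt B.txt   # only in B.txt: D
```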

Cédric Julien

I can't comment on Aaron Digulla's answer, which despite being accepted does not actually compute the set difference.

The set difference A\B with the given inputs should only return mary, but the accepted answer also incorrectly returns adam.

This answer has an awk one-liner that correctly computes the set difference:

awk 'FNR==NR {a[$0]++; next} !a[$0]' b.txt a.txt
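With the question's files this prints only mary. FNR==NR is true only while the first argument (b.txt) is being read, so its lines are recorded in the array a; lines of a.txt are then printed only if they never appeared in b.txt:

```shell
# Recreate the question's sample files.
printf 'john\nmary\n' > a.txt
printf 'adam\njohn\n' > b.txt

# Set difference a.txt \ b.txt; needs no sorting and preserves order.
awk 'FNR==NR {a[$0]++; next} !a[$0]' b.txt a.txt   # mary
```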
curby

If you don't mind using a bit of Perl, and if your files are small enough that their lines can be held in a hash, you could collect the lines of the two files into two hashes (%from_1 and %from_2) and do:

#...get common keys in an array...
my @both_things;
for (keys %from_1) {
    push @both_things, $_ if exists $from_2{$_};
}

#...put keys unique to the first file in an array...
my @once_only;
for (keys %from_1) {
    push @once_only, $_ unless exists $from_2{$_};
}
JRFerguson