131

How can I find the unique lines and remove all duplicates from a file? My input file is

1
1
2
3
5
5
7
7

I would like the result to be:

2
3

sort file | uniq will not do the job, since it shows every value once.

Chris Seymour
amprantino
    The file must be sorted first. `sort file | uniq -u` will output to console for you. – ma77c Jul 10 '15 at 19:19
  • I think the reason `sort file | uniq` shows all the values 1 time is because it immediately prints the line it encounters the first time, and for the subsequent encounters, it just skips them. – MrObjectOriented Aug 28 '20 at 19:49

13 Answers

117

uniq has the option you need:

   -u, --unique
          only print unique lines
$ cat file.txt
1
1
2
3
5
5
7
7
$ uniq -u file.txt
2
3
Lev Levitsky
39

Use as follows:

sort < filea | uniq > fileb
Moritz Ringler
kasavbere
    This isn't correct, I think you meant: `uniq -u filea > fileb` – Chris Seymour Dec 08 '12 at 14:38
  • I copied your data and ran it, and it works: `sortfileb.txt`. Maybe you left out the extensions. I am using Mac OS X. You have to go from `filea.txt` to some other `fileb.txt` – kasavbere Dec 08 '12 at 14:53
    There is no need for the redirection with `sort`, and what's the point of piping to `uniq` when you could just do `sort -u file -o file`? What you're doing is removing the duplicate values, i.e. your `fileb` contains `1,2,3,5,7`; the OP wants the unique lines only, which is `2,3`, and is achieved by `uniq -u file`. File extension has nothing to do with it, your answer is wrong. – Chris Seymour Dec 08 '12 at 15:07
27

You can also print the unique values in "file" by piping cat through sort to uniq:

cat file | sort | uniq -u
octocatsup
19

I find this easier.

sort -u input_filename > output_filename

-u stands for unique.

Anant Mittal
18

While sort takes O(n log(n)) time, I prefer using

awk '!seen[$0]++'

awk '!seen[$0]++' is an abbreviation for awk '!seen[$0]++ {print}': print the line (=$0) if seen[$0] is zero. It takes more space but only O(n) time.
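A quick check with the question's data shows what this does: it keeps the first copy of every line (like `sort file | uniq`), rather than only the lines that never repeat:

```shell
$ printf '1\n1\n2\n3\n5\n5\n7\n7\n' | awk '!seen[$0]++'
1
2
3
5
7
```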

hychou
14

You can use:

sort data.txt | uniq -u

This sorts the data and filters out everything but the unique values.

blacker
11

uniq -u has been driving me crazy because it did not work.

So instead of that, if you have python (most Linux distros and servers already have it):

Assuming you have the data file in notUnique.txt

#Python
#Assuming the file has one record per line;
#otherwise adjust the split() accordingly.

uniqueData = []
for line in open('notUnique.txt').read().split('\n'):
  if line.strip() != '' and line not in uniqueData:
    uniqueData.append(line)

print(uniqueData)

###Another option (less keystrokes), order not preserved:
print(set(open('notUnique.txt').read().split('\n')))

Note that due to empty lines, the final set may contain '' or only-space strings. You can remove that later. Or just get away with copying from the terminal ;)


Just FYI, From the uniq Man page:

"Note: 'uniq' does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use 'sort -u' without 'uniq'. Also, comparisons honor the rules specified by 'LC_COLLATE'."

One of the correct ways to invoke it: sort notUnique.txt | uniq

Example run:

$ cat x
3
1
2
2
2
3
1
3

$ uniq x
3
1
2
3
1
3

$ uniq -u x
3
1
3
1
3

$ sort x | uniq
1
2
3

Spaces might be printed, so be prepared!

ashmew2
5
uniq -u < file

will do the job.

Shiplu Mokaddim
3

uniq should do fine if your file is or can be sorted. If you can't sort the file for some reason, you can use awk:

awk '{a[$0]++}END{for(i in a)if(a[i]<2)print i}'
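For example, on the question's data (note that awk's `for (i in a)` does not guarantee output order, so pipe through `sort` if order matters):

```shell
$ printf '1\n1\n2\n3\n5\n5\n7\n7\n' | awk '{a[$0]++}END{for(i in a)if(a[i]<2)print i}' | sort
2
3
```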
3
sort -d "file name" | uniq -u

This worked for me for a similar problem. Use this if the file is not already arranged; you can remove the sort if it is.

0

This was the first thing I tried:

skilla:~# uniq -u all.sorted  

76679787
76679787 
76794979
76794979 
76869286
76869286 
......

After running cat -e all.sorted:

skilla:~# cat -e all.sorted 
$
76679787$
76679787 $
76701427$
76701427$
76794979$
76794979 $
76869286$
76869286 $

Every second line has a trailing space :( After removing all trailing spaces it worked!
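One way to strip the trailing whitespace before comparing (a sketch, using the `all.sorted` filename from above; adjust to your file):

```shell
$ sed 's/[[:space:]]*$//' all.sorted | uniq -u
```

After the sed, lines that differ only in trailing spaces become identical, so uniq -u can discard them.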

thank you

amprantino
0

Instead of sorting and then using uniq, you could also just use sort -u. From sort --help:

  -u, --unique              with -c, check for strict ordering;
                            without -c, output only the first of an equal run
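Note, though, that sort -u keeps one copy of every line, whereas the question asks for only the lines that never repeat. A quick comparison on the question's data:

```shell
$ sort file | uniq -u    # only the lines that occur exactly once
2
3
$ sort -u file           # one copy of each line
1
2
3
5
7
```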
0

Short, foolproof way:

sort -u file
dtbarne