
I have a txt file with different lines:

aaaaaa
bbbbbb
cccccc
ababab
ababab
ababab
ababab

And I want to remove the duplicates but keep the first occurrence. Expected output:

aaaaaa
bbbbbb
cccccc
ababab

I've tried `sort file | uniq -u`, but the output did not include even a single `ababab` line.

Any help?

Mia Lua
  • Solved, I've just used `sort file | uniq` and it works! – Mia Lua Jul 13 '18 at 08:10
  • Just omit the `-u` and you'll get what you want. And it's `uniq`, not `unique` :-) – boojum Jul 13 '18 at 08:11
  • Possible duplicate of [How can I delete duplicate lines in a file in Unix?](https://stackoverflow.com/q/1444406/608639), [Remove duplicate entries using a Bash script](https://stackoverflow.com/q/9377040/608639), [How to remove duplicate files using bash](https://unix.stackexchange.com/q/192701/56041), [How to remove duplicated files in a directory?](https://superuser.com/q/386199/173513), etc. – jww Jul 13 '18 at 09:37

2 Answers


One way is:

 awk '!a[$0]++' file

Note that using `sort | uniq` may change the order of the lines in the file. The `awk` line above won't.
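A minimal sketch of that order-preserving behavior (the sample lines here are made up for the demo, not taken from the question):

```shell
# Duplicates are scattered, so order preservation matters:
# awk prints each line only the first time it is seen.
printf 'cccccc\nababab\naaaaaa\nababab\ncccccc\n' | awk '!a[$0]++'
# cccccc
# ababab
# aaaaaa
```

The condition `!a[$0]++` is true only when the line `$0` has not been counted in the array `a` yet, so later repeats are suppressed without any sorting.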

Kent

Use just `uniq`:

$ cat file
aaaaaa
bbbbbb
cccccc
ababab
ababab
ababab
ababab

$ cat file | uniq
aaaaaa
bbbbbb
cccccc
ababab

$ sort file | uniq -u
aaaaaa
bbbbbb
cccccc
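Worth noting: plain `uniq` only collapses *adjacent* duplicates, which works here because all the `ababab` lines are consecutive. A small sketch of the difference when duplicates are scattered (sample lines invented for the demo):

```shell
# uniq leaves non-adjacent duplicates alone...
printf 'ababab\naaaaaa\nababab\n' | uniq
# ababab
# aaaaaa
# ababab

# ...so scattered duplicates need a sort first (which reorders the lines):
printf 'ababab\naaaaaa\nababab\n' | sort -u
# aaaaaa
# ababab
```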
3sky