Hi all,
I have a file with several columns. I would like to sort by column 2 and then apply uniq to column 1. I found a post about using sort and uniq on the same column, but my problem is a little different. I am thinking of using something with sort and uniq, but I don't know how. Thanks.
4 Answers
You can use a pipe; note, however, that this does not modify the file in place.
Example:
$ cat initial.txt
1,3,4
2,3,1
1,2,3
2,3,4
1,4,1
3,1,3
4,2,4
$ cat initial.txt | sort -u -t, -k1,1 | sort -t, -k2,2
3,1,3
4,2,4
1,3,4
2,3,1
The result is sorted by key 2 and unique by key 1. Note that the result is displayed on the console; if you want it in a file, just use a redirect (> newFile.txt).
Another solution for this kind of more complex operation is to rely on another tool (depending on your preferences (and age): awk, perl, or python).
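For example, a minimal awk sketch (the array name seen is arbitrary): keep the first line seen for each value of column 1, then sort by column 2. This does not require the input to be sorted first:
$ # keep first line per column-1 value, then sort by column 2
$ awk -F, '!seen[$1]++' initial.txt | sort -t, -k2,2
3,1,3
4,2,4
1,3,4
2,3,1
Unlike sort -u, awk keeps the first occurrence in the original file order.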
EDIT: If I understood the new requirement correctly, the result is sorted by column 2, and column 1 is unique for a given column 2:
$ cat initial.txt | sort -u -t, -k1,2 | sort -t, -k2,2
3,1,3
1,2,3
4,2,4
1,3,4
2,3,1
1,4,1
Is that what you expect? Otherwise, I did not understand :-)

- Thanks Bruce for your answer. Hmm... but my case needs a sort first and then uniq. That means the first column may have duplicate values that are not next to each other. Any more ideas? I am a beginner with awk, but if you have a good solution using it, I would like to use it. Thanks. – Ken Jun 10 '11 at 05:26
- Could you provide a data sample and the expected result, e.g. on http://pastebin.com? I'm not sure I fully understand. – Bruce Jun 10 '11 at 05:31
- Cool... this one works for me, although it is not robust against non-consecutive duplicates. It will do for my current task. Thanks heaps. – Ken Jun 10 '11 at 05:40
- Just to help people understand the arcane syntax (after looking at man sort): -u means the sort is followed by uniqify, -t, means the separator is a comma, and -k1,2 means the key spans fields 1 to 2 inclusive; thus -k1,1 is just field 1. – John Jiang Jan 16 '14 at 04:50
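To illustrate the key-range syntax from that last comment (a quick sketch with made-up input; GNU sort assumed): -k2 extends the key from field 2 to the end of the line, while -k2,2 limits it to field 2 alone.
$ printf '1,2,9\n3,2,1\n' | sort -t, -k2
3,2,1
1,2,9
$ printf '1,2,9\n3,2,1\n' | sort -t, -k2,2
1,2,9
3,2,1
With -k2,2 the two keys tie, so sort falls back to comparing whole lines.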
Just to be sure I got what you mean correctly: you want to sort a file based on the second column, and then remove duplicates from the first column (another way of saying: apply uniq to column one!). To do this, you need to perform three tasks:
- sort the column on which uniq is going to be applied (since uniq only works on sorted input);
- apply uniq on the sorted column;
- sort the output based on the values in column two.
Using pipes, the command is:
sort -t ',' -k1,1 fileName | awk -F, '!x[$1]++' | sort -t ',' -k2,2
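Applied to the initial.txt from the accepted answer, this gives (output from GNU coreutils; other implementations may break ties among equal keys differently):
$ sort -t ',' -k1,1 initial.txt | awk -F, '!x[$1]++' | sort -t ',' -k2,2
3,1,3
1,2,3
4,2,4
2,3,1
Which duplicate survives differs from the sort -u version above: without -u, sort compares whole lines to break ties, so 1,2,3 sorts before 1,3,4 and is the line awk keeps.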
Note that you cannot tell uniq which field to compare; its -f switch only skips the first n fields. Hence, I used awk to replace uniq.
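To see why uniq's -f is not enough here (a quick sketch with made-up blank-separated input): -f1 skips field 1 and compares everything after it, which is the opposite of comparing field 1. Also, uniq only splits fields on blanks, so it would not split comma-separated data at all.
$ printf 'a 1\nb 1\n' | uniq -f1
a 1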

uniq needs the data to be in sorted order to work, so if you sort on the second field and then apply uniq on the first field, you won't get the correct result.
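A quick illustration of that point with made-up input: plain uniq only removes adjacent duplicates, so duplicates in unsorted data slip through.
$ printf 'a\nb\na\n' | uniq
a
b
a
$ printf 'a\nb\na\n' | sort | uniq
a
b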
You may want to try
sort -u -t, -k1,1 filename | sort -t, -k2,2

- Thanks Lobo, but I need to sort first and then find the unique values in the first column; there may be duplicate values in column 1, but they won't be next to each other. I am surprised that the uniq command in Linux doesn't have a parameter to specify a particular column. Thanks. – Ken Jun 10 '11 at 05:30
- The `uniq` command does give you options to choose fields. Check out `-f`, `-s` and the other options. Are you looking for `sort -t' ' -k2,2 b | uniq -f1`? Could you provide an example of the input and output you are looking for? – Praveen Lobo Jun 10 '11 at 05:40
- But `-f` and `-s` skip the FIRST number of fields/characters for the uniqueness comparison; they don't allow selecting a specific column. Bruce's 2nd answer works for my current task now. Thanks. – Ken Jun 10 '11 at 05:48