
I have a file like this:

80.13.178.2
80.13.178.2
80.13.178.2
80.13.178.2
80.13.178.1
80.13.178.3
80.13.178.3
80.13.178.3
80.13.178.4
80.13.178.4
80.13.178.7

I need to display unique entries for repeated lines (similar to uniq -d), but only entries that occur more than twice (twice being just an example; I need the flexibility to define the lower limit).

Output for this example should be like this when looking for entries with three or more occurrences:

80.13.178.2
80.13.178.3

2 Answers


Feed the output of uniq -cd to awk:

sort test.file | uniq -cd | awk -v limit=2 '$1 > limit{print $2}'
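As a quick check, the pipeline can be exercised end to end on the sample data from the question (test.file recreated here with printf):

```shell
# Recreate the sample file from the question
printf '%s\n' 80.13.178.2 80.13.178.2 80.13.178.2 80.13.178.2 \
  80.13.178.1 80.13.178.3 80.13.178.3 80.13.178.3 \
  80.13.178.4 80.13.178.4 80.13.178.7 > test.file

# uniq -cd prefixes each repeated line with its count, e.g. "4 80.13.178.2";
# awk keeps lines whose count ($1) exceeds the limit and prints the IP ($2)
sort test.file | uniq -cd | awk -v limit=2 '$1 > limit {print $2}'
# Output:
# 80.13.178.2
# 80.13.178.3
```

Note that uniq only collapses adjacent duplicates, which is why the sort in front is required.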

With pure awk:

awk '{a[$0]++}END{for(i in a){if(a[i] > 2){print i}}}' a.txt 

It iterates over the file and counts the occurrences of every IP. At the end of the file it outputs every IP that occurs more than 2 times.
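The hard-coded 2 can be made configurable the same way as in the first answer, via awk's -v option (a minor variation on the one-liner, shown here with the sample file recreated as a.txt):

```shell
# a.txt holds the sample IP list from the question
printf '%s\n' 80.13.178.2 80.13.178.2 80.13.178.2 80.13.178.2 \
  80.13.178.1 80.13.178.3 80.13.178.3 80.13.178.3 \
  80.13.178.4 80.13.178.4 80.13.178.7 > a.txt

# Count each line in array a; after the whole file is read,
# print every line seen more than `limit` times
awk -v limit=2 '{a[$0]++} END {for (i in a) if (a[i] > limit) print i}' a.txt
```

Because this counts in an array rather than comparing adjacent lines, no sort is needed beforehand; on the other hand, the iteration order of for (i in a) is unspecified in awk, so pipe the result through sort if you need deterministic output.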
