
I have a file of lines, each containing an address and a value, as shown below. It has duplicate addresses; I need to remove the duplicates and keep only one line per unique address.

Input:

A0:E6:F8:48:F0:3F BB
A0:E6:F8:48:87:D7 B6
A0:E6:F8:48:F1:AF B9
A0:E6:F8:48:36:EB B5
A0:E6:F8:48:32:94 B5
A0:E6:F8:48:38:6F AF
A0:E6:F8:48:6C:FC B7
A0:E6:F8:48:31:6E B6
A0:E6:F8:48:87:DA B0
A0:E6:F8:48:F0:3F B1
A0:E6:F8:48:F1:AF B1
A0:E6:F8:48:6C:FC BA
A0:E6:F8:48:31:6E B5

Expected output:

A0:E6:F8:48:F0:3F BB
A0:E6:F8:48:87:D7 B6
A0:E6:F8:48:F1:AF B9
A0:E6:F8:48:36:EB B5
A0:E6:F8:48:32:94 B5
A0:E6:F8:48:38:6F AF
A0:E6:F8:48:6C:FC B7
A0:E6:F8:48:31:6E B6
A0:E6:F8:48:87:DA B0

This should work for all addresses of the form XX:XX:XX:XX:XX:XX.

Ruslan Gerasimov
  • `sort` would also have worked fine: `sort -k1,1 -u` defines a sorting key on the first field (space-separated), then asks for unique output on this key – Aaron Jul 20 '17 at 13:18
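A minimal sketch of that suggestion, assuming the records sit one per line in a file named Input_file (the file name is just an example); note that, unlike the awk answer below, sort reorders the output by address rather than preserving the original order:

sort -k1,1 -u Input_file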

1 Answer


If you want unique records based on the first field, try the following.

awk '!a[$1]++'   Input_file

This creates an array named a, indexed by each line's first field. The condition !a[$1]++ is true only the first time a given first field is seen, so awk prints that line (its default action); the ++ then increments the counter for that field, so any later line with the same first field fails the condition and is skipped.
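The same logic can be written out with an explicit action and comments; the array name seen is just a more descriptive choice than a, the behavior is identical:

awk '
!seen[$1]++ {   # true only the first time this address (field 1) appears
    print       # keep the line; later lines with the same address are skipped
}' Input_file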

RavinderSingh13