172

I have an array in Perl:

my @my_array = ("one","two","three","two","three");

How do I remove the duplicates from the array?

– David

11 Answers

178

You can do something like this as demonstrated in perlfaq4:

sub uniq {
    my %seen;
    grep !$seen{$_}++, @_;
}

my @array = qw(one two three two three);
my @filtered = uniq(@array);

print "@filtered\n";

Outputs:

one two three

If you want to use a module, try the uniq function from List::MoreUtils.
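
For example, a minimal sketch using that module on the question's sample data (it assumes List::MoreUtils is installed from CPAN):

use strict;
use warnings;
use List::MoreUtils qw(uniq);   # CPAN module, not in core Perl

my @array    = qw(one two three two three);
my @filtered = uniq(@array);    # keeps the first occurrence of each element, preserving order

print "@filtered\n";            # prints: one two three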

– Greg Hewgill
  • please don't use $a or $b in examples as they are the magic globals of sort() – szabgab Sep 17 '08 at 07:50
  • It's a `my` lexical in this scope, so it's fine. That being said, possibly a more descriptive variable name could be chosen. – ephemient Jan 18 '10 at 17:51
  • @ephemient yes, but if you were to add sorting in this function then it would trump `$::a` and `$::b`, wouldn't it? – vol7ron Feb 21 '12 at 16:45
  • @szabgab, if that's the case, that's an incredibly bad design decision for `sort` to use non-local variables. – Brian Vandenberg Jun 14 '12 at 21:12
  • @BrianVandenberg Welcome to the world of 1987 - when this was created - and almost 100% backword compbaility for perl - so it cannot be eliminated. – szabgab Jun 25 '12 at 08:19
  • `sub uniq { my %seen; grep !$seen{$_}++, @_ }` is a better implementation since it preserves order at no cost. Or even better, use the one from List::MoreUtils. – ikegami Nov 06 '12 at 18:51
  • @szabgab means "backward" compatible, sorry it was bugging me ;-) – Tyler Aug 29 '15 at 05:17
  • Perl v5.26.0 onwards, `List::Util` has `uniq`, so MoreUtils wouldn't be needed – Sundeep Oct 30 '20 at 08:33
  • Will this keep the order of selected items unchanged? Can this method be used on a sorted array to preserve the order? – Ωmega Jan 27 '23 at 13:37
127

The Perl documentation comes with a nice collection of FAQs. Your question is frequently asked:

% perldoc -q duplicate

The answer, copied and pasted from the output of the command above, appears below:


Found in /usr/local/lib/perl5/5.10.0/pods/perlfaq4.pod

How can I remove duplicate elements from a list or array? (contributed by brian d foy)

Use a hash. When you think the words "unique" or "duplicated", think "hash keys".

If you don't care about the order of the elements, you could just create the hash then extract the keys. It's not important how you create that hash: just that you use "keys" to get the unique elements.

   my %hash   = map { $_, 1 } @array;
   # or a hash slice: @hash{ @array } = ();
   # or a foreach: $hash{$_} = 1 foreach ( @array );

   my @unique = keys %hash;

If you want to use a module, try the "uniq" function from "List::MoreUtils". In list context it returns the unique elements, preserving their order in the list. In scalar context, it returns the number of unique elements.

   use List::MoreUtils qw(uniq);

   my @unique = uniq( 1, 2, 3, 4, 4, 5, 6, 5, 7 ); # 1,2,3,4,5,6,7
   my $unique = uniq( 1, 2, 3, 4, 4, 5, 6, 5, 7 ); # 7

You can also go through each element and skip the ones you've seen before. Use a hash to keep track. The first time the loop sees an element, that element has no key in %seen. The "next" statement creates the key and immediately uses its value, which is "undef", so the loop continues to the "push" and increments the value for that key. The next time the loop sees that same element, its key exists in the hash and the value for that key is true (since it's not 0 or "undef"), so the "next" skips that iteration and the loop goes to the next element.

   my @unique = ();
   my %seen   = ();

   foreach my $elem ( @array )
   {
     next if $seen{ $elem }++;
     push @unique, $elem;
   }

You can write this more briefly using a grep, which does the same thing.

   my %seen = ();
   my @unique = grep { ! $seen{ $_ }++ } @array;
– John Siracusa
71

Install List::MoreUtils from CPAN

Then in your code:

use strict;
use warnings;
use List::MoreUtils qw(uniq);

my @dup_list = qw(1 1 1 2 3 4 4);

my @uniq_list = uniq(@dup_list);
– Ranguard
  • The fact that List::MoreUtils is not bundled w/ perl kinda damages the portability of projects using it :( (I for one won't) – yPhil Mar 19 '12 at 02:00
  • @Ranguard: `@dup_list` should be inside the `uniq` call, not `@dups` – incutonez Nov 11 '13 at 14:48
  • @yassinphilip CPAN is one of the things that make Perl as powerful and great as it can be. If you write your projects based only on core modules, you're putting a huge limit on your code, along with possibly poorly written code that attempts to do what some modules do much better, just to avoid using them. Also, using core modules doesn't guarantee anything, as different Perl versions can add or remove core modules from the distribution, so portability still depends on that. – Francisco Zarabozo Jun 27 '17 at 14:38
  • Perl v5.26.0 onwards, `List::Util` has `uniq`, so MoreUtils wouldn't be needed – Sundeep Oct 30 '20 at 08:32
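
Following up on that last comment: from Perl v5.26.0 the core List::Util module ships its own uniq, so on a modern Perl no extra CPAN install is needed. A minimal sketch:

use strict;
use warnings;
use List::Util qw(uniq);    # in core since Perl v5.26 (List::Util 1.45)

my @dup_list  = qw(1 1 1 2 3 4 4);
my @uniq_list = uniq(@dup_list);    # (1, 2, 3, 4), first occurrences kept in order

print "@uniq_list\n";
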
24

My usual way of doing this is:

my %unique = ();
foreach my $item (@myarray)
{
    $unique{$item} ++;
}
my @myuniquearray = keys %unique;

Because you use a hash and add the items to it as keys, you also get the bonus of knowing how many times each item appears in the list.
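
As an illustration of that bonus, here is a small sketch (using the sample data from the question) that prints the count for each item while collecting the unique keys:

my @myarray = qw(one two three two three);
my %unique  = ();

foreach my $item (@myarray) {
    $unique{$item}++;    # count every occurrence while collecting unique keys
}

# Each key is a distinct item; its value is how many times that item appeared.
foreach my $item (sort keys %unique) {
    print "$item appears $unique{$item} time(s)\n";
}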

– Xetius
11

This can be done with a simple Perl one-liner.

my @in  = qw(1 3 4 6 2 4 3 2 6 3 2 3 4 4 3 2 5 5 32 3);   # sample data
my @out = keys %{{ map { $_ => 1 } @in }};                 # perform PFM
print join ' ', sort { $a <=> $b } @out;                   # print the data back out, sorted numerically

The PFM block does this:

The data in @in is fed into map. map builds an anonymous hash whose keys are the elements of @in. keys then extracts the unique keys from that hash, and they are fed into @out.

– Hawk
9

Method 1: Use a hash

Logic: A hash can only have unique keys, so iterate over the array, assigning any value to each element while using the element itself as the hash key. Returning the keys of the hash gives you your unique array.

# Dereference explicitly: calling keys on a bare hash reference was an
# experimental feature that was removed in Perl 5.28.
my @unique = keys %{{ map { $_ => 1 } @array }};

Method 2: Extension of method 1 for reusability

It is better to make a subroutine if we need this functionality multiple times in our code.

sub get_unique {
    my %seen;
    grep !$seen{$_}++, @_;
}
my @unique = get_unique(@array);

Method 3: Use module List::MoreUtils

use List::MoreUtils qw(uniq);
my @unique = uniq(@array);
– Kamal Nayan
8

The variable @array is the list with duplicate elements:

my %seen = ();
my @unique = grep { ! $seen{$_}++ } @array;
– Sreedhar
4

That last one was pretty good. I'd just tweak it a bit:

my @arr;        # the input list, possibly containing duplicates
my @uniqarr;

foreach my $var ( @arr ){
  # use eq (string equality) rather than a regex match, so "one" doesn't
  # match "oneself" and regex metacharacters in the data cause no trouble
  if ( ! grep { $_ eq $var } @uniqarr ){
     push( @uniqarr, $var );
  }
}

I think this is probably the most readable way to do it.

– jh314
1

Previous answers pretty much summarize the possible ways of accomplishing this task.

However, I suggest a modification for those who don't care about counting the duplicates, but do care about order.

my @record = qw( yeah I mean uh right right uh yeah so well right I maybe );
my %record;
print grep !$record{$_} && ++$record{$_}, @record;

Note that the previously suggested grep !$seen{$_}++ ... increments $seen{$_} before negating, so the increment occurs regardless of whether it has already been %seen or not. The above, however, short-circuits when $record{$_} is true, leaving what's been heard once 'off the %record'.
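
To see the difference concretely, here is a small sketch run on the @record data above: with !$seen{$_}++ the hash ends up holding full occurrence counts, while the short-circuiting version never pushes a value past 1 (both produce the same deduplicated list):

my @record = qw( yeah I mean uh right right uh yeah so well right I maybe );
my (%seen, %record);

my @counted  = grep !$seen{$_}++,                  @record;  # %seen values become occurrence counts
my @filtered = grep !$record{$_} && ++$record{$_}, @record;  # %record values never exceed 1

print "$seen{right} vs $record{right}\n";   # prints: 3 vs 1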

You could also go for this ridiculousness, which takes advantage of autovivification and existence of hash keys:

...
grep !(exists $record{$_} || undef $record{$_}), @record;

That, however, might lead to some confusion.

And if you care about neither order nor duplicate count, you could go for another hack using hash slices and the trick I just mentioned:

...
undef @record{@record};
keys %record; # your record, now probably scrambled but at least deduped
– YenForYang
0

Try this self-contained example; the uniq helper works on unsorted input (it keeps the first occurrence of each element), and you can combine it with sort if you also want the result in sorted order.

use strict;

# Helper function to remove duplicates in a list.
sub uniq {
  my %seen;
  grep !$seen{$_}++, @_;
}

my @teststrings = ("one", "two", "three", "one");

my @filtered = uniq @teststrings;
print "uniq: @filtered\n";
my @sorted = sort @teststrings;
print "sort: @sorted\n";
my @sortedfiltered = uniq sort @teststrings;
print "uniq sort : @sortedfiltered\n";
– saschabeaumont
0

Using the concept of unique hash keys:

my @array  = ("a","b","c","b","a","d","c","a","d");
my %hash   = map { $_ => 1 } @array;
my @unique = keys %hash;
print "@unique","\n";

Output: a c b d (the order of keys %hash is not guaranteed, so it may vary between runs)

– Sandeep_black