
Possible Duplicate:
How to make elements of vector unique? (remove non adjacent duplicates)

Is there a standard algorithm, provided as part of the STL algorithms, that can remove duplicates from an array while preserving the order? For example, if I have an array like int a[] = {2,1,3,1,4,2}; after the removal of duplicates it should be a[] = {2,1,3,4};. I cannot use std::unique as the array is not sorted. With other solutions, like inserting the elements into a std::set, I lose the order because the elements get sorted. Is there another combination of algorithms I can use, or do I have to code my own?

Naveen
  • This will be algorithmically inefficient, you know. You could do it in O(N^2) with O(1) memory, or I think in O(N * log N) with O(N) memory. – GManNickG Aug 30 '10 at 08:27
  • @GMan: You could do it in O(n) if you did a simultaneous bucket sort with O(2^32) buckets. – Martin York Aug 30 '10 at 08:33

3 Answers


There is no standard algorithm for this, but it's fairly easy to implement. The principle is to keep a std::set of the items you've seen so far and skip duplicates while copying to a new vector or array. This runs in O(n lg n) time and O(n) memory. If you're using C++0x, you can get it down to expected O(n) time by using std::unordered_set for the seen-items set; it uses a hash table instead of a balanced binary tree and should be faster.
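
For illustration, here is a minimal sketch of that idea in C++0x, copying into a new vector and tracking seen values in a std::unordered_set (the function and variable names are placeholders, not a standard API):

#include <unordered_set>
#include <vector>

// Sketch: keep only the first occurrence of each value, preserving order.
// Expected O(n) time on average, O(n) extra memory for the "seen" set.
template <typename T>
std::vector<T> unique_keep_order(const std::vector<T> &input)
{
   std::unordered_set<T> seen;
   std::vector<T> result;
   result.reserve(input.size());
   for (typename std::vector<T>::const_iterator it = input.begin(); it != input.end(); ++it)
   {
      // insert() returns a pair; .second is true only if the value was new
      if (seen.insert(*it).second)
         result.push_back(*it);
   }
   return result;
}

Swapping std::unordered_set for std::set gives the O(n lg n) variant that works without C++0x, requiring operator< on the value type instead of a hash function.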

bdonlan
    Hashes never had `O(1)` performance. And never will. With one exception: hash has `O(1)` lookup/insert time only in the dreams of deluded R&Ds sitting behind thick walls, well protected from the customer feedback. – Dummy00001 Aug 30 '10 at 11:34
  • I think you somewhat misunderstand what `O(1)` means. It doesn't mean fast, it means constant whatever the variables we are talking about (here the size of the container). Note that instead of using a Hash Table, if you have a great number of items, you might prefer a Bloom Filter. – Matthieu M. Aug 30 '10 at 13:57
  • A bloom filter has a chance of false positives. It only really helps when a) positives are rare and b) the cost of a lookup against the true backing store is expensive (or false positives are acceptable). If the true backing set is kept in memory, it's not a win. – bdonlan Aug 30 '10 at 16:13

Since the problem is relatively "complex", I wouldn't try to force a solution using standard algorithms only (there is no dedicated algorithm for your problem; you could probably hack something together with remove_if, find and bind2nd or the like).

For implementing the algorithm yourself, you basically have two options, with the usual memory vs. speed trade-off. The first solution is to iterate over the vector and, for the current item, search for and remove its duplicates in the rest of the vector. This is the CPU-intensive approach. A possibly faster approach is to create a second vector (of the same size as the first, to minimize memory reallocations) and store the items found so far in there. Then, on each iteration over the original vector, only the shorter second vector needs to be searched to decide whether the current item should be deleted or not. The first approach works with every iterator, while the second is limited to random access iterators. Here are the implementations:

#include <iostream>
#include <algorithm>
#include <vector>

using namespace std;

// Keeps a second container of the values seen so far and erases an element
// from the original container if its value has already been seen.
template<typename T>
void remove_duplicates_ordered_mem_intensive(T &container)
{
   std::vector<typename T::value_type> items;
   items.reserve (container.size());

   typename T::iterator i = container.begin();
   while (i != container.end())
   {
      if (find (items.begin(), items.end(), *i) != items.end())
         i = container.erase(i);
      else
      {
         items.push_back(*i);
         ++i;
      }
   }
} 

// For each element, scans the rest of the container and erases any later
// occurrences of the same value. Uses no extra memory, but needs a
// quadratic number of comparisons.
template<typename T>
void remove_duplicates_ordered_slow(T &container)
{
   typename T::iterator i = container.begin();
   while (i != container.end())
   {
      typename T::iterator f = i;
      ++f;
      while (f != container.end())
      {
         if (*f == *i)
            f = container.erase(f);
         else
            ++f;
      }
      ++i;
   }
} 

int main ()
{
   vector<int> v;
   v.push_back (2);
   v.push_back (1);
   v.push_back (3);
   v.push_back (1);
   v.push_back (4);
   v.push_back (2); 

   cout << "Old:\n";
   for (vector<int>::const_iterator i = v.begin(); i != v.end(); ++i)
      cout << *i << endl;


   vector<int> a (v), b (v);
   remove_duplicates_ordered_mem_intensive (a);
   remove_duplicates_ordered_slow (b); 

   cout << "\nRemoved duplicates with intensive memory usage:\n";
   for (vector<int>::const_iterator i = a.begin(); i != a.end(); ++i)
      cout << *i << endl; 

   cout << "\nRemoved duplicates somewhat slower, without copying:\n";
   for (vector<int>::const_iterator i = b.begin(); i != b.end(); ++i)
      cout << *i << endl;
}
Daniel Albuschat
  • I'd rename those `unique_unordered` and `unique_unordered_inplace`. :) – GManNickG Aug 30 '10 at 08:36
  • If you're going to maintain a vector of items seen so far anyway, might as well use `std::set` or (in C++0x) `std::unordered_set`. Also, note that `erase` is slow (O(n), making the overall algorithm O(n^2)) for `std::vector` and `std::deque`; you could instead copy just one element at a time by maintaining two pointers into the vector (one to the item being read, one to the next empty slot to write), then truncate the vector at the end to the new item count. Or just copy to a new vector. – bdonlan Aug 30 '10 at 08:50
  • Ack. Also see the Effective STL book by Scott Meyers, which elaborates on this topic. There's still potential left for optimization; I just wanted to point out the basic idea. – Daniel Albuschat Aug 30 '10 at 12:38
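
To illustrate the read/write-index variant from bdonlan's comment above (a sketch only; the helper name and the use of std::unordered_set are my own choices, not part of the answer):

#include <unordered_set>
#include <vector>

// Sketch: stable in-place de-duplication without repeated erase() calls.
// One index reads, one index writes; the leftover tail is cut off at the end.
template <typename T>
void remove_duplicates_ordered_inplace(std::vector<T> &v)
{
   std::unordered_set<T> seen;
   typename std::vector<T>::size_type write = 0;
   for (typename std::vector<T>::size_type read = 0; read != v.size(); ++read)
   {
      if (seen.insert(v[read]).second) // first time this value appears
         v[write++] = v[read];
   }
   v.erase(v.begin() + write, v.end()); // truncate to the new item count
}

This keeps the first occurrence of each value in its original order and runs in expected O(n) time, avoiding the O(n^2) cost of repeatedly erasing from the middle of the vector.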

remove duplicates from an array

This is technically impossible, because arrays cannot change size.
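
The closest you can get with a plain array is what std::unique does for the sorted case: move the elements you want to keep to the front and return the new logical length, leaving the array's real size untouched. A rough sketch of that (the function name is mine, not a standard algorithm):

#include <cstddef>
#include <set>

// Sketch: compact the first occurrence of each value to the front of the
// array and return the new logical length; the array itself never shrinks.
std::size_t dedup_keep_order(int a[], std::size_t n)
{
   std::set<int> seen;
   std::size_t write = 0;
   for (std::size_t read = 0; read != n; ++read)
   {
      if (seen.insert(a[read]).second)
         a[write++] = a[read];
   }
   return write; // a[0] .. a[write-1] hold the de-duplicated sequence
}

Called on int a[] = {2,1,3,1,4,2}; with n = 6, this leaves {2,1,3,4} in the first four slots and returns 4.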

fredoverflow