You cannot avoid duplicates with List&lt;T&gt; itself - the class performs no verification of the items you add.
If you don't care about the order of items, use HashSet&lt;T&gt;.
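For example, a plain HashSet&lt;T&gt; already filters out the duplicates for you:
var set = new HashSet<int> { 1, 2, 1, 3, 5, 2, 5, 3, 4 };
// set contains 1, 2, 3, 5, 4 - but no iteration order is guaranteed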
If you want to preserve the order of items (there is actually a small ambiguity here: should an item appear at the index of its first addition or of its last addition?) and still be sure that all items are unique, then you should write your own list class, i.e. something that implements the IList<T> interface:
using System;
using System.Collections;
using System.Collections.Generic;

public class ListWithoutDuplicates<T> : IList<T>
And you have different options here. E.g. you should decide what is more important to you: fast addition, or memory consumption. Fast Add and Contains operations call for some hash-based data structure, but a hash-based structure alone is unordered, so it has to be paired with an ordered list. Here is a sample implementation that keeps a HashSet<T> mirroring the items stored in the internal list. (Storing only the items' hash codes would not be enough: two different items can share a hash code, so an equal hash does not prove a duplicate.) You will need the following fields:
private readonly HashSet<T> set = new HashSet<T>();   // fast O(1) membership checks, uses EqualityComparer<T>.Default
private readonly List<T> items = new List<T>();       // preserves insertion order
Adding items is simple (warning: no null checks here or further on) - the HashSet<T> tells you in O(1) whether an equal item has already been added. Use the same approach for removing items:
public void Add(T item)
{
    // HashSet<T>.Add returns false when an equal item is already present,
    // so the check and the bookkeeping are a single O(1) call.
    if (set.Add(item))
        items.Add(item);
}

public bool Remove(T item)
{
    if (!set.Remove(item))
        return false;       // nothing to remove
    items.Remove(item);     // O(n) - the list has to shift elements anyway
    return true;
}
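A quick check of what this gives you (hypothetical usage, once the class below is complete):
var letters = new ListWithoutDuplicates<string>();
letters.Add("a");
letters.Add("b");
letters.Add("a");                        // ignored - "a" is already stored
Console.WriteLine(letters.Count);        // 2
Console.WriteLine(letters.Remove("c"));  // False - nothing to remove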
Some index-based operations:
public int IndexOf(T item)
{
    // Skip the O(n) scan when the item is definitely not stored.
    if (!set.Contains(item))
        return -1;
    return items.IndexOf(item);
}
public void Insert(int index, T item)
{
    // IList<T>.Insert must insert and shift, not overwrite,
    // so only a duplicate check is needed before delegating to the inner list.
    if (set.Contains(item))
        throw new ArgumentException("Cannot add duplicate item");
    set.Add(item);
    items.Insert(index, item);
}
public void RemoveAt(int index)
{
    set.Remove(items[index]);
    items.RemoveAt(index);
}
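A quick illustration of how those index-based members behave (hypothetical usage):
var numbers = new ListWithoutDuplicates<int> { 1, 2, 3 };
numbers.Insert(1, 5);                    // numbers is now 1, 5, 2, 3
Console.WriteLine(numbers.IndexOf(2));   // 2
numbers.RemoveAt(0);                     // numbers is now 5, 2, 3
numbers.Insert(0, 2);                    // throws ArgumentException - 2 is already stored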
And the left-overs. The indexer's setter replaces the item at the given index and rejects a value that is already stored elsewhere in the list:
public T this[int index]
{
    get { return items[index]; }
    set
    {
        var existing = items[index];
        if (set.Comparer.Equals(existing, value))
            return;                      // same item - nothing to do
        if (set.Contains(value))
            throw new ArgumentException("Cannot add duplicate item");
        set.Remove(existing);
        set.Add(value);
        items[index] = value;
    }
}
public int Count => items.Count;
public bool Contains(T item) => set.Contains(item);
public IEnumerator<T> GetEnumerator() => items.GetEnumerator();
IEnumerator IEnumerable.GetEnumerator() => items.GetEnumerator();
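To actually compile against IList<T> you also need the few remaining interface members; a minimal version looks like this:
public bool IsReadOnly => false;

public void Clear()
{
    set.Clear();
    items.Clear();
}

public void CopyTo(T[] array, int arrayIndex) => items.CopyTo(array, arrayIndex);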
That's it. Now you have a list implementation that adds each item only once (the first time it appears). E.g.
var list = new ListWithoutDuplicates<int> { 1, 2, 1, 3, 5, 2, 5, 3, 4 };
will create a list with the items 1, 2, 3, 5, 4. Note: if memory consumption is more important than performance, then drop the extra HashSet<T> and rely on the items.Contains operation instead, which is O(n).
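A minimal sketch of that memory-friendly variant for Add (the same idea applies to the other members):
public void Add(T item)
{
    // O(n) linear scan instead of a second collection
    if (!items.Contains(item))
        items.Add(item);
}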
BTW, what we just did is actually an IList<T> decorator - a plain List<T> wrapped with extra duplicate-checking behavior behind the same interface.