Anyone have a quick method for de-duplicating a generic List in C#?
-
Do you care about the order of elements in the result? This will exclude some solutions. – Colonel Panic Dec 01 '17 at 09:25
-
A one line solution: `ICollection<T> withoutDuplicates = new HashSet<T>(inputList);` – Harald Coppoolse Mar 13 '19 at 12:44
-
where would this method be used?? – kimiahdri Sep 11 '22 at 17:57
32 Answers
If you're using .NET 3.5+, you can use LINQ.
List<T> withDupes = LoadSomeData();
List<T> noDupes = withDupes.Distinct().ToList();

-
No, it works with lists containing objects of any type. But you will have to override the default comparer for your type, like so: `public override bool Equals(object obj){...}` – BaBu Dec 09 '10 at 14:27
-
It's always a good idea to override Equals() and GetHashCode() in your classes so that this kind of thing will work. – B Seven Apr 08 '11 at 16:58
-
You can also use the MoreLINQ NuGet package, which has a `.DistinctBy()` extension method. Pretty useful. – yu_ominae May 16 '13 at 02:49
-
Distinct isn't guaranteed to preserve order; it's implementation dependent. – Tod Cunningham May 29 '20 at 17:15
-
Mine required `List<MyType> noDupes = withDupes.Distinct(new MyType()).ToList();` in order to work. – John Kurtz Jan 22 '21 at 12:07
-
This is not a full solution, and will not work for complex objects. For a solution that works with objects of any type, see my answer https://stackoverflow.com/questions/47752/remove-duplicates-from-a-listt-in-c-sharp/70162977#70162977 – Onat Korucu Nov 30 '21 at 00:46
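To illustrate the comments above, here is a minimal sketch, assuming a hypothetical Person class, of overriding Equals and GetHashCode so that Distinct() treats two instances holding the same data as duplicates:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical type: without these overrides, Distinct() falls back to
// reference equality and removes nothing.
class Person
{
    public string Name { get; set; }

    public override bool Equals(object obj) =>
        obj is Person other && Name == other.Name;

    // Must be consistent with Equals, or Distinct()/HashSet<T> will misbehave.
    public override int GetHashCode() => Name == null ? 0 : Name.GetHashCode();
}

class Demo
{
    static void Main()
    {
        var people = new List<Person> { new Person { Name = "Ada" }, new Person { Name = "Ada" } };
        Console.WriteLine(people.Distinct().Count()); // prints 1
    }
}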
Perhaps you should consider using a HashSet.
From the MSDN link:
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        HashSet<int> evenNumbers = new HashSet<int>();
        HashSet<int> oddNumbers = new HashSet<int>();

        for (int i = 0; i < 5; i++)
        {
            // Populate evenNumbers with just even numbers.
            evenNumbers.Add(i * 2);

            // Populate oddNumbers with just odd numbers.
            oddNumbers.Add((i * 2) + 1);
        }

        Console.Write("evenNumbers contains {0} elements: ", evenNumbers.Count);
        DisplaySet(evenNumbers);

        Console.Write("oddNumbers contains {0} elements: ", oddNumbers.Count);
        DisplaySet(oddNumbers);

        // Create a new HashSet populated with the even numbers.
        HashSet<int> numbers = new HashSet<int>(evenNumbers);
        Console.WriteLine("numbers UnionWith oddNumbers...");
        numbers.UnionWith(oddNumbers);

        Console.Write("numbers contains {0} elements: ", numbers.Count);
        DisplaySet(numbers);
    }

    private static void DisplaySet(HashSet<int> set)
    {
        Console.Write("{");
        foreach (int i in set)
        {
            Console.Write(" {0}", i);
        }
        Console.WriteLine(" }");
    }
}

/* This example produces output similar to the following:
 * evenNumbers contains 5 elements: { 0 2 4 6 8 }
 * oddNumbers contains 5 elements: { 1 3 5 7 9 }
 * numbers UnionWith oddNumbers...
 * numbers contains 10 elements: { 0 2 4 6 8 1 3 5 7 9 }
 */


-
It's unbelievably fast... 100,000 strings with List takes 400s and 8MB RAM, my own solution takes 2.5s and 28MB, HashSet takes 0.1s and 11MB RAM! – sasjaq Mar 25 '13 at 22:28
-
`HashSet` [doesn't have an index](http://stackoverflow.com/questions/3828973/select-element-index-from-hashset-c-sharp/3828992#3828992), therefore it's not always possible to use it. I once had to create a huge list without duplicates and then use it for a `ListView` in virtual mode. It was super-fast to make a `HashSet<>` first and then convert it into a `List<>` (so `ListView` can access items by index). `List<>.Contains()` is too slow. – Sinatr Jul 31 '13 at 08:50
-
Would help if there were an example of how to use a hashset in this particular context. – Nathan McKaskle Jan 28 '15 at 17:04
-
HashSet is great in most circumstances. But if you have an object like DateTime, it compares by reference and not by value, so you will still end up with duplicates. – Jason McKindly Dec 09 '15 at 20:03
-
In case someone else would like to use a `HashSet` inside a T4 template, you have to add an explicit reference to the System.Core assembly: `<#@ assembly name="System.Core" #> <#@ import namespace="System.Collections.Generic" #>` (see http://stackoverflow.com/questions/247005/how-can-i-use-linq-in-a-t4-template) – Cesar Jan 09 '16 at 09:31
-
@sasjaq why don't you post your solution then if it's so good and fast? – barlop Sep 05 '16 at 14:58
-
@barlop my solution is 150x faster than List itself, but 25x slower than HashSet... btw, my solution is to split each string into 2-letter keys and make a tree structure, where you can compare a path for each 2-letter strip of the searched term. – sasjaq Sep 06 '16 at 09:30
-
note that LINQ's Distinct uses an internal `Set<T>` class https://referencesource.microsoft.com/#System.Core/System/Linq/Enumerable.cs,4ab583c7d8e84d6d – pm100 Feb 09 '17 at 16:31
-
@JasonMcKindly: why do you think that? It just needs to provide a meaningful `Equals`+`GetHashCode` (true for `DateTime`). Even if there is none in the type, you can use the [`HashSet` constructor](https://msdn.microsoft.com/en-us/library/bb503873(v=vs.110).aspx) that takes an `IEqualityComparer<T>` as argument. – Tim Schmelter Feb 09 '17 at 16:32
-
HashSet is not good when you have very few items (300 instead of 10,000). It is WAY faster to use a List, or even better an array: add the items you want, sort just before you need it, then remove duplicates. Even doing all that by hand is faster than using a HashSet for a very small number of items. – Darkgaze May 28 '19 at 14:19
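Picking up Nathan McKaskle's request above, a minimal sketch of using a HashSet in this particular context (the sample data is illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

class DedupeDemo
{
    static void Main()
    {
        var withDupes = new List<int> { 1, 2, 2, 3, 3, 3 };

        // The HashSet<int> constructor silently drops duplicates;
        // ToList() converts back when a List<int> is required.
        var noDupes = new HashSet<int>(withDupes).ToList();

        Console.WriteLine(string.Join(" ", noDupes)); // 1 2 3
    }
}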
How about:
var noDupes = list.Distinct().ToList();
In .NET 3.5?
-
@darkgaze this just creates another list with only unique entries. So any duplicates will be removed and you're left with a list where every position has a different object. – hexagod Oct 04 '19 at 17:33
-
Does this work for a list of list items where the item codes are duplicated and you need to get a unique list? – venkat Jan 19 '20 at 21:10
Simply initialize a HashSet with a List of the same type:
var noDupes = new HashSet<T>(withDupes);
Or, if you want a List returned:
var noDupsList = new HashSet<T>(withDupes).ToList();

-
... and if you need a `List<T>` as result use `new HashSet<T>(withDupes).ToList()` – Tim Schmelter Feb 09 '17 at 16:29
Sort it, then scan pairs of adjacent elements, as the duplicates will clump together.
Something like this:
list.Sort();
Int32 index = list.Count - 1;
while (index > 0)
{
    if (list[index] == list[index - 1])
    {
        if (index < list.Count - 1)
            (list[index], list[list.Count - 1]) = (list[list.Count - 1], list[index]);
        list.RemoveAt(list.Count - 1);
        index--;
    }
    else
        index--;
}
Notes:
- Comparison is done from back to front, to avoid having to re-sort the list after each removal
- This example uses C# value tuples to do the swapping; substitute with appropriate code if you can't use that (see the sketch after these notes)
- The end-result is no longer sorted
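If value tuples aren't available (pre-C# 7), the swap inside the loop above can be written with the classic temporary variable instead; a direct substitution would be:

// Equivalent swap without value tuple syntax
var temp = list[index];
list[index] = list[list.Count - 1];
list[list.Count - 1] = temp;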

-
If I am not mistaken, most of the approaches mentioned above are just abstractions of this very routine, right? I would have taken your approach here, Lasse, because it's how I mentally picture moving through data. But now I am interested in performance differences between some of the suggestions. – Ian Patrick Hughes Aug 11 '09 at 20:52
-
Implement them and time them, only way to be sure. Even Big-O notation won't help you with actual performance metrics, only a growth effect relationship. – Lasse V. Karlsen Aug 12 '09 at 07:03
-
Don't do that. It's super slow. `RemoveAt` is a very costly operation on a `List` – Clément Feb 09 '13 at 21:53
-
Clément is right. A way to salvage this would be to wrap it in a method that yields with an enumerator and only returns distinct values. Alternatively you could copy the values to a new array or list. – JHubbard80 Oct 25 '13 at 17:08
-
this seems related, discusses issue of Remove being slow on a List http://stackoverflow.com/questions/6926554/how-to-quickly-remove-items-from-a-list – barlop Sep 05 '16 at 15:10
-
Instead of using RemoveAt you should SWAP with the last value on the list. Then remove the last items (reducing the count). Then Sort again. – Darkgaze May 28 '19 at 14:21
-
@darkgaze I took the liberty of improving on your suggestion, by running the comparison loop from back to front, I could avoid the resort in the loop. – Lasse V. Karlsen May 29 '19 at 06:22
-
However, there are far better alternatives in C#, like the Distinct answer up above, so I don't think anyone would use this code anyway. – Lasse V. Karlsen May 29 '19 at 06:24
-
@LasseVågsætherKarlsen Thanks. Where did you modify it? I wrote an answer to this thread too. https://stackoverflow.com/a/56344965/772739 – Darkgaze May 30 '19 at 11:23
-
`new` doesn't really have any reliable complexity guarantees, so using a temp HashSet or a sort then copy deduped to a new list can't really be guaranteed to be faster than anything else. Deduplication after a sort can be done in-place without any additional storage (beyond that used by the Sort itself). This doesn't matter that much if you're writing a script or a low-scale webservice or something. It matters a lot if you're making a videogame. I would take care to determine my needs before throwing linq at it in such a case (unless it just does a sort + inplace dedupe internally). – Merlyn Morgan-Graham Apr 01 '22 at 02:56
I like to use this command:
List<Store> myStoreList = Service.GetStoreListbyProvince(provinceId)
.GroupBy(s => s.City)
.Select(grp => grp.FirstOrDefault())
.OrderBy(s => s.City)
.ToList();
I have these fields in my list: Id, StoreName, City, PostalCode. I wanted to show a list of cities in a dropdown, which had duplicate values. Solution: group by city, then pick the first one for the list.
-
This worked for a case where I had multiple items with the same key, and had to keep only the one with the most recent update date. So the approach using "distinct" wouldn't work. – Paul Evans Oct 27 '20 at 04:01
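For that variation, a hedged sketch of keeping only the most recently updated item per key; the UpdatedOn property is hypothetical and stands in for whatever timestamp the items carry:

// Keep one store per city: the one updated most recently.
var latestPerCity = Service.GetStoreListbyProvince(provinceId)
    .GroupBy(s => s.City)
    .Select(grp => grp.OrderByDescending(s => s.UpdatedOn).First())
    .ToList();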
It worked for me. Simply use
List<Type> liIDs = liIDs.Distinct().ToList<Type>();
Replace "Type" with your desired type, e.g. int.

-
Distinct is in Linq, not System.Collections.Generic as reported by the MSDN page. – Almo Oct 01 '14 at 19:54
-
This answer (2012) seems to be the same as two other answers on this page that are from 2008? – Jon Schneider Jan 06 '16 at 21:33
As kronoz said, in .NET 3.5 you can use Distinct().
In .NET 2.0 you could mimic it:
public IEnumerable<T> DedupCollection<T>(IEnumerable<T> input)
{
    var passedValues = new HashSet<T>();

    // Relatively simple dupe check alg used as example
    foreach (T item in input)
        if (passedValues.Add(item)) // True if item is new
            yield return item;
}
This could be used to dedupe any collection and will return the values in the original order.
It's normally much quicker to filter a collection (as both Distinct() and this sample do) than it would be to remove items from it.
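A short usage sketch (assuming the method above is in scope); note that execution is deferred until the result is enumerated:

// Order-preserving dedupe of an arbitrary IEnumerable<T>
var deduped = DedupCollection(new[] { "a", "b", "a", "c" }).ToList();
// deduped: a, b, c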

-
The problem with this approach though is that it's O(N^2)-ish, as opposed to a hashset. But at least it's evident what it is doing. – Tamas Czinege Jan 29 '09 at 18:25
-
@DrJokepu - actually I didn't realise that the `HashSet` constructor deduped, which makes it better for most circumstances. However, this would preserve the sort order, which a `HashSet` doesn't. – Keith Aug 24 '10 at 14:59
-
@thorn really? So hard to keep track. In that case you could just use a `Dictionary<T, object>` instead: replace `.Contains` with `.ContainsKey` and `.Add(item)` with `.Add(item, null)` – Keith Nov 06 '11 at 22:32
-
@Keith, as per my testing `HashSet` preserves order while `Distinct()` doesn't. – Dennis T --Reinstate Monica-- Jun 09 '15 at 15:50
-
@DennisT `HashSet` sometimes does, depending on the type of key used and the relative order of the input. The `DedupCollection` snippet will return results in the same order as they go in. – Keith Jun 09 '15 at 17:21
-
@Keith, ok `HashSet` seems to preserve order at least for `int`s. Do u know what else it does and doesn't work for? I'd test myself but no time at the moment. – Dennis T --Reinstate Monica-- Jun 11 '15 at 15:57
An extension method might be a decent way to go... something like this:
public static List<T> Deduplicate<T>(this List<T> listToDeduplicate)
{
    return listToDeduplicate.Distinct().ToList();
}
And then call like this, for example:
List<int> myFilteredList = unfilteredList.Deduplicate();

In Java (I assume C# is more or less identical):
list = new ArrayList<T>(new HashSet<T>(list))
If you really wanted to mutate the original list:
List<T> noDupes = new ArrayList<T>(new HashSet<T>(list));
list.clear();
list.addAll(noDupes);
To preserve order, simply replace HashSet with LinkedHashSet.

-
in C# it would be: `List<T> noDupes = new List<T>(new HashSet<T>(list)); list.Clear(); list.AddRange(noDupes);` – smohamed Apr 16 '12 at 14:45
-
In C#, it's easier this way: `var noDupes = new HashSet<T>(list); list.Clear(); list.AddRange(noDupes);` :) – nawfal May 26 '14 at 11:20
This takes the distinct elements (removing any duplicates) and converts them into a list again:
List<type> myNoneDuplicateValue = listValueWithDuplicate.Distinct().ToList();

Use LINQ's Union method.
Note: This solution requires no knowledge of LINQ, aside from knowing that it exists.
Code
Begin by adding the following to the top of your class file:
using System.Linq;
Now, you can use the following to remove duplicates from an object called obj1:
obj1 = obj1.Union(obj1).ToList();
Note: Rename obj1 to the name of your object.
How it works
The Union command lists one of each entry of two source objects. Since obj1 is both source objects, this reduces obj1 to one of each entry.
The ToList() call returns a new List. This is necessary because LINQ commands like Union return the result as an IEnumerable instead of modifying the original List or returning a new List.

As a helper method (using the HashSet constructor rather than LINQ's Distinct()):
public static List<T> Distinct<T>(this List<T> list)
{
    return (new HashSet<T>(list)).ToList();
}

-
I think Distinct is already taken. Apart from that (if you rename the method) it should work. – Andreas Reiff Jan 05 '15 at 12:34
After installing the MoreLINQ package via NuGet, you can easily get a distinct object list by a property:
IEnumerable<Catalogue> distinctCatalogues = catalogues.DistinctBy(c => c.CatalogueCode);
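A hedged aside: on .NET 6 and later the same shape is available built in as Enumerable.DistinctBy, so the package is no longer required there:

// Built-in since .NET 6; same usage as MoreLINQ's DistinctBy
IEnumerable<Catalogue> distinctCatalogues = catalogues.DistinctBy(c => c.CatalogueCode);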

Here's an extension method for removing adjacent duplicates in-situ. Call Sort() first and pass in the same IComparer. This should be more efficient than Lasse V. Karlsen's version which calls RemoveAt repeatedly (resulting in multiple block memory moves).
public static void RemoveAdjacentDuplicates<T>(this List<T> List, IComparer<T> Comparer)
{
    int NumUnique = 0;
    for (int i = 0; i < List.Count; i++)
        if ((i == 0) || (Comparer.Compare(List[NumUnique - 1], List[i]) != 0))
            List[NumUnique++] = List[i];
    List.RemoveRange(NumUnique, List.Count - NumUnique);
}
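A possible usage sketch (sample values are illustrative); note the same comparer is passed to both calls, as the answer requires:

var list = new List<int> { 3, 1, 2, 3, 1 };
var comparer = Comparer<int>.Default;

list.Sort(comparer);                      // duplicates become adjacent
list.RemoveAdjacentDuplicates(comparer);  // list is now { 1, 2, 3 }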

If you don't care about the order you can just shove the items into a HashSet; if you do want to maintain the order you can do something like this:
var unique = new List<T>();
var hs = new HashSet<T>();
foreach (T t in list)
    if (hs.Add(t))
        unique.Add(t);

Or as a one-liner (note List<T>.ForEach rather than LINQ's All here: All would stop enumerating at the first duplicate, because Add returns false for it):

var hs = new HashSet<T>();
list.ForEach(x => hs.Add(x));
Edit: The HashSet method is O(N) time and O(N) space, while sorting and then making unique (as suggested by @lassevk and others) is O(N log N) time and O(1) space, so it's not so clear to me (as it was at first glance) that the sorting way is inferior.
If you have two classes, Product and Customer, and want to remove duplicate items from their lists:
public class Product
{
    public int Id { get; set; }
    public string ProductName { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}
You must define a generic class in the form below
public class ItemEqualityComparer<T> : IEqualityComparer<T> where T : class
{
    private readonly PropertyInfo _propertyInfo;

    public ItemEqualityComparer(string keyItem)
    {
        _propertyInfo = typeof(T).GetProperty(keyItem, BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Public);
    }

    public bool Equals(T x, T y)
    {
        var xValue = _propertyInfo?.GetValue(x, null);
        var yValue = _propertyInfo?.GetValue(y, null);
        return xValue != null && yValue != null && xValue.Equals(yValue);
    }

    public int GetHashCode(T obj)
    {
        var propertyValue = _propertyInfo.GetValue(obj, null);
        return propertyValue == null ? 0 : propertyValue.GetHashCode();
    }
}
Then you can remove the duplicate items in your list:
var products = new List<Product>
{
    new Product { ProductName = "product 1", Id = 1 },
    new Product { ProductName = "product 2", Id = 2 },
    new Product { ProductName = "product 2", Id = 4 },
    new Product { ProductName = "product 2", Id = 4 },
};

var productList = products.Distinct(new ItemEqualityComparer<Product>(nameof(Product.Id))).ToList();

var customers = new List<Customer>
{
    new Customer { CustomerName = "Customer 1", Id = 5 },
    new Customer { CustomerName = "Customer 2", Id = 5 },
    new Customer { CustomerName = "Customer 2", Id = 5 },
    new Customer { CustomerName = "Customer 2", Id = 5 },
};

var customerList = customers.Distinct(new ItemEqualityComparer<Customer>(nameof(Customer.Id))).ToList();
This code removes duplicate items by Id. If you want to remove duplicates by another property, change nameof(YourClass.DuplicateProperty), e.g. to nameof(Customer.CustomerName), to remove duplicates by the CustomerName property.

Might be easier to simply make sure that duplicates are not added to the list.
if (items.IndexOf(new_item) < 0)
    items.Add(new_item);
-
I'm currently doing it like this, but the more entries you have, the longer the check for duplicates takes. – Robert Strauch Jun 24 '13 at 14:59
-
I have the same problem here. I'm using the `List.Contains` method each time, but with more than 1,000,000 entries this process slows down my application. I'm using a `List.Distinct().ToList()` first instead. – RPDeshaies Jan 03 '14 at 19:05
-
Explanation of *why* it would work would definitely make this answer better – Igor B Aug 06 '17 at 15:26
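Picking up the performance concern in these comments, a hedged sketch of the usual remedy: keep a companion HashSet next to the list so each duplicate check is O(1) instead of a linear IndexOf/Contains scan (the wrapper class and its names are illustrative):

using System.Collections.Generic;

class NoDupesList
{
    private readonly List<string> _items = new List<string>();
    private readonly HashSet<string> _seen = new HashSet<string>();

    public void Add(string item)
    {
        // HashSet<T>.Add returns false if the item is already present,
        // so the membership test and the insertion are a single O(1) step.
        if (_seen.Add(item))
            _items.Add(item);
    }

    public IReadOnlyList<string> Items => _items;
}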
A simple intuitive implementation:
public static List<PointF> RemoveDuplicates(List<PointF> listPoints)
{
    List<PointF> result = new List<PointF>();

    for (int i = 0; i < listPoints.Count; i++)
    {
        if (!result.Contains(listPoints[i]))
            result.Add(listPoints[i]);
    }

    return result;
}

David J.'s answer is a good method, no need for extra objects, sorting, etc. It can be improved on however:
for (int innerIndex = items.Count - 1; innerIndex > outerIndex ; innerIndex--)
So the outer loop goes from the top to the bottom of the entire list, while the inner loop goes from the bottom up, "until the outer loop position is reached".
The outer loop makes sure the entire list is processed; the inner loop finds the actual duplicates, which can only occur in the part the outer loop hasn't processed yet.
Or, if you don't want to do bottom-up for the inner loop, you could have the inner loop start at outerIndex + 1, as in the sketch below.
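A hedged sketch of that improvement applied to the CheckForDuplicateItems method shown further down this page; the actual removal is an assumption, since the original only marks where a duplicate is found:

using System.Collections.Generic;

static void RemoveDuplicateItems(List<string> items)
{
    if (items == null || items.Count < 2)
        return;

    for (int outerIndex = 0; outerIndex < items.Count; outerIndex++)
    {
        // Scan backwards so RemoveAt never disturbs indices we haven't visited yet.
        for (int innerIndex = items.Count - 1; innerIndex > outerIndex; innerIndex--)
        {
            if (items[outerIndex].Equals(items[innerIndex]))
                items.RemoveAt(innerIndex); // duplicate found
        }
    }
}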

Another way in .NET 2.0:

static void Main(string[] args)
{
    List<string> alpha = new List<string>();

    for (char a = 'a'; a <= 'd'; a++)
    {
        alpha.Add(a.ToString());
        alpha.Add(a.ToString());
    }

    Console.WriteLine("Data :");
    alpha.ForEach(delegate(string t) { Console.WriteLine(t); });

    // Caution: mutating the list inside ForEach worked on .NET 2.0,
    // but throws InvalidOperationException on .NET Framework 4.5 and later.
    alpha.ForEach(delegate(string v)
    {
        if (alpha.FindAll(delegate(string t) { return t == v; }).Count > 1)
            alpha.Remove(v);
    });

    Console.WriteLine("Unique Result :");
    alpha.ForEach(delegate(string t) { Console.WriteLine(t); });
    Console.ReadKey();
}

There are many ways to solve the duplicates issue in a List; below is one of them:
List<Container> containerList = LoadContainer(); // Assume it has duplicates
List<Container> filteredList = new List<Container>();

foreach (var container in containerList)
{
    // Assume 'UniqueId' is the property of the Container class used for the search
    Container duplicateContainer = filteredList.Find(delegate(Container checkContainer)
        { return (checkContainer.UniqueId == container.UniqueId); });

    // Add the object only when it is not already in the filtered list
    if (duplicateContainer == null)
    {
        filteredList.Add(container);
    }
}
Cheers Ravi Ganesan

All the answers copy lists, create a new list, use slow functions, or are just painfully slow.
To my understanding, this is the fastest and cheapest method I know (also backed by a very experienced programmer specializing in real-time physics optimization).
// Duplicates will be noticed after a sort O(nLogn)
list.Sort();

// Store the current and last items. Current item declaration is not really needed,
// and probably optimized by the compiler, but in case it's not...
int lastItem = -1;
int currItem = -1;

int size = list.Count;

// Store the index pointing to the last item we want to keep in the list
int last = size - 1;

// Travel the items from last to first O(n)
for (int i = last; i >= 0; --i)
{
    currItem = list[i];

    // If this item was the same as the previous one, we don't want it
    if (currItem == lastItem)
    {
        // Overwrite last in current place. It is a swap but we don't need the last
        list[i] = list[last];

        // Reduce the last index, we don't want that one anymore
        last--;
    }
    // A new item, we store it and continue
    else
        lastItem = currItem;
}

// We now have an unsorted list with the duplicates at the end.

// Remove the last items just once
list.RemoveRange(last + 1, size - last - 1);

// Sort again O(n logn)
list.Sort();
Final cost is:
n·log n + n + n·log n = n + 2·n·log n = O(n·log n), which is pretty nice.
Note about RemoveRange: since we cannot set the count of the list and avoid using the Remove functions, I don't know the exact speed of this operation, but I guess it is the fastest way.

Using a HashSet, this can be done easily.
List<int> listWithDuplicates = new List<int> { 1, 2, 1, 2, 3, 4, 5 };
HashSet<int> hashWithoutDuplicates = new HashSet<int> ( listWithDuplicates );
List<int> listWithoutDuplicates = hashWithoutDuplicates.ToList();

Here's a simple solution that doesn't require any hard-to-read LINQ or any prior sorting of the list.
private static void CheckForDuplicateItems(List<string> items)
{
    if (items == null || items.Count == 0)
        return;

    for (int outerIndex = 0; outerIndex < items.Count; outerIndex++)
    {
        for (int innerIndex = 0; innerIndex < items.Count; innerIndex++)
        {
            if (innerIndex == outerIndex) continue;
            if (items[outerIndex].Equals(items[innerIndex]))
            {
                // Duplicate Found
            }
        }
    }
}

-
You have more control over the duplicated items with this method, even more so if you have a database to update. For the innerIndex, why not start from outerIndex + 1 instead of starting from the beginning every time? – Nolmë Informatique Apr 22 '17 at 10:16
public static void RemoveDuplicates<T>(IList<T> list)
{
    if (list == null)
    {
        return;
    }

    int i = 1;
    while (i < list.Count)
    {
        int j = 0;
        bool remove = false;
        while (j < i && !remove)
        {
            if (list[i].Equals(list[j]))
            {
                remove = true;
            }
            j++;
        }

        if (remove)
        {
            list.RemoveAt(i);
        }
        else
        {
            i++;
        }
    }
}

If you need to compare complex objects, you will need to pass a Comparer object inside the Distinct() method.
private List<MyListItem> GetDistinctItemList(List<MyListItem> _listWithDuplicates)
{
    // It might be a good idea to create MyListItemComparer
    // elsewhere and cache it for performance.
    List<MyListItem> _listWithoutDuplicates = _listWithDuplicates.Distinct(new MyListItemComparer()).ToList();

    // Choose the line below instead if there is a chance the list changes while Distinct() is running.
    // ToArray() is used to avoid the "Collection was modified; enumeration operation may not execute" error.
    // List<MyListItem> _listWithoutDuplicates = _listWithDuplicates.ToArray().Distinct(new MyListItemComparer()).ToList();

    return _listWithoutDuplicates;
}
Assuming you have 2 other classes like:
public class MyListItemComparer : IEqualityComparer<MyListItem>
{
    public bool Equals(MyListItem x, MyListItem y)
    {
        return x != null
            && y != null
            && x.A == y.A
            && x.B.Equals(y.B)
            && x.C.ToString().Equals(y.C.ToString());
    }

    public int GetHashCode(MyListItem codeh)
    {
        // Must be consistent with Equals: hash the same values that Equals
        // compares, otherwise Distinct() will put equal items in different
        // hash buckets and never call Equals on them.
        return (codeh.A, codeh.B, codeh.C.ToString()).GetHashCode();
    }
}
And:
public class MyListItem
{
    public int A { get; }
    public string B { get; }
    public MyEnum C { get; }

    public MyListItem(int a, string b, MyEnum c)
    {
        A = a;
        B = b;
        C = c;
    }
}

I think the simplest way is: create a new list and add only the unique items.
Example:
class MyList
{
    public int id;
    public string date;
    public string email;
}

List<MyList> ml = new List<MyList>();

ml.Add(new MyList()
{
    id = 1,
    date = "2020/09/06",
    email = "zarezadeh@gmailcom"
});

ml.Add(new MyList()
{
    id = 2,
    date = "2020/09/01",
    email = "zarezadeh@gmailcom"
});

List<MyList> New_ml = new List<MyList>();

foreach (var item in ml)
{
    if (New_ml.Where(w => w.email == item.email).SingleOrDefault() == null)
    {
        New_ml.Add(new MyList()
        {
            id = item.id,
            date = item.date,
            email = item.email
        });
    }
}

To remove duplicates, we can apply the logic below, which removes them in place:
public class Program
{
    public static void Main(string[] arges)
    {
        List<string> cities = new List<string>() { "Chennai", "Kolkata", "Mumbai", "Mumbai", "Chennai", "Delhi", "Delhi", "Delhi", "Chennai", "Kolkata", "Mumbai", "Chennai" };
        cities = RemoveDuplicate(cities);

        foreach (var city in cities)
        {
            Console.WriteLine(city);
        }
    }

    public static List<string> RemoveDuplicate(List<string> cities)
    {
        if (cities.Count < 2)
        {
            return cities;
        }

        int size = cities.Count;
        for (int i = 0; i < size; i++)
        {
            for (int j = i + 1; j < size; j++)
            {
                if (cities[i] == cities[j])
                {
                    cities.RemoveAt(j);
                    size--;
                    j--;
                }
            }
        }

        return cities;
    }
}

I have my own way: loop over the same list twice to compare the items, then remove the second occurrence when a match is found.
for (int i1 = 0; i1 < lastValues.Count; i1++)
{
    // Start at i1 + 1 so an item is never compared with itself.
    for (int i2 = i1 + 1; i2 < lastValues.Count; i2++)
    {
        if (lastValues[i1].UserId == lastValues[i2].UserId)
        {
            lastValues.RemoveAt(i2);
            i2--; // Stay on the same index after the removal shifts items left.
        }
    }
}
