A quick exploration of removing duplicates from a list of strings. In this (single-run) measurement, the old HashSet approach clearly outperforms the new lambda/stream approach in execution time. What is the design thinking behind the new approach, and does it offer benefits other than raw performance?
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// in main():
List<String> nameList = new ArrayList<>();
Collections.addAll(nameList, "Raj", "Nil" /* ...many more names... */);
// Note: removeDupViaSet de-duplicates nameList in place, so the stream
// version below runs on the already de-duplicated list.
removeDupViaSet(nameList);
removeDupViaStream(nameList);

private static void removeDupViaStream(List<String> nameList) {
    long start = System.nanoTime();
    // Builds a new de-duplicated list; the input list is left untouched
    List<String> nm = nameList.stream().distinct().collect(Collectors.toList());
    long elapsed = System.nanoTime() - start;
    System.out.println("Dup Removed via Stream : " + elapsed);
}

private static void removeDupViaSet(List<String> nameList) {
    long start = System.nanoTime();
    // Copies into a HashSet to drop duplicates, then rebuilds the list in place
    Set<String> tempHashSet = new HashSet<>();
    tempHashSet.addAll(nameList);
    nameList.clear();
    nameList.addAll(tempHashSet);
    long elapsed = System.nanoTime() - start;
    System.out.println("Dup Removed via Set : " + elapsed);
}
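Beyond timing, the two approaches differ in observable behavior, which may be part of the answer to my question: distinct() on an ordered stream preserves the encounter order of first occurrences and leaves the source list untouched, while HashSet gives no ordering guarantee and the in-place variant mutates the caller's list. A small sketch (the extra name "Amy" is made up for illustration):

List<String> demo = new ArrayList<>();
Collections.addAll(demo, "Raj", "Nil", "Raj", "Amy");

// Stream: first-occurrence order preserved -> [Raj, Nil, Amy]; demo is unchanged
System.out.println(demo.stream().distinct().collect(Collectors.toList()));

// HashSet: duplicates removed, but iteration order is unspecified
System.out.println(new HashSet<>(demo));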
Dup Removed via Set : 1186909
Dup Removed via Stream : 67513136
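For context on these numbers: a single System.nanoTime() measurement of one cold run mostly captures one-time costs (class loading, stream pipeline and lambda bootstrap, no JIT warm-up), so it may overstate the steady-state gap. A minimal warm-up sketch, assuming the same nameList and methods as above (a harness such as JMH would be more reliable):

// Run both pipelines many times so the JIT can compile the hot paths,
// then take the timings printed by the final calls.
for (int i = 0; i < 10_000; i++) {
    nameList.stream().distinct().collect(Collectors.toList());
    new HashSet<>(nameList);
}
removeDupViaSet(new ArrayList<>(nameList));
removeDupViaStream(new ArrayList<>(nameList));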