205

arr is an array of strings:

["hello", "world", "stack", "overflow", "hello", "again"]

What would be an easy and elegant way to check if arr has duplicates, and if so, return one of them (no matter which)?

Examples:

["A", "B", "C", "B", "A"]    # => "A" or "B"
["A", "B", "C"]              # => nil
the Tin Man
Misha Moroshko
  • `arr == arr.uniq` would be an easy and elegant way to check if `arr` has duplicates, however, it doesn't provide which were duplicated. – Joel AZEMAR May 03 '20 at 12:56

23 Answers

292
a = ["A", "B", "C", "B", "A"]
a.detect{ |e| a.count(e) > 1 }

I know this isn't a very elegant answer, but I love it. It's a beautiful one-liner, and it works perfectly well unless you need to process a huge data set.

Looking for a faster solution? Here you go!

def find_one_using_hash_map(array)
  map = {}
  dup = nil
  array.each do |v|
    map[v] = (map[v] || 0 ) + 1

    if map[v] > 1
      dup = v
      break
    end
  end

  return dup
end

It's linear, O(n), but now you have to manage multiple lines of code, write test cases, etc.

If you need an even faster solution, maybe try C instead.

And here is the gist comparing different solutions: https://gist.github.com/naveed-ahmad/8f0b926ffccf5fbd206a1cc58ce9743e

the Tin Man
Naveed
  • Except quadratic for something that can be solved in linear time. – jasonmp85 Mar 28 '13 at 07:47
  • Providing O(n^2) solutions for linear problems is not the way to go. – tdgs May 03 '13 at 09:18
  • @jasonmp85 - True; however, that's only considering big-O runtime. In practice, unless you're writing this code for some hugely scaling data (and if so, you can actually just use C or Python), the provided answer is far more elegant/readable, and isn't going to run that much slower compared to a linear-time solution. Furthermore, in theory, the linear-time solution requires linear space, which may not be available. – David T. May 25 '13 at 06:10
  • Well, this one does not work for a scenario like a = ["A", "X", "X", "D", "C", "B", "A"]. How do we get the duplicate values? – not 0x12 Jul 12 '13 at 12:21
  • @Kalanamith you can get the duplicated values using `a.select {|e| a.count(e) > 1}.uniq` – Naveed Jul 12 '13 at 16:34
  • Why on earth is there no method like ar.dups? – mjnissim Oct 21 '13 at 15:33
  • @jasonmp85 where is your linear time answer? – Oktav Nov 27 '13 at 13:43
  • The problem with the "detect" method is that it stops when it finds the first duplicate, and doesn't give you all the dups. – Jaime Bellmyer Jan 26 '14 at 22:57
  • @JaimeBellmyer The question was to get any duplicated value. If you need all duplicates, use select: `a.select {|e| a.count(e) > 1}.uniq` – Naveed Jan 08 '15 at 19:08
  • Not linear time; the faster algorithm is linear only on an ordered set, which takes O(n log n) to get. – José Fernandes Jun 26 '15 at 13:15
  • It is elegant, but watch out: the larger the array, the slower it gets! https://gist.github.com/equivalent/3c9a4c9d07fff79062a3 – equivalent8 Jul 13 '15 at 13:57
  • This is OK since I only need this for inspecting objects in the console. I'm glad I found this answer. – juliangonzalez Feb 10 '16 at 19:06
  • This should not be the accepted answer because of its `O(n^2)` complexity. – akuhn Dec 26 '16 at 00:24
  • @jasonmp85 Never prematurely optimise if it will take significant time or code complexity; otherwise, always pre-optimise as a matter of good habit. This is an "otherwise". – Adamantish Jan 09 '19 at 17:42
  • It also doesn't return multiple elements having multiple occurrences. – Rajesh Paul Jan 15 '19 at 14:32
  • Oh dear God, what world have we arrived in, where 14 LOC are not allowed to exist without having "test cases *and*" (!) "*stuff*" (-‸ლ). – Sixtyfive Feb 05 '20 at 11:56
  • The first suggestion is too slow for my array of 94,295 strings, which does not sound like a lot. The second suggestion does not return all answers, so this is not good. – Mathieu J. Jul 24 '21 at 20:18
  • You need to remove "It's linear, `O(n)`...", because it's not. As others have pointed out, it's `O(n^2)`. – Cary Swoveland Sep 01 '22 at 17:58
241

You can do this in a few ways, with the first option being the fastest:

ary = ["A", "B", "C", "B", "A"]

ary.group_by{ |e| e }.select { |k, v| v.size > 1 }.map(&:first)

ary.sort.chunk{ |e| e }.select { |e, chunk| chunk.size > 1 }.map(&:first)

And an O(N^2) option (i.e. less efficient):

ary.select{ |e| ary.count(e) > 1 }.uniq
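For illustration, here is what the `group_by` step in the first option produces; selecting the keys whose groups contain more than one element yields the duplicates:

```ruby
ary = ["A", "B", "C", "B", "A"]

# Group each element with all of its occurrences.
grouped = ary.group_by { |e| e }
# grouped == {"A"=>["A", "A"], "B"=>["B", "B"], "C"=>["C"]}

# Keep only the keys whose group has more than one member.
dups = grouped.select { |_k, v| v.size > 1 }.keys
# dups == ["A", "B"]
```

Note that `.keys` here does the same job as the `.map(&:first)` in the one-liner above.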
rogerdpack
Ryan LeCompte
  • The first two are much more efficient for large arrays. The last one is O(n*n), so it can get slow. I needed to use this for an array with ~20k elements and the first two returned almost instantly; I had to cancel the third one because it was taking so long. Thanks!! – Venkat D. Feb 04 '13 at 05:15
  • Just an observation, but the first two that end with *.map(&:first)* could just end with *.keys*, as that part is just pulling the keys of a hash. – engineerDave Mar 02 '14 at 08:20
  • @engineerDave that depends on the Ruby version being used. 1.8.7 would require &:first or even {|k,_| k } without ActiveSupport. – Emirikol Nov 30 '14 at 17:01
  • Here are some benchmarks: https://gist.github.com/equivalent/3c9a4c9d07fff79062a3 ; in performance the winner is clearly `group_by.select` – equivalent8 Jul 13 '15 at 13:54
  • If you're using Ruby > 2.1, you can use: `ary.group_by(&:itself)`. :-) – Drenmi Jan 03 '17 at 13:43
  • Rather than doing `ary.select ...`, it's better to do `ary.uniq.select ...`. This will make sure you eliminate duplicate checks... – 15 Volts Feb 01 '21 at 06:06
45

Simply find the first instance where the index of the object (counting from the left) does not equal the index of the object (counting from the right).

arr.detect {|e| arr.rindex(e) != arr.index(e) }

If there are no duplicates, the return value will be nil.

I believe this is the fastest solution posted in the thread so far, as well, since it doesn't rely on the creation of additional objects, and #index and #rindex are implemented in C. The big-O runtime is N^2 and thus slower than Sergio's, but the wall time could be much faster due to the fact that the "slow" parts run in C.
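For example, applied to the question's arrays (a quick sketch):

```ruby
arr = ["hello", "world", "stack", "overflow", "hello", "again"]
# "hello" is the first element whose leftmost and rightmost positions differ
arr.detect { |e| arr.rindex(e) != arr.index(e) }
# => "hello"

no_dups = ["A", "B", "C"]
# with no duplicates, every element's index equals its rindex
no_dups.detect { |e| no_dups.rindex(e) != no_dups.index(e) }
# => nil
```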

rogerdpack
Chris Heald
  • I like this solution, but it will only return the first duplicate. To find all duplicates: `arr.find_all {|e| arr.rindex(e) != arr.index(e) }.uniq` – Josh Jan 07 '15 at 04:01
  • Nor does your answer show how to find if there are any triplicates, or whether one can draw elements from the array to spell "CAT". – Cary Swoveland Jul 11 '15 at 05:09
  • @bruno077 How is this linear time? – beauby Mar 13 '16 at 23:21
  • @chris Great answer, but I think you can do a bit better with this: `arr.detect.with_index { |e, idx| idx != arr.rindex(e) }`. Using `with_index` should remove the necessity for the first `index` search. – ki4jnq Sep 15 '16 at 12:57
  • How would you adapt this to a 2D array, comparing duplicates in a column? – ahnbizcad Sep 16 '16 at 08:27
35

detect only finds one duplicate. find_all will find them all:

a = ["A", "B", "C", "B", "A"]
a.find_all { |e| a.count(e) > 1 }
Mischa
JjP
  • The question is very specific that only one duplicate is to be returned. IMO, showing how to find all duplicates is fine, but only as an aside to an answer that answers the question asked, which you have not done. BTW, it is agonizingly inefficient to invoke `count` for every element in the array. (A counting hash, for example, is much more efficient; e.g., construct `h = {"A"=>2, "B"=>2, "C"=> 1 }` then `h.select { |k,v| v > 1 }.keys #=> ["A", "B"]`.) – Cary Swoveland Jul 11 '15 at 05:04
32

Here are two more ways of finding a duplicate.

Use a set

require 'set'

def find_a_dup_using_set(arr)
  s = Set.new
  arr.find { |e| !s.add?(e) }
end

find_a_dup_using_set arr
  #=> "hello" 

Use select in place of find to return an array of all duplicates.
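That variant might look like this (a sketch; the method name `find_all_dups_using_set` is made up here):

```ruby
require 'set'

def find_all_dups_using_set(arr)
  s = Set.new
  # select keeps every element whose add? fails, i.e. was already seen
  arr.select { |e| !s.add?(e) }
end

find_all_dups_using_set ["hello", "world", "stack", "overflow", "hello", "again"]
  #=> ["hello"]
```

An element occurring n times is reported n-1 times; append `.uniq` if each duplicate should appear only once.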

Use Array#difference

class Array
  def difference(other)
    h = other.each_with_object(Hash.new(0)) { |e,h| h[e] += 1 }
    reject { |e| h[e] > 0 && h[e] -= 1 }
  end
end

def find_a_dup_using_difference(arr)
  arr.difference(arr.uniq).first
end

find_a_dup_using_difference arr
  #=> "hello" 

Drop .first to return an array of all duplicates.

Both methods return nil if there are no duplicates.

I proposed that Array#difference be added to the Ruby core. More information is in my answer here.

Benchmark

Let's compare suggested methods. First, we need an array for testing:

CAPS = ('AAA'..'ZZZ').to_a.first(10_000)
def test_array(nelements, ndups)
  arr = CAPS[0, nelements-ndups]
  arr = arr.concat(arr[0,ndups]).shuffle
end

and a method to run the benchmarks for different test arrays:

require 'fruity'

def benchmark(nelements, ndups)
  arr = test_array nelements, ndups
  puts "\n#{ndups} duplicates\n"    
  compare(
    Naveed:    -> {arr.detect{|e| arr.count(e) > 1}},
    Sergio:    -> {(arr.inject(Hash.new(0)) {|h,e| h[e] += 1; h}.find {|k,v| v > 1} ||
                     [nil]).first },
    Ryan:      -> {(arr.group_by{|e| e}.find {|k,v| v.size > 1} ||
                     [nil]).first},
    Chris:     -> {arr.detect {|e| arr.rindex(e) != arr.index(e)} },
    Cary_set:  -> {find_a_dup_using_set(arr)},
    Cary_diff: -> {find_a_dup_using_difference(arr)}
  )
end

I did not include @JjP's answer because only one duplicate is to be returned, and when his/her answer is modified to do that it is the same as @Naveed's earlier answer. Nor did I include @Martin's answer, which, while posted before @Naveed's answer, returned all duplicates rather than just one (a minor point, but there's no point evaluating both, as they are identical when returning just one duplicate).

I also modified other answers that returned all duplicates to return just the first one found, but that should have essentially no effect on performance, as they computed all duplicates before selecting one.

The results for each benchmark are listed from fastest to slowest:

First suppose the array contains 100 elements:

benchmark(100, 0)
0 duplicates
Running each test 64 times. Test will take about 2 seconds.
Cary_set is similar to Cary_diff
Cary_diff is similar to Ryan
Ryan is similar to Sergio
Sergio is faster than Chris by 4x ± 1.0
Chris is faster than Naveed by 2x ± 1.0

benchmark(100, 1)
1 duplicates
Running each test 128 times. Test will take about 2 seconds.
Cary_set is similar to Cary_diff
Cary_diff is faster than Ryan by 2x ± 1.0
Ryan is similar to Sergio
Sergio is faster than Chris by 2x ± 1.0
Chris is faster than Naveed by 2x ± 1.0

benchmark(100, 10)
10 duplicates
Running each test 1024 times. Test will take about 3 seconds.
Chris is faster than Naveed by 2x ± 1.0
Naveed is faster than Cary_diff by 2x ± 1.0 (results differ: AAC vs AAF)
Cary_diff is similar to Cary_set
Cary_set is faster than Sergio by 3x ± 1.0 (results differ: AAF vs AAC)
Sergio is similar to Ryan

Now consider an array with 10,000 elements:

benchmark(10000, 0)
0 duplicates
Running each test once. Test will take about 4 minutes.
Ryan is similar to Sergio
Sergio is similar to Cary_set
Cary_set is similar to Cary_diff
Cary_diff is faster than Chris by 400x ± 100.0
Chris is faster than Naveed by 3x ± 0.1

benchmark(10000, 1)
1 duplicates
Running each test once. Test will take about 1 second.
Cary_set is similar to Cary_diff
Cary_diff is similar to Sergio
Sergio is similar to Ryan
Ryan is faster than Chris by 2x ± 1.0
Chris is faster than Naveed by 2x ± 1.0

benchmark(10000, 10)
10 duplicates
Running each test once. Test will take about 11 seconds.
Cary_set is similar to Cary_diff
Cary_diff is faster than Sergio by 3x ± 1.0 (results differ: AAE vs AAA)
Sergio is similar to Ryan
Ryan is faster than Chris by 20x ± 10.0
Chris is faster than Naveed by 3x ± 1.0

benchmark(10000, 100)
100 duplicates
Cary_set is similar to Cary_diff
Cary_diff is faster than Sergio by 11x ± 10.0 (results differ: ADG vs ACL)
Sergio is similar to Ryan
Ryan is similar to Chris
Chris is faster than Naveed by 3x ± 1.0

Note that find_a_dup_using_difference(arr) would be much more efficient if Array#difference were implemented in C, which would be the case if it were added to the Ruby core.

Conclusion

Many of the answers are reasonable, but using a Set is the clear best choice. It is fastest in the medium-hard cases, joint fastest in the hardest, and only in computationally trivial cases (when your choice won't matter anyway) can it be beaten.

The one very special case in which you might pick Chris' solution would be if you want to use the method to separately de-duplicate thousands of small arrays and expect to find a duplicate typically within the first 10 items or so. This will be a bit faster, as it avoids the small additional overhead of creating the Set.

Adamantish
Cary Swoveland
  • Excellent solution. It's not quite as obvious what's going on at first as some of the methods, but it should run in truly linear time, at the expense of a bit of memory. – Chris Heald Jul 11 '15 at 06:51
  • With find_a_dup_using_set, I get the Set back, instead of one of the duplicates. Also I can't find "find.with_object" in the Ruby docs anywhere. – ScottJ Oct 06 '16 at 22:44
  • @ScottJ, thanks for the catch! It's interesting that no one caught that before now. I fixed it. That's [Enumerable#find](http://ruby-doc.org/core-2.3.0/Enumerable.html#method-i-find) chained to [Enumerator#with_object](http://ruby-doc.org/core-2.3.0/Enumerator.html#method-i-with_object). I'll update the benchmarks, adding your solution and others. – Cary Swoveland Oct 06 '16 at 23:08
  • Excellent comparison @CarySwoveland – Naveed Dec 27 '16 at 01:33
20

Alas most of the answers are O(n^2).

Here is an O(n) solution,

a = %w{the quick brown fox jumps over the lazy dog}
h = Hash.new(0)
a.find { |each| (h[each] += 1) == 2 } # => "the"

What is the complexity of this?

  • Runs in O(n) and breaks on first match
  • Uses O(n) memory, but only the minimal amount

Now, depending on how frequent duplicates are in your array, these bounds might actually improve. For example, if the array of size n has been sampled from a population of k << n different elements, the complexity for both runtime and space becomes O(k). However, it is more likely that the original poster is validating input and wants to make sure there are no duplicates. In that case, both runtime and memory complexity are O(n), since we expect the elements to have no repetitions for the majority of inputs.
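Since `find` returns nil when no element satisfies the block, this approach also meets the question's requirement of returning nil for a duplicate-free array. Wrapped in a helper (the name `find_first_dup` is just for illustration):

```ruby
def find_first_dup(array)
  h = Hash.new(0)
  # stop at the first element seen for the second time
  array.find { |e| (h[e] += 1) == 2 }
end

find_first_dup(%w{the quick brown fox jumps over the lazy dog}) # => "the"
find_first_dup(%w{a b c})                                       # => nil
```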

akuhn
17

Ruby Array objects have a great method, select.

select {|item| block } → new_ary
select → an_enumerator

The first form is what interests you here. It allows you to select objects which pass a test.

Ruby Array objects have another method, count.

count → int
count(obj) → int
count { |item| block } → int

In this case, you are interested in duplicates (objects which appear more than once in the array). The appropriate test is a.count(obj) > 1.

If a = ["A", "B", "C", "B", "A"], then

a.select{|item| a.count(item) > 1}.uniq
=> ["A", "B"]

You state that you only want one object. So pick one.
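Picking one could be as simple as taking the first element of the result, for example:

```ruby
a = ["A", "B", "C", "B", "A"]
a.select { |item| a.count(item) > 1 }.uniq.first
# => "A"

b = ["A", "B", "C"]
# with no duplicates the select is empty, so first returns nil
b.select { |item| b.count(item) > 1 }.uniq.first
# => nil
```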

Martin Velez
  • I like this one a lot, but you have to throw a uniq on the end or you'll get `["A", "B", "B", "A"]` – Joeyjoejoejr Dec 04 '12 at 20:28
  • Great answer. This is exactly what I was looking for. As @Joeyjoejoejr pointed out, I have submitted an edit to put `.uniq` on the array. – Surya Feb 12 '13 at 08:07
  • This is hugely inefficient. Not only do you find all duplicates and then throw away all but one, you invoke `count` for each element of the array, which is wasteful and unnecessary. See my comment on JjP's answer. – Cary Swoveland Jul 11 '15 at 05:16
  • Thanks for running the benchmarks. It is useful to see how the different solutions compare in running time. Elegant answers are readable but often not the most efficient. – Martin Velez Jul 12 '15 at 01:29
11

find_all() returns an array containing all elements of enum for which block is not false.

To get duplicate elements

>> arr = ["A", "B", "C", "B", "A"]
>> arr.find_all { |x| arr.count(x) > 1 }

=> ["A", "B", "B", "A"]

Or the unique duplicate elements:

>> arr.find_all { |x| arr.count(x) > 1 }.uniq
=> ["A", "B"] 
Rokibul Hasan
11

Ruby 2.7 introduced Enumerable#tally

And you can use it this way:

ary = ["A", "B", "C", "B", "A", "A"]

ary.tally.select { |_, count| count > 1 }.keys
# => ["A", "B"]
ary = ["A", "B", "C"]

ary.tally.select { |_, count| count > 1 }.keys
# => []

Ruby 2.7 also introduced Enumerable#filter_map; it's possible to combine these methods:

ary = ["A", "B", "C", "B", "A", "A"]
ary.tally.filter_map { |el, count| el if count > 1 }
# => ["A", "B"]
mechnicov
  • This is what I ended up going with, as it's the only one that's actually a good answer in 2022. – Paul Danelli Apr 19 '22 at 15:39
  • This is really great. Thanks. I ended up making an initializer that adds a `.duplicates` method onto `Array` so we can just call `["A", "B", "C", "B", "A"].duplicates #=> ["A", "B"]`. – Joshua Pinter Nov 29 '22 at 20:26
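A minimal sketch of such a monkey patch, assuming Ruby >= 2.7 for `tally` and `filter_map` (the method name `duplicates` comes from the comment above and is not part of Ruby itself):

```ruby
class Array
  # Returns the elements that appear more than once, each listed once.
  def duplicates
    tally.filter_map { |el, count| el if count > 1 }
  end
end

["A", "B", "C", "B", "A"].duplicates # => ["A", "B"]
["A", "B", "C"].duplicates           # => []
```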
10

Something like this will work

arr = ["A", "B", "C", "B", "A"]
arr.inject(Hash.new(0)) { |h,e| h[e] += 1; h }.
    select { |k,v| v > 1 }.
    collect { |x| x.first }

That is, put all values into a hash where the key is the array element and the value is its number of occurrences. Then select all elements that occur more than once. Easy.
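For illustration, the intermediate hash built by the `inject` step looks like this:

```ruby
arr = ["A", "B", "C", "B", "A"]

# Count occurrences of each element.
counts = arr.inject(Hash.new(0)) { |h, e| h[e] += 1; h }
# counts == {"A"=>2, "B"=>2, "C"=>1}

# Keep only the elements seen more than once.
counts.select { |_k, v| v > 1 }.keys
# => ["A", "B"]
```

`.keys` is equivalent to the `collect { |x| x.first }` used above.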

Sergio Tulentsev
7

I know this thread is about Ruby specifically, but I landed here looking for how to do this within the context of Ruby on Rails with ActiveRecord and thought I would share my solution too.

class ActiveRecordClass < ActiveRecord::Base
  #has two columns, a primary key (id) and an email_address (string)
end

ActiveRecordClass.group(:email_address).having("count(*) > 1").count.keys

The above returns an array of all email addresses that are duplicated in this example's database table (which in Rails would be "active_record_classes").

danielricecodes
7
a = ["A", "B", "C", "B", "A"]
a.each_with_object(Hash.new(0)) {|i,hash| hash[i] += 1}.select{|_, count| count > 1}.keys

This is an O(n) procedure.

Alternatively, you can use either of the following lines. They are also O(n), but need only one iteration:

a.each_with_object(Hash.new(0).merge dup: []){|x,h| h[:dup] << x if (h[x] += 1) == 2}[:dup]

a.inject(Hash.new(0).merge dup: []){|h,x| h[:dup] << x if (h[x] += 1) == 2;h}[:dup]
benzhang
3

This code will return a list of duplicated values. Hash keys are used as an efficient way of checking which values have already been seen. Based on whether a value has been seen, the original array `ary` is partitioned into two arrays: the first containing unique values and the second containing duplicates.

ary = ["hello", "world", "stack", "overflow", "hello", "again"]

hash={}
ary.partition { |v| hash.has_key?(v) ? false : hash[v]=0 }.last.uniq

=> ["hello"]

You can further shorten it - albeit at a cost of slightly more complex syntax - to this form:

hash={}
ary.partition { |v| !hash.has_key?(v) && hash[v]=0 }.last.uniq
2

Here is my take on it for a big data set, such as finding duplicate parts in a legacy dBase table.

# Assuming ps is an array of 20000 part numbers & we want to find duplicates
# (actually had to do it recently).
# Having a result hash with the part number and the number of times the part is
# duplicated is much more convenient in the real-world application.
# Takes about 6 seconds to run on my data set
# - not too bad for an export script handling 20000 parts

h = {};

# or for readability

h = {} # result hash
ps.select{ |e| 
  ct = ps.count(e) 
  h[e] = ct if ct > 1
}; nil # so that the huge result of select doesn't print in the console
konung
2
r = [1, 2, 3, 5, 1, 2, 3, 1, 2, 1]

r.group_by(&:itself).map { |k, v| v.size > 1 ? [k] + [v.size] : nil }.compact.sort_by(&:last).map(&:first)
Dorian
2

each_with_object is your friend!

input = [:bla,:blubb,:bleh,:bla,:bleh,:bla,:blubb,:brrr]

# to get the counts of the elements in the array:
> input.each_with_object({}){|x,h| h[x] ||= 0; h[x] += 1}
=> {:bla=>3, :blubb=>2, :bleh=>2, :brrr=>1}

# to get only the counts of the non-unique elements in the array:
> input.each_with_object({}){|x,h| h[x] ||= 0; h[x] += 1}.reject{|k,v| v < 2}
=> {:bla=>3, :blubb=>2, :bleh=>2}
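To get just the duplicated values rather than their counts, take the keys of the resulting hash:

```ruby
input = [:bla, :blubb, :bleh, :bla, :bleh, :bla, :blubb, :brrr]

# Count occurrences, drop the singletons, and keep the keys.
input.each_with_object({}) { |x, h| h[x] ||= 0; h[x] += 1 }.reject { |_k, v| v < 2 }.keys
# => [:bla, :blubb, :bleh]
```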
Tilo
0
a = ["A", "B", "C", "B", "A"]
b = a.select {|e| a.count(e) > 1}.uniq
c = a - b
d = b + c

Results

 d
=> ["A", "B", "C"]
Amrit Dhungana
0

If you are comparing two different arrays (instead of one against itself), a very fast way is to use the intersection operator `&` provided by Ruby's Array class.

# Given
a = ['a', 'b', 'c', 'd']
b = ['e', 'f', 'c', 'd']

# Then this...
a & b # => ['c', 'd']
IAmNaN
    That finds items that exist in both arrays, not duplicates in one array. – Kimmo Lehto Apr 25 '18 at 11:43
  • Thanks for pointing that out. I've changed the wording in my answer. I'll leave it here because it's already proven helpful for some people coming from search. – IAmNaN May 01 '18 at 19:28
0

This runs very quickly (iterating through 2.3 million IDs, it took less than a second to push the duplicates into their own array).

I had to do this at work with 2.3 million IDs imported from a file. I imported the list pre-sorted, but it can also be sorted by Ruby.

require 'csv'
require 'set'

list = CSV.read(path).flatten.sort
dup_list = []
list.each_with_index do |id, index|
  dup_list.push(id) if id == list[index + 1]
end
dup_list.to_set.to_a
Charlie
0
def duplicates_in_array(array)
  hash = {}
  duplicates_hash = {}

  array.each do |v|
    hash[v] = (hash[v] || 0 ) + 1
  end

  hash.keys.each do |hk|
    duplicates_hash[hk] = hash[hk] if hash[hk] > 1
  end

  return duplicates_hash
end

This will return a hash containing each duplicate in the array and the number of times it is duplicated.

For example:

array = [1,2,2,4,5,6,7,7,7,7]

duplicates_in_array(array)

=> {2=>2, 7=>4}
zabros20
-1

I needed to find out how many duplicates there were and what they were, so I wrote a function building on what Naveed had posted earlier:

def print_duplicates(array)
  puts "Array count: #{array.count}"
  map = {}
  total_dups = 0
  array.each do |v|
    map[v] = (map[v] || 0 ) + 1
  end

  map.each do |k, v|
    if v != 1
      puts "#{k} appears #{v} times"
      total_dups += 1
    end
  end
  puts "Total items that are duplicated: #{total_dups}"
end
muneebahmad
-2
  1. Let's create a duplication method that takes an array of elements as input.
  2. In the method body, let's create two new array objects, one for seen elements and another for duplicates.
  3. Finally, let's iterate through each object in the given array, and for every iteration check whether that object exists in the seen array.
  4. If the object exists in seen_objects, it is considered a duplicate, so push it into duplication_objects.
  5. If the object does not exist in seen_objects, it is considered unique, so push it into seen_objects.

Let's demonstrate this with a code implementation:

def duplication given_array
  seen_objects = []
  duplication_objects = []

  given_array.each do |element|
    duplication_objects << element if seen_objects.include?(element)
    seen_objects << element
  end

  duplication_objects
end

Now call the duplication method and output the returned result:

dup_elements = duplication [1,2,3,4,4,5,6,6]
puts dup_elements.inspect
  • Code-only answers are generally frowned upon on this site. Could you please edit your answer to include some comments or explanation of your code? Explanations should answer questions like: What does it do? How does it do it? Where does it go? How does it solve OP's problem? See: [How to anwser](https://stackoverflow.com/help/how-to-answer). Thanks! – Eduardo Baitello Oct 21 '19 at 17:17
-5

[1, 2, 3].uniq!.nil?    # => true
[1, 2, 3, 3].uniq!.nil? # => false

Notice the above is destructive

Max