
I'd like to optimize the following code for conciseness.

x1.each { |x| 
  x2.each { |y|
    ....
    xN.each { |z|
      yield {}.merge(x).merge(y)....merge(z)
    }
  }
}

Assume x1, x2, ..., xN are Enumerator objects.

  1. The above is not concise.
  2. It works with x1, x2 as Arrays, but not as Enumerators.
    • Because an Enumerator keeps an internal position that has to be rewound before each inner loop (see the sketch after this list).
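
For example, with external iteration an Enumerator keeps its position and has to be rewound by hand (a minimal illustration):

e = [{a: 1}, {a: 2}].each
e.next                          # => {:a=>1}
e.next                          # => {:a=>2}
e.next rescue puts "exhausted"  # raises StopIteration at the end
e.rewind                        # reset the internal position
e.next                          # => {:a=>1}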

I tried this, but without success:

[x1, x2, ..., xN].reduce(:product).map { |x| x.reduce :merge }
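
With Enumerators it fails outright, and even with Arrays it breaks for N > 2 (a minimal repro; the values are illustrative):

e1 = [{a: 1}].each
e2 = [{b: 2}].each
[e1, e2].reduce(:product)
# NoMethodError - #product is defined on Array, not on Enumerator

a1, a2, a3 = [{a: 1}], [{b: 2}], [{c: 3}]
[a1, a2, a3].reduce(:product).first
# => [[{:a=>1}, {:b=>2}], {:c=>3}]
# the tuples nest, so x.reduce(:merge) also breaks for N > 2 even with Arrays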

Any recommendations?

UPDATE

Currently solved with:

[x1, x2, ..., xN].map(&:to_a).reduce(:product).each { |x|
  yield x.flatten.reduce(:merge)
}
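
For completeness, here it is wrapped in a method so that yield has a block to call (a sketch; the method name is mine, and it assumes at least two enumerators, as in the question):

def merged_product(*enums)
  # materialize each enumerator once, build the cartesian product,
  # then flatten each tuple and fold its hashes together
  enums.map(&:to_a).reduce(:product).each do |combo|
    yield combo.flatten.reduce(:merge)
  end
end

x1 = [{a: 1}, {a: 2}].each
x2 = [{b: 3}].each
merged_product(x1, x2) { |h| p h }
# => {:a=>1, :b=>3}
# => {:a=>2, :b=>3}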

1 Answer


I'll start with point #2:

  • At least with the Enumerators I've tested ([{a: 1}, {a: 2}, {a: 3}].each), your code worked - apparently Enumerator#each either rewinds when it reaches the end, or it keeps its own internal pointer (see the sketch after this list).
  • To do what you want, you need to iterate over the Enumerator objects (especially the inner ones) so many times that calling to_a on each of them up front will not increase your time complexity at all (it will stay O(n1*n2*...*nk)).
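
For the first point, nested iteration over Enumerators built with Array#each restarts cleanly on every call (illustrative data):

x1 = [{a: 1}, {a: 2}].each
x2 = [{b: 3}, {b: 4}].each
x1.each { |x| x2.each { |y| p x.merge(y) } }
# => {:a=>1, :b=>3}
# => {:a=>1, :b=>4}
# => {:a=>2, :b=>3}
# => {:a=>2, :b=>4}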

With point #1, if calling to_a is out of the question, you can consider recursion:

# Yields every combination of one hash from each enumerable, merged together.
# Iterates the first enumerable and recurses on the rest; the base case
# (no enumerables left) yields an empty hash to start the merge chain.
def deep_merge(enum = nil, *enums)
  if enum.nil?
    yield({})                    # base case: start from an empty hash
  else
    enum.each do |x|
      deep_merge(*enums) do |h|  # recurse over the remaining enumerables
        yield h.merge(x)         # fold this level's hash into the result
      end
    end
  end
end

Now you can call deep_merge(x1, x2, ..., xN) and get the needed outcome.
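
A quick check with illustrative data:

x1 = [{a: 1}, {a: 2}].each
x2 = [{b: 3}].each
deep_merge(x1, x2) { |h| p h }
# => {:b=>3, :a=>1}
# => {:b=>3, :a=>2}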

    Even if OP can work with `.to_a`, I think that any structure that (like this answer) avoids creating a giant product is a good result. You can iterate 10 million items reasonably fast, but holding 10 million hashes in memory in order to do so could impact performance. – Neil Slater May 07 '14 at 14:44