
Due to the way iteration is performed and new entries are added, copying the contents of one set into another is very slow. Consider the following code snippet:

        // imports: java.util.Random, com.koloboke.collect.LongCursor,
        // com.koloboke.collect.set.hash.HashLongSet, com.koloboke.collect.set.hash.HashLongSets
        final Random r = new Random();
        final int num = (int) (1024 * 1024 * 2.1);
        final HashLongSet set1 = HashLongSets.newMutableSet();
        for (int i = 0; i < num; i++) {
            final long oid = r.nextLong();
            set1.add(oid);
        }

        System.out.println("populated first set..");

        final HashLongSet set2 = HashLongSets.newMutableSet();
        final LongCursor cursor = set1.cursor();
        while (cursor.moveNext()) {
            set2.add(cursor.elem());
        }
        System.out.println("populated first set..");

Is there any way to accelerate the population of the second set in this case? I understand that if I knew the expected set size up front, I could pass it to the second set's constructor and make things faster, but that is not always possible: there may be conditions in between that determine which output set a value should be inserted into, or whether it should be thrown away entirely.

Marat Safin

1 Answer


Is it faster if you create the second HashLongSet by passing the first set to the factory method:

    final HashLongSet set2 = HashLongSets.newMutableSet(set1);

UPDATE
Based on your comment, what if you do something like this:

  1. Create as many HashLongSets as you need (countSets), each with an initial capacity of set1.size() / countSets.
  2. Then run your loop to divide the data of set1 among the other sets. In every iteration, check whether the initial capacity has been reached and, if so, expand the corresponding HashLongSet by another initialCapacity: set2.ensureCapacity(set2.size() + initialCapacity). See the sketch after this list.
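A minimal sketch of these two steps with two target sets. cond1 is a hypothetical placeholder predicate standing in for whatever filter you use; ensureCapacity is Koloboke's HashContainer method:

    import com.koloboke.collect.LongCursor;
    import com.koloboke.collect.set.hash.HashLongSet;
    import com.koloboke.collect.set.hash.HashLongSets;

    public class ChunkedSplit {
        // hypothetical placeholder predicate deciding the target set
        static boolean cond1(long k) { return (k & 1) == 0; }

        static void split(HashLongSet set1) {
            final int countSets = 2;
            final int initialCapacity = set1.size() / countSets;

            final HashLongSet set2 = HashLongSets.newMutableSet(initialCapacity);
            final HashLongSet set3 = HashLongSets.newMutableSet(initialCapacity);
            long threshold2 = initialCapacity;
            long threshold3 = initialCapacity;

            final LongCursor cursor = set1.cursor();
            while (cursor.moveNext()) {
                final long k = cursor.elem();
                if (cond1(k)) {
                    if (set2.size() >= threshold2) {
                        // grow in initialCapacity-sized steps before the table fills
                        set2.ensureCapacity(set2.size() + initialCapacity);
                        threshold2 += initialCapacity;
                    }
                    set2.add(k);
                } else {
                    if (set3.size() >= threshold3) {
                        set3.ensureCapacity(set3.size() + initialCapacity);
                        threshold3 += initialCapacity;
                    }
                    set3.add(k);
                }
            }
        }
    }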
Georg Leber
  • I checked, and the above suggestion is quite fast. However, tracing the code shows that this method initializes the new set with expectedSize = set1.size(). That eliminates collisions; however, the scenario I was describing is not allowed to know the expected size in advance: there can be filters, conditional puts, etc. One workaround is to copy the keys out into a long[], shuffle it, and iterate over the shuffled array. This adds about 40 ms per million-entry set but eliminates the collisions altogether. – Gaurav Agarwal Jan 10 '18 at 12:15
  • But do you work with the sets in sequence or concurrently? Your code does not show this, but in your comment you mention conditional puts, filters, etc. When do these conditions run? – Georg Leber Jan 10 '18 at 14:01
  • Let's say I have the following logical structure, with sequential iteration: `for k in set1 { if (cond1(k)) { set2.add(k) } else if (cond2(k)) { set3.add(k) } ... }`. Now either I initialize set2, set3, ... with the size of set1, which is quite wasteful, or I hit the performance issue. It appears the problem is not with Koloboke per se: any open-addressing hashing scheme will suffer from it. Unless the iteration order is shuffled, this kind of iterate-and-insert results in a worst-case collision scenario. – Gaurav Agarwal Jan 10 '18 at 15:42
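A minimal sketch of the shuffle workaround described in the comments above. cond1 and cond2 are hypothetical placeholder predicates; the key copy uses only the cursor API shown in the question:

    import java.util.Random;

    import com.koloboke.collect.LongCursor;
    import com.koloboke.collect.set.hash.HashLongSet;
    import com.koloboke.collect.set.hash.HashLongSets;

    public class ShuffledCopy {
        // hypothetical placeholder predicates
        static boolean cond1(long k) { return (k & 1) == 0; }
        static boolean cond2(long k) { return (k & 2) == 0; }

        static void split(HashLongSet set1) {
            // 1. Copy the keys out into a plain array.
            final long[] keys = new long[set1.size()];
            int n = 0;
            final LongCursor cursor = set1.cursor();
            while (cursor.moveNext()) {
                keys[n++] = cursor.elem();
            }

            // 2. Fisher-Yates shuffle: destroys the hash-table iteration
            //    order that causes worst-case clustering in the target tables.
            final Random rnd = new Random();
            for (int i = keys.length - 1; i > 0; i--) {
                final int j = rnd.nextInt(i + 1);
                final long tmp = keys[i];
                keys[i] = keys[j];
                keys[j] = tmp;
            }

            // 3. Insert in random order; no expected size needed.
            final HashLongSet set2 = HashLongSets.newMutableSet();
            final HashLongSet set3 = HashLongSets.newMutableSet();
            for (final long k : keys) {
                if (cond1(k)) {
                    set2.add(k);
                } else if (cond2(k)) {
                    set3.add(k);
                } // else: value is thrown away
            }
        }
    }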