
Here is the deal: I'm attempting to walk a tree, but I want to do so concurrently. So each time I visit a node, I need to concurrently walk all of its child nodes, and so on. However, I do not want to wait for the whole DispatchGroup to finish before getting a result, since that would be the worst-case scenario in Big O terms. Instead, when one work item succeeds, I want it to cancel all the other DispatchWorkItems and call group.leave() on their behalf. I tried to do that by counting the tasks that have finished, but obviously I'm doing something wrong or misunderstanding how to use this. The code below was written just as an example to test the idea; consider that in the real-world situation, each DispatchWorkItem would recursively call a handle function on the current node of the tree.

func handle(completion: @escaping (Int) -> Void) {
    var result: Int = 0


    var count = 7
    let group = DispatchGroup()
    let queue = DispatchQueue(label: "q", attributes: .concurrent)

    var items = [DispatchWorkItem]()

    let item1 = DispatchWorkItem(flags: .inheritQoS) {
        for _ in 0...1000 { continue }
        count -= 1
        group.leave()
        print("left 1")
    }
    let item2 = DispatchWorkItem(flags: .inheritQoS) {
        for _ in 0...2000 { continue }
        count -= 1
        group.leave()
        print("left 2")
    }
    let item3 = DispatchWorkItem(flags: .inheritQoS) {
        for _ in 0...6000 { continue }
        count -= 1
        group.leave()
        print("left 3")
    }
    let item4 = DispatchWorkItem(flags: .inheritQoS) {
        for _ in 0...3000 { continue }
        result = 42

        items.forEach { $0.cancel() }

        for _ in 0..<count {
            group.leave()
        }

        print("ok; left 4")
    }
    let item5 = DispatchWorkItem(flags: .inheritQoS) {
        for _ in 0...50000 { continue }
        count -= 1

        group.leave()
        print("left 5")
    }
    let item6 = DispatchWorkItem(flags: .inheritQoS) {
        for _ in 0...6000 { continue }
        count -= 1
        group.leave()
        print("left 6")
    }
    let item7 = DispatchWorkItem(flags: .inheritQoS) {
        for _ in 0...8000 { continue }
        count -= 1
        group.leave()
        print("left 7")
    }

    items.append(item1)
    items.append(item2)
    items.append(item3)
    items.append(item4)
    items.append(item5)
    items.append(item6)
    items.append(item7)

    for item in items {
        group.enter()
        queue.async(execute: item)
    }

    group.notify(queue: queue) {
        completion(result)
    }
}

func test() {
    handle { result in
        print(result)
    }

}
Vadim Kotov
Igor Muzyka
  • "walk the tree"? "walk all of it's nodes"? What do they mean? – El Tomato Aug 22 '17 at 01:16
  • A tree, the data structure: like a binary tree, but with no limit on the number of child nodes. – Igor Muzyka Aug 22 '17 at 01:17
  • By the way, I don't know how intensive your individual nodes are, but remember that the performance gain of parallel processing can easily be offset (if not completely wiped out and reversed) by the overhead of coordinating all of these threads. Make sure you've got sufficient processing going on in each dispatched piece of code. – Rob Aug 23 '17 at 01:08

2 Answers


You can't read from and write to your count variable from multiple threads at once; accessing and mutating count concurrently like this is a data race. You need to protect count with a mutex lock (or another synchronization mechanism). Better yet, design this so it doesn't need counting at all.
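A minimal sketch of one way to do that synchronization, funneling all access to the counter through a dedicated serial queue (the `Counter` type and queue labels here are illustrative, not the poster's code):

```swift
import Foundation

// All reads and writes go through one serial queue, so decrements
// from concurrent work items can never interleave mid-update.
final class Counter {
    private var value: Int
    private let syncQueue = DispatchQueue(label: "counter.sync")

    init(_ initial: Int) { value = initial }

    var current: Int {
        return syncQueue.sync { value }
    }

    // Decrement atomically and return the new value.
    func decrement() -> Int {
        return syncQueue.sync {
            value -= 1
            return value
        }
    }
}

let counter = Counter(7)
let group = DispatchGroup()
let workQueue = DispatchQueue(label: "work", attributes: .concurrent)

for _ in 1...7 {
    group.enter()
    workQueue.async {
        _ = counter.decrement()   // safe to call from any thread
        group.leave()
    }
}
group.wait()
print(counter.current)   // always 0, never a torn or lost update
```

An `NSLock` or `os_unfair_lock` would work just as well; the point is that every touch of the shared value goes through the same synchronization primitive.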

Smartcat

A couple of thoughts:

  1. If you want to cancel a time-consuming task, you need to periodically check isCancelled. See https://stackoverflow.com/a/38372384/1271826.

  2. If you are going to update count or items from multiple threads, you have to synchronize that interaction (e.g. with a lock or dedicated serial queue). Int and Array are not thread-safe, so you'll have to manage that yourself.

  3. Keeping track of count, using DispatchGroup, and maintaining your own collection of DispatchWorkItem references is going to take a little work. Operation queues get you out of all of that. That having been said, if maximum efficiency is required, then perhaps you want to stay within dispatch queues, but it's just a lot more work.
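Putting points 1 and 2 together, here is a hedged sketch of the "first success cancels the rest" pattern (illustrative, not the asker's real tree walk): each item polls its own `isCancelled` between chunks of work, shared state is guarded by a serial queue, and `group.leave()` is guaranteed by wrapping `perform()` rather than by counting.

```swift
import Foundation

let stateQueue = DispatchQueue(label: "state")   // guards result and items
let workQueue = DispatchQueue(label: "work", attributes: .concurrent)
let group = DispatchGroup()

var items = [DispatchWorkItem]()
var result: Int?

for id in 1...5 {
    var item: DispatchWorkItem!
    item = DispatchWorkItem {
        for _ in 0..<1_000 {
            if item.isCancelled { return }   // bail out cooperatively
            // ...one slice of real work would go here...
        }
        stateQueue.sync {
            if result == nil {               // first finisher wins
                result = id
                items.forEach { $0.cancel() }
            }
        }
    }
    items.append(item)
}

for item in items {
    group.enter()
    workQueue.async {
        item.perform()     // returns immediately if already cancelled
        group.leave()      // leave exactly once, cancelled or not
    }
}

group.wait()
print(result!)             // id of whichever item finished first
```

Because every item is performed through the wrapper, a cancelled item still balances its enter with a leave, which is what the manual counting in the question was trying (and failing) to do.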

Rob