
In the following code, when would queueT (a serial queue) consider “task A” completed?
At the moment aNetworkRequest switches to another thread?
Or in the doneInAnotherQueue block? (marked // 1)

In other words, when would “task B” be executed?

let queueT = DispatchQueue(label: "com.test.a")
queueT.async { // task A
    aNetworkRequest.doneInAnotherQueue() { // completed in another thread possibly
        // 1
    }
}

queueT.async { // task B
    print("It's my turn") 
} 

It would be much better if you could explain the mechanism by which a queue considers a task completed.
Thanks in advance.

  • Consider an analogy: Waiting to finish cleaning your room, vs waiting to finish writing "clean your room" on a to-do list. `aNetworkRequest.doneInAnotherQueue` is the latter, and so is `queueT.async`. – Alexander Mar 29 '21 at 18:50

4 Answers


In short, the first example starts an asynchronous network request, so the async call “finishes” as soon as that network request is submitted (but does not wait for that network request to finish).

I am assuming that the real question is that you want to know when the network request is done. Bottom line, GCD is not well suited for managing dependencies between tasks that are, themselves, asynchronous requests. Dispatching the initiation of a network request to a serial queue is undoubtedly not going to achieve what you want. (And before someone suggests using semaphores or dispatch groups to wait for the asynchronous request to finish, note that can solve the tactical issue, but it is a pattern to be avoided because it is an inefficient use of resources and, in edge cases, can introduce deadlocks.)

One pattern is to use completion handlers:

func performRequestA(completion: @escaping () -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { object in
        ...
        completion()
    }
}

Now, in practice, we would generally use the completion handler with a parameter, perhaps even a Result type:

func performRequestA(completion: @escaping (Result<Foo, Error>) -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { result in
        guard ... else {
            completion(.failure(error))
            return
        }
        let foo = ...
        completion(.success(foo))
    }
}

Then you can use the completion handler pattern, to process the results, update models, and perhaps initiate subsequent requests that are dependent upon the results of this request. For example:

performRequestA { result in
    switch result {
    case .failure(let error):
        print(error)

    case .success(let foo): 
        // update models or initiate next step in the process here
    }
}

If you are really asking how to manage dependencies between asynchronous tasks, there are a number of other, elegant patterns (e.g., Combine, custom asynchronous Operation subclass, the forthcoming async/await pattern contemplated in SE-0296 and SE-0303, etc.). All of these are elegant solutions for managing dependencies between asynchronous tasks, controlling the degree of concurrency, etc.
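For example, a minimal sketch of the async/await pattern contemplated in SE-0296 might look like the following. The functions fetchUser and fetchOrders are hypothetical stand-ins for real asynchronous requests, not part of any actual API:

```swift
import Dispatch

// Hypothetical stand-ins for real asynchronous requests.
func fetchUser() async -> String {
    "user-42"
}

func fetchOrders(for user: String) async -> [Int] {
    user == "user-42" ? [1, 2, 3] : []
}

// Each `await` suspends until that request completes, so the second request
// provably starts only after the first one finishes — no queues or semaphores.
func loadEverything() async -> [Int] {
    let user = await fetchUser()
    return await fetchOrders(for: user)
}
```

The dependency between the two requests is expressed directly in the control flow, rather than being smuggled through queues or completion-handler nesting.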

We probably would need to better understand the nature of your broader needs before we made any specific recommendations. You have asked the question about a single dispatch, but the question probably is best viewed from a broader context of what you are trying to achieve. For example, I'm assuming you are asking because you have multiple asynchronous requests to initiate: Do you really need to make sure that they happen sequentially and lose all the performance benefits of concurrency? Or can you allow them to run concurrently and you just need to know when all of the concurrent requests are done and how to get the results in the correct order? And might you have so many concurrent requests that you might need to constrain the degree of concurrency?

The answers to those questions will probably influence our recommendation of how best to manage your multiple asynchronous requests. But the answer almost certainly is not a GCD queue.
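To illustrate the concurrent-but-ordered case described above, here is a hedged sketch using DispatchGroup. simulateRequest is a hypothetical stand-in for a real asynchronous API; slotting results by index is what preserves the original order even though completions arrive in any order:

```swift
import Foundation

// Hypothetical stand-in for a real asynchronous request that completes on
// a background thread after a random delay.
func simulateRequest(_ index: Int, completion: @escaping (Int) -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + .milliseconds(Int.random(in: 1...20))) {
        completion(index * 10)
    }
}

let group = DispatchGroup()
var results = [Int?](repeating: nil, count: 3)
let lock = NSLock() // protects the shared array across worker threads

for i in 0..<3 {
    group.enter()
    simulateRequest(i) { value in
        lock.lock()
        results[i] = value // slot by index, so order survives concurrency
        lock.unlock()
        group.leave()
    }
}

group.wait() // in UI code you would use group.notify(queue: .main) { ... } instead
print(results.compactMap { $0 }) // [0, 10, 20]
```

The requests run concurrently (so you keep the latency benefit), yet the collated result set comes back in its original order.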

Rob
  • Thanks Rob, you are right. I actually need to send some requests sequentially (to make sure that the server has processed the last request before sending a new one). What I'm currently doing is using a DispatchSemaphore(value: 1) with a global queue (it doesn't matter whether the queue is serial or concurrent, right?). Am I using a good solution? Or do you have any better recommendation? – anb Mar 30 '21 at 08:16
  • First, is it really essential to process them sequentially? I.e., are the results from one request truly dependent upon the results of the prior request? The reason I ask is that you pay a surprisingly large performance penalty processing them sequentially (e.g. it can, because of network latency effects, easily make it 3-4 times slower). – Rob Mar 30 '21 at 15:51
  • In answer to your question, like I said above, we tend to avoid the semaphore pattern because it blocks a thread unnecessarily (and the number of worker threads is quite limited), but it works. Patterns like completion handlers, `Operation` subclass, Combine publishers, etc., all avoid that problem (and offer capabilities like cancelable requests, etc.). But if you have just this one serial queue, the dispatch semaphore approach is a simple, if inefficient and limited, solution. – Rob Mar 30 '21 at 15:51
  • 1
    E.g. See [this answer](https://stackoverflow.com/a/32322851/1271826) for discussion of `Operation` and Combine approaches. Or if you need to perform a bunch of requests and build a result set in the same order, see [this answer](https://stackoverflow.com/a/65705491/1271826) for an example (it’s a Parse example, but it’s a common pattern for any asynchronous API where you want to perform requests concurrently but collate the results back into their original order). – Rob Mar 30 '21 at 16:07
  • Finally, if this is your own backend and you have to return an ordered set of results that cannot be performed concurrently, consider making an endpoint that returns a single sorted result set, eliminating network latency effects. It’s going to be less efficient than concurrent patterns, but more efficient than issuing a sequential series of separate queries. – Rob Mar 30 '21 at 16:09

You can do a simple check:

    let queueT = DispatchQueue(label: "com.test.a")
    queueT.async { // task A
        DispatchQueue(label: "com.test2.a").async { // create another queue inside
            for i in 0..<6 {
                print(i)
            }
        }
    }

    queueT.async { // task B
        for i in 10..<20 {
            print(i)
        }
    }

You'll get a different output on each run. This means that, yes, once the work is handed off to another thread, the task is considered done.

Shehata Gamal
  • yeah, I did a similar check as well before I asked this question. But I don't understand how it works under the hood. – anb Mar 29 '21 at 11:42

A GCD work item is complete when the closure you pass returns. So for your example, I'm going to rewrite it to make the function calls and parameters more explicit (rather than using trailing closure syntax).

queueT.async(execute: {
    // This is a function call that takes a closure parameter. When this
    // function returns, this closure will continue. Whether that happens before
    // or after running closureParameter is an internal detail of doneInAnotherQueue.
    aNetworkRequest.doneInAnotherQueue(closureParameter: { ... })

    // At this point, the closure is complete. What doneInAnotherQueue() does with
    // its closure is its business.
})

Assuming that doneInAnotherQueue() executes its closure parameter "sometime in the future", your task B will likely run before that closure runs (it may not; it's really a race at that point, but probably). If doneInAnotherQueue() blocks on its closure before returning, then closureParameter will definitely run before task B.

There is absolutely no magic here. The system has no idea what doneInAnotherQueue does with its parameter. It may never run it. It may run it immediately. It may run it sometime in the future. The system just calls doneInAnotherQueue() and passes it a closure.

I rewrote async in normal "function with parameters" syntax to make it even more clear that async() is just a function, and it takes a closure parameter. It also isn't magic. It's not part of the language. It's just a normal function in the Dispatch framework. All it does is take its parameter, put it on a dispatch queue, and return. It doesn't execute anything. There are just closures that get put on queues, scheduled, and executed.
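You can see this deterministically with a sketch (my own illustration, not the question's API): storeForLater below merely stores the closure it receives, the way a networking function might, so the enclosing work item completes without ever waiting for it. sync is used instead of async purely to make the ordering reproducible:

```swift
import Dispatch

// A stand-in for doneInAnotherQueue: it just stores the closure it is given.
var stored: (() -> Void)?
func storeForLater(_ closure: @escaping () -> Void) {
    stored = closure // just saved; might run later, might run never
}

let queue = DispatchQueue(label: "demo")
var order: [String] = []

queue.sync { // "task A": complete as soon as storeForLater() returns
    storeForLater { order.append("inner closure") }
}
queue.sync { // "task B": runs even though the stored closure hasn't run yet
    order.append("task B")
}
stored?() // only now does the "network completion" closure execute

print(order) // ["task B", "inner closure"]
```

Task B runs to completion before the closure passed into task A ever executes, because the queue only tracks the outer closure, not anything it handed off.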

Swift is in the process of adding structured concurrency, which will add more language-level concurrency features that will allow you to express much more advanced things than the simple primitives provided by GCD.

Rob Napier

Your task A returns straight away. The act of dispatching work to another queue is itself synchronous: it just hands the block off and returns. Think of the block (the trailing closure) after doneInAnotherQueue as just an argument to the doneInAnotherQueue function, no different from passing an Int or a String. You pass that block along and then return immediately at the closing brace of task A.

Shadowrun