Long story short: in the general case, you can't know which queue you're running on, because queues target other queues in a hierarchical tree. (This is likely why dispatch_get_current_queue is deprecated -- there's really just not one answer to the question, "What queue am I running on?") You can figure out if you're on the main queue by calling [NSThread isMainThread]. The currently recommended mechanism for tracking which queue you're on is to use dispatch_queue_set_specific() and dispatch_get_specific(). See this question for all the gory details.
You said the following quote is not clear:
What happens is that all of these background threads start queueing up
operations and saturate the thread pool. Meanwhile, the OS, on the
main thread, in the process of doing some of its own work, calls
dispatch_sync, which blocks waiting for a spot to open up in the
thread pool. Deadlock.
Let me try to re-state: There are a limited number of threads available to GCD. When you dispatch a block and it begins executing, it is "consuming" one of these threads until it returns. If the executing block does blocking I/O or otherwise goes to sleep for a long period of time, you can find yourself in a situation where all the available threads are consumed. If your main thread then attempts to dispatch_sync a block to a different queue, and no more threads are available, it is possible that your main thread will block waiting for a thread to become available. ("Starved" would have been a better word than "saturated" in the original quote.)
In the general case, this situation is not, strictly speaking, a "deadlock," because other blocks might eventually complete, and the threads they were consuming should then become available. Unfortunately, based on empirical observation, GCD needs at least one thread available in order to wake up queues that are waiting for a thread as other blocks complete. Hence, in certain pathological situations, it is possible to starve GCD such that it is effectively deadlocked. Although the example on the linked page probably does meet the strict definition of a deadlock, not all cases in which you end up frozen do; that distinction doesn't change the fact that you're frozen, and so is probably not worth debating.
The standard advice on this is, somewhat unhelpfully, "Don't do that." In the ideal, you should never submit a block to GCD that could do blocking I/O or go to sleep. In the real world, this is probably at least half of what people use GCD for. What can I say? Concurrency is hard. For more detail on the thread limits associated with GCD you can check out my answer over here.
It sounds like what you're trying to achieve here is a recursive lock. The short version is that GCD queues are not locks. They can be used in a way that approximates a lock for certain situations, but locks and task queues are two different things, and they are not 100% interchangeable.
I have come to believe that it is not possible to approximate a recursive lock using GCD queues, in a way that works for all possible arrangements of queue targeting, and without incurring a greater performance penalty than would be incurred by using an old-fashioned recursive lock. For an extended discussion of recursive locking with GCD, check out my answer over here.
EDIT: Regarding the code snippet in the question specifically, here's what happens. Imagine that the thread limit is 4. (This is not the actual limit, but the principle is the same no matter what the exact limit is.) Here's the snippet, for reference:
for (int i = 0; i < A_BIG_NUMBER; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        dispatch_sync(dispatch_get_main_queue(), ^{
            // do something in the main queue
        });
    });
}
The first thing to remember, is the main thread is a run loop, with many things happening on it that you did not directly cause to happen. Let's assume, for the moment, that the main thread is busy doing some drawing.
The first pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There are 3 threads left in the GCD thread pool at this point.
The second pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There are 2 threads left in the GCD thread pool at this point.
The third pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There is 1 thread left in the GCD thread pool at this point.
The fourth pass of the loop will take 1 thread out of the thread pool and run the enqueued block. That thread will immediately block, because it's waiting to do something on the main thread, but the main thread is still busy drawing. There are now no threads left in the GCD thread pool. None of these four background threads can proceed until the main thread becomes available to run their dispatch_sync blocks.
Now, over on the main thread, in the course of drawing, in code not visible to you (in AppKit or whatever), there's a call to dispatch_sync to synchronously perform an operation on some other background queue. This code would normally take a thread from the thread pool and do its work, and eventually the main thread would continue. But in this case, there are no threads left in the pool. All the threads there ever were are waiting for the main thread to be free. But the main thread is waiting for one of those background threads to be free. This is a deadlock. Why would some OS code be doing a dispatch_sync to a background queue? I have no idea, but the claim in the linked document is that this does occur (and I would totally believe it).