
Suppose I have multiple threads blocking on a call to pthread_mutex_lock(). When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock? That is, are calls to pthread_mutex_lock() in FIFO order? If not, what, if any, order are they in? Thanks!

shanet

3 Answers


When the mutex becomes available, does the first thread that called pthread_mutex_lock() get the lock?

No. One of the waiting threads gets the lock, but which one is unspecified.

FIFO order?

A FIFO mutex is really a pattern in its own right; see Implementing a FIFO mutex in pthreads.
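
If you need that behaviour, the usual shape of the pattern is a "ticket lock" built from a plain mutex and a condition variable. Below is only a minimal sketch under assumed names (`fifo_mutex_t` and friends are made up here), with error handling omitted; see the linked question for fuller variants:

```c
#include <pthread.h>

/* Sketch of a FIFO ("ticket") lock built on top of a plain mutex and a
   condition variable. All names here are invented for this example. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    unsigned long   next_ticket;   /* next ticket to hand out              */
    unsigned long   now_serving;   /* ticket currently allowed to proceed  */
} fifo_mutex_t;

void fifo_mutex_init(fifo_mutex_t *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->cond, NULL);
    m->next_ticket = 0;
    m->now_serving = 0;
}

void fifo_mutex_lock(fifo_mutex_t *m) {
    pthread_mutex_lock(&m->lock);
    unsigned long ticket = m->next_ticket++;   /* take a ticket in arrival order */
    while (ticket != m->now_serving)
        pthread_cond_wait(&m->cond, &m->lock); /* sleep until it's our turn      */
    pthread_mutex_unlock(&m->lock);
}

void fifo_mutex_unlock(fifo_mutex_t *m) {
    pthread_mutex_lock(&m->lock);
    m->now_serving++;                          /* admit the next ticket holder   */
    pthread_cond_broadcast(&m->cond);          /* wake waiters so they re-check  */
    pthread_mutex_unlock(&m->lock);
}
```

Note the broadcast wakes every waiter just so each can re-check its ticket; that extra wakeup traffic is the price of forcing an order the implementation does not give you.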

LihO
  • By FIFO I just meant is the first thread to call `pthread_mutex_lock` the first to get the lock when it's available and so on for subsequent threads. Guess I'm implementing my own. Thank you. – shanet Feb 18 '13 at 23:33
  • @shanet: *"I just meant is the first thread to call `pthread_mutex_lock ` the first to get the lock when it's available"* - yes, that's exactly what FIFO mutex is about. – LihO Feb 18 '13 at 23:36
  • How about std::mutex? – Zhang Mar 06 '23 at 13:05

"If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy shall determine which thread shall acquire the mutex."

Aside from that, the answer to your question isn't specified by the POSIX standard. It may be random, or it may be in FIFO or LIFO or any other order, according to the choices made by the implementation.
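
If you want to see what your implementation does, a rough experiment (not from the answer; the `usleep` is only there to make the blocking order roughly predictable, which is itself not guaranteed) is to block several threads on one mutex in a known order and print the order in which they actually acquire it:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Block five threads on a mutex in a known order, release it, and print
   the order in which they acquire it. The observed order is
   implementation-dependent; do not rely on it. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    pthread_mutex_lock(&m);            /* threads queue up here             */
    printf("thread %ld acquired the mutex\n", id);
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t t[5];
    pthread_mutex_lock(&m);            /* hold the mutex so workers block   */
    for (long i = 0; i < 5; i++) {
        pthread_create(&t[i], NULL, worker, (void *)i);
        usleep(100000);                /* let thread i block before i+1 starts */
    }
    pthread_mutex_unlock(&m);          /* now let them contend for it       */
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```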

autistic

FIFO ordering is about the least efficient mutex wake order possible. Only a truly awful implementation would use it. The thread that ran most recently may be able to run again without a context switch, and the more recently a thread ran, the more of its data and code will be hot in the cache. Reasonable implementations try, most of the time, to give the mutex to the thread that held it most recently.

Consider two threads that do this:

  1. Acquire a mutex.
  2. Adjust some data.
  3. Release the mutex.
  4. Go to step 1.
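
In code, each thread's loop is nothing more than the following (a sketch; `data_mutex` and `adjust_some_data` are placeholder names):

```c
#include <pthread.h>

/* Placeholder names: data_mutex protects whatever adjust_some_data() touches. */
static pthread_mutex_t data_mutex = PTHREAD_MUTEX_INITIALIZER;

static void adjust_some_data(void) { /* ... the shared update ... */ }

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&data_mutex);   /* 1. acquire the mutex */
        adjust_some_data();                /* 2. adjust some data  */
        pthread_mutex_unlock(&data_mutex); /* 3. release the mutex */
    }                                      /* 4. go to step 1      */
    return NULL;
}
```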

Now imagine two threads running this code on a single core CPU. It should be clear that FIFO mutex behavior would result in one "adjust some data" per context switch -- the worst possible outcome.

Of course, reasonable implementations generally do give some nod to fairness. We don't want one thread to make no forward progress. But that hardly justifies a FIFO implementation!

David Schwartz
  • Your idea of "reasonable" is not compatible with most. In particular it's not compatible with realtime requirements; a single thread that rapidly locks and unlocks the mutex will prevent forward progress in any other thread. FIFO order within each priority level is the normal way mutexes should behave for robust realtime usage. – R.. GitHub STOP HELPING ICE Feb 18 '13 at 23:37
  • @R.. POSIX mutexes are not, and are not intended to be, realtime. They are intended to maximize the forward progress of the process as a whole without being manifestly unfair. – David Schwartz Feb 18 '13 at 23:38
  • Per POSIX, "The pthread_mutex_unlock() function shall release the mutex object referenced by mutex. The manner in which a mutex is released is dependent upon the mutex's type attribute. If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, **the scheduling policy shall determine which thread shall acquire the mutex**." (emphasis mine) – R.. GitHub STOP HELPING ICE Feb 19 '13 at 02:08
  • BTW, I hardly think you can say POSIX mutexes are not intended to be realtime, when they have priority inheritance protocol and priority ceiling, features designed specifically for realtime use. – R.. GitHub STOP HELPING ICE Feb 19 '13 at 02:11
  • @R..: I'm talking about default mutexes, which I assume is what the OP is asking about. – David Schwartz Feb 19 '13 at 16:21
  • Even those are required to obey scheduling policy. – R.. GitHub STOP HELPING ICE Feb 19 '13 at 16:34
  • @R..: The definition of "scheduling policy" is basically "whatever decides which thread gets to run". So all that's saying is basically "whatever decides which thread gets to run decides which thread gets to run". So it's not a "requirement" in any reasonable sense of that word. – David Schwartz Feb 19 '13 at 16:42
  • It's just trying to say that the mutex implementation need not make any effort to control which thread gets the mutex. There's no requirement that all threads become ready-to-run at the same time anyway. – David Schwartz Feb 19 '13 at 16:48
  • If different priorities are involved it **must** make such an effort. It's less clear to me if all threads are same priority. – R.. GitHub STOP HELPING ICE Feb 19 '13 at 17:04
  • Yet, it is very disappointing that the man pages for pthread_mutex_lock() say absolutely nothing about this. Yes, it is not specified in POSIX, but they should still mention that the order of waking the waiting threads can be arbitrary. That also means that, given a sufficient supply of threads trying to lock a mutex, starvation is possible, and a waiting thread is not guaranteed to acquire the lock in finite time AT ALL. This is a very important piece of information, and ALL man pages of ALL Unix systems I checked totally ignore it. No surprise space missions fail often. – Palo Nov 28 '21 at 05:40