
I have two threads, each with a loop that does some work. The only real behavioral difference between the two is that Thread 1 does a sleep_for before locking the mutex in its loop; Thread 2 does not. The goal is to keep Thread 1 light on the system: I do not want it hammering the CPU at 100%. Thread 2 is temporary in nature, so it shouldn't sleep while it's working. But the amount of work Thread 2 does per iteration is heavy (hence the sleep_for after the lock in my example, to simulate a long-held lock due to real work).

Thread 2 gets all the execution time; in fact, Thread 1 gets only one iteration in over 4 seconds. What I expected was for Thread 1 to block on the std::mutex, wait for Thread 2 to unlock it, and then acquire the lock itself. Doesn't a std::mutex act as a sort-of queue? That is, even though Thread 2 doesn't sleep before acquiring the lock, when it attempts to lock on its next iteration, shouldn't it have to wait for the lock to be released, since Thread 1 was next in line to acquire it? Even with the sleeps in there, I expected Thread 1 and Thread 2 to each get a turn.

Where am I wrong here? What explains this behavior, and what guarantees does the C++ standard provide (or not provide)?

#include <thread>
#include <atomic>
#include <mutex>
#include <iostream>
#include <chrono>

using namespace std::chrono_literals;

std::unique_lock<std::mutex> LockMutex()
{
    static std::mutex m;
    return std::unique_lock<std::mutex>{m};
}

int main()
{
    std::atomic_bool running{true};

    std::thread t1{[&] {
        while (running)
        {
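            // Sleep before taking the lock to keep this thread light on the CPU.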
            std::this_thread::sleep_for(100ms);
            auto lock = LockMutex();
            std::cout << "Thread 1 :: Time Slice\n";
        }
    }};

    std::thread t2{[&] {
        while (running)
        {
            auto lock = LockMutex();
            std::cout << "Thread 2 :: Time Slice\n";
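            // Sleep while still holding the lock to simulate heavy per-iteration work.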
            std::this_thread::sleep_for(100ms);
        }
    }};

    std::this_thread::sleep_for(4s);

    running = false;
    t1.join();
    t2.join();
}

Output I get is:

Thread 2 :: Time Slice
Thread 2 :: Time Slice
Thread 2 :: Time Slice
(... "Thread 2 :: Time Slice" repeated 40 times in total ...)
Thread 1 :: Time Slice


Live code sample here for testing.

void.pointer
    If thread 2 is running, still has plenty of time slice left, wants the mutex, and the mutex is not locked, why would a sane implementation force thread 2 to stop running and then make thread 1 start up, on completely cold caches, just on the assumption that maybe you would prefer thread 1 to run over thread 2 with no indication that you have any such preference? – David Schwartz Mar 08 '19 at 20:33
  • Your code does not reflect your question. Your threads don't do some work, they sleep. That is, they are interactive threads (threads that wait for something to happen, quickly react to it, and then go back to waiting again). Interactive threads are treated very differently by the scheduler from threads that do work and they should never hold a lock while they're sleeping. – David Schwartz Mar 08 '19 at 20:41
  • Basically, thread 2 is both very, very polite (reacting very quickly and then yielding the CPU) and very, very rude (blocking other threads 99.99% of the time forever). Given its rudeness, it makes sense to just let it finish all the work it's going to do so it gets out of the way of other threads. Generally, the best thing an implementation can do with a rude thread is let a rude thread just get its job done so it stops being rude and everything else can be efficient. But yours is trying to do an infinite amount of work. – David Schwartz Mar 08 '19 at 20:43
  • @DavidSchwartz But thread 1 isn't holding a lock while sleeping. It sleeps, then locks the mutex. And at some point I'd expect that a thread waiting on a mutex to be unlocked has a higher priority. Of course during sleep, I do not expect the thread to have priority. But the sleep and wait on mutex are 2 different things in my mind. – void.pointer Mar 09 '19 at 23:40
  • I do appreciate the comments but I have a feeling all of this is "unspecified" from a C++ standpoint. The behaviors might be completely different between Linux and Windows, for example. I am looking for a platform-agnostic solution, but I guess that does require a queue (sort of like how Boost.Asio's Strand works) or a condition variable to signal to other threads it's "their turn". But I shouldn't depend on the OS to behave a certain way for me. – void.pointer Mar 09 '19 at 23:42
  • Thread 2 is holding a lock while sleeping. That makes it both very rude and very polite. The thread waiting on a mutex to be unlocked can have a higher priority, but so what? It never contends for the mutex because thread 2 never leaves it unlocked. What are you looking for a platform agnostic solution for? Just don't sleep while holding a lock and don't write code to do work you don't want done if that work could block work you do want done. – David Schwartz Mar 10 '19 at 00:04
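
For what it's worth, the platform-agnostic turn-taking described in the comments above can be built from a std::condition_variable plus an explicit turn flag. The following is only a minimal sketch of that idea, not code from the question: the worker function and the turn variable are invented for illustration. Note that each thread does its simulated work (the sleep_for) only after releasing the lock, as David Schwartz advises.

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

using namespace std::chrono_literals;

std::mutex m;
std::condition_variable cv;
int turn = 1; // which thread may run next (illustrative turn flag)

void worker(const int id, std::atomic_bool& running)
{
    while (running)
    {
        std::unique_lock<std::mutex> lock{m};
        // Block until it is this thread's turn (or shutdown begins).
        // cv.wait releases the mutex while blocked, so the other
        // thread cannot be starved the way it is with a bare mutex.
        cv.wait(lock, [&] { return !running || turn == id; });
        if (!running)
            break;
        std::cout << "Thread " << id << " :: Time Slice\n";
        turn = (id == 1) ? 2 : 1; // hand the turn to the other thread
        lock.unlock();            // release the lock first...
        cv.notify_one();
        std::this_thread::sleep_for(100ms); // ...then do the "work" unlocked
    }
}

int main()
{
    std::atomic_bool running{true};
    std::thread t1{[&] { worker(1, running); }};
    std::thread t2{[&] { worker(2, running); }};

    std::this_thread::sleep_for(4s);

    running = false;
    cv.notify_all(); // wake any thread still waiting for its turn
    t1.join();
    t2.join();
}

With this explicit hand-off the two threads strictly alternate regardless of how the scheduler or the std::mutex implementation picks the next lock owner; the C++ standard itself makes no fairness guarantee for std::mutex.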

0 Answers