
I have 3 threads, resumed at the same time, calling the same function with different arguments. How can I force a thread to leave the critical section and pass it to another thread?

When I run the code below, one thread's while loop iterates many times before another thread finally enters the critical section (and that thread then also loops many times).

#include <windows.h>

CRITICAL_SECTION critical;   // initialized elsewhere with InitializeCriticalSection(&critical)

DWORD WINAPI ClientThread(LPVOID lpParam)
{
    // thread logic
    while (true)
    {
        EnterCriticalSection(&critical);
        // thread logic
        LeaveCriticalSection(&critical);
        Sleep(0);
    }
    // thread logic
    return 0;
}

In other words, how can I prevent a thread from instantly reentering the critical section?

Mike
    If you want it to run sequentially, why do you need multiple threads? Just run your code sequentially in one thread. – kichik Mar 29 '20 at 19:39
  • It does not have to be sequential, but right now the order is like A->A->A->A->A->A->A->A->B->B->B->B->B->B->B->B. I feel like too many resources are going to just one thread. – Mike Mar 29 '20 at 19:44
    Still not sure why you need threads for this, but maybe [this other question](https://stackoverflow.com/questions/23515630/windows-critical-sections-fairness) will help. – kichik Mar 29 '20 at 19:46
  • If all threads execute the same code, what difference does it make which thread runs, or in which order? And if only a **single** thread at a time can execute the code, you don't need more than one thread running `ClientThread`. – RbMm Mar 29 '20 at 19:47
    This sounds very much like an [XY problem](http://xyproblem.info/) to me. – Andreas Wenzel Mar 29 '20 at 20:21
    You may want to read about [ticket locks](https://en.wikipedia.org/wiki/Ticket_lock). However, I doubt that this is what you need, because I am still convinced it is an XY problem. – Andreas Wenzel Mar 29 '20 at 20:34
  • @AndreasWenzel I use threads to make calculations faster, and for some operations I have to use shared resources. The functions are not the same; different operations are called based on the argument. I use the while(true) loop because it is an NP-hard problem and I have limited time. And of course there is more than just the while loop; I just didn't want to paste code that is unnecessary for solving the problem. – Mike Mar 29 '20 at 21:23
  • @Mike: Threads don't need synchronization for accessing read-only shared resources. And if every thread has its own copy of writable resources, then there is no need for synchronization for those resources, either. Synchronization is only required for writable resources which are shared. If accessing these resources causes so much thread contention that lock fairness becomes a major issue, then it is likely that your code would run faster if you only had a single thread. – Andreas Wenzel Mar 29 '20 at 22:00

2 Answers


You can't directly ask a thread to leave the critical section. The thread will leave it when it has finished executing the section.

So the only options are to prevent it from entering the critical section in the first place, or to "ask" it to finish early, e.g. by repeatedly checking an atomic flag inside the section and stopping the thread's work once the flag has been set.
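
For example, a minimal sketch of the "finish early" idea, using `std::atomic<bool>` for simplicity and assuming a hypothetical flag named `stopRequested` that some controlling thread sets:

#include <windows.h>
#include <atomic>

std::atomic<bool> stopRequested{ false };   // hypothetical flag, set by a controlling thread
CRITICAL_SECTION critical;

DWORD WINAPI ClientThread(LPVOID lpParam)
{
    while (!stopRequested.load(std::memory_order_relaxed))
    {
        EnterCriticalSection(&critical);
        // do one bounded chunk of work, checking the flag between chunks
        if (stopRequested.load(std::memory_order_relaxed))
        {
            LeaveCriticalSection(&critical);
            break;                          // leave the section and stop early
        }
        LeaveCriticalSection(&critical);
    }
    return 0;
}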

If you want to prevent a thread from immediately reentering a section after it has left, you can yield it; this reschedules the execution of threads. If you want an exact ordering of the threads (A->B->C->D->A->B ...), you need to write a custom scheduler or a custom "fair mutex" that detects other waiting threads.

Edit:
Such a function would be `BOOL SwitchToThread();` (see the `SwitchToThread` documentation).
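
A rough sketch of how the question's loop could yield right after releasing the lock (note that, as the comments below point out, this reschedules the thread but still does not guarantee that a particular waiter gets the lock next):

DWORD WINAPI ClientThread(LPVOID lpParam)
{
    while (true)
    {
        EnterCriticalSection(&critical);
        // thread logic
        LeaveCriticalSection(&critical);

        // Yield the remainder of this time slice to another ready thread;
        // SwitchToThread returns FALSE if no other thread was ready to run.
        if (!SwitchToThread())
            Sleep(1);   // optional fallback when nothing else was ready
    }
    return 0;
}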

Fabian Keßler
  • Yes, I want to prevent a thread from reentering a section directly. I don't care which thread will run next. I tried both YieldProcessor() and SwitchToThread() but they do nothing. – Mike Mar 29 '20 at 20:12
  • @Mike: If that's what you want, then you're going to have to actually code that logic, as Fabian said. There's not just a simple function that will ensure that for you. Also, you should pretty much never be in a situation where this is the behavior you want (especially on modern, multi-core CPUs), so you're probably implementing the wrong design. – Nicol Bolas Mar 29 '20 at 22:18

As mentioned in the other answer and in the comments, you need a fair mutex, and a ticket lock is one way to implement it.
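
For reference, a minimal ticket-lock sketch (illustrative only: it spins instead of blocking, so it is not something you would use as-is under heavy contention):

#include <atomic>
#include <cstddef>

class ticket_lock
{
public:
  void lock()
  {
    // Take the next ticket and spin until it is the one being served.
    const std::size_t my_ticket = next.fetch_add(1, std::memory_order_relaxed);
    while (serving.load(std::memory_order_acquire) != my_ticket)
      ; // spin (a real implementation would pause or yield here)
  }

  void unlock()
  {
    // Hand the lock to the next ticket in FIFO order.
    serving.fetch_add(1, std::memory_order_release);
  }

private:
  std::atomic<std::size_t> next{ 0 };
  std::atomic<std::size_t> serving{ 0 };
};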

There's another way, based on a binary semaphore, and it is actually close to what a Windows critical section used to be. Like this:

#include <windows.h>
#include <atomic>
#include <cstddef>
#include <stdexcept>

class old_cs
{
public:
  old_cs()
  {
     // Auto-reset event, initially non-signaled; waiters block on it until the owner hands over the lock.
     event = CreateEvent(NULL, /* bManualReset = */ FALSE, /* bInitialState = */ FALSE, NULL);
     if (event == NULL) throw std::runtime_error("out of resources");
  }

  ~old_cs()
  {
     CloseHandle(event);
  }

  void lock()
  {
    // If another thread already holds or waits for the lock, block until the owner signals the event.
    if (count.fetch_add(1, std::memory_order_acquire) > 0)
      WaitForSingleObject(event, INFINITE);
  }

  void unlock()
  {
    // If at least one thread is waiting, wake exactly one waiter.
    if (count.fetch_sub(1, std::memory_order_release) > 1)
      SetEvent(event);
  }

  old_cs(const old_cs&) = delete;
  old_cs(old_cs&&) = delete;
  old_cs& operator=(const old_cs&) = delete;
  old_cs& operator=(old_cs&&) = delete;
private:
  HANDLE event;
  std::atomic<std::size_t> count{ 0 };
};
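
Usage is the same as for any type with `lock()`/`unlock()` members; for example, a hypothetical adaptation of the question's loop, assuming a shared instance named `fair_section`:

#include <mutex>   // std::lock_guard

old_cs fair_section;   // shared by all client threads (hypothetical name)

DWORD WINAPI ClientThread(LPVOID lpParam)
{
    while (true)
    {
        std::lock_guard<old_cs> guard(fair_section);   // calls lock() here, unlock() at the end of the iteration
        // thread logic
    }
    return 0;
}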

You may find the following in the Critical Section Objects documentation:

> Starting with Windows Server 2003 with Service Pack 1 (SP1), threads waiting on a critical section do not acquire the critical section on a first-come, first-serve basis. This change increases performance significantly for most code. However, some applications depend on first-in, first-out (FIFO) ordering and may perform poorly or not at all on current versions of Windows (for example, applications that have been using critical sections as a rate-limiter). To ensure that your code continues to work correctly, you may need to add an additional level of synchronization. For example, suppose you have a producer thread and a consumer thread that are using a critical section object to synchronize their work. Create two event objects, one for each thread to use to signal that it is ready for the other thread to proceed. The consumer thread will wait for the producer to signal its event before entering the critical section, and the producer thread will wait for the consumer thread to signal its event before entering the critical section. After each thread leaves the critical section, it signals its event to release the other thread.

So the algorithm in this post is a simplified version of what a critical section used to be in Windows XP and earlier.

The above algorithm is not a complete critical section: it lacks recursion support, spinning, and handling of low-resource situations.

It also relies on the fairness of Windows events.
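
As an illustration of the two-event scheme described in the quoted documentation, a rough sketch (hypothetical names; event creation, error handling and shutdown are omitted) could look like this:

#include <windows.h>

CRITICAL_SECTION cs;        // protects the shared data
HANDLE producerMayRun;      // auto-reset event, created initially signaled
HANDLE consumerMayRun;      // auto-reset event, created initially non-signaled

DWORD WINAPI Producer(LPVOID)
{
    for (;;)
    {
        WaitForSingleObject(producerMayRun, INFINITE);   // wait for the consumer's go-ahead
        EnterCriticalSection(&cs);
        // produce one item
        LeaveCriticalSection(&cs);
        SetEvent(consumerMayRun);                        // let the consumer run next
    }
}

DWORD WINAPI Consumer(LPVOID)
{
    for (;;)
    {
        WaitForSingleObject(consumerMayRun, INFINITE);   // wait for the producer's go-ahead
        EnterCriticalSection(&cs);
        // consume one item
        LeaveCriticalSection(&cs);
        SetEvent(producerMayRun);                        // let the producer run next
    }
}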

Alex Guteniev