4

I was asked the following question in an interview:

There is a fixed-size queue of tasks. Threads want to enqueue tasks. If the queue is full, they should wait. Thread order must be preserved: if thread1 arrived with task1 and thread2 arrived with task2 after it, task1 should enter the queue before task2.

Other threads want to dequeue tasks and execute them. If the queue is empty, they should wait, and their order should also be preserved: if t3 came before t4, t3 should dequeue a task before t4.

How to achieve this (in pseudo-code)?

alk
mayap

3 Answers


To synchronize access to a finite number of resources, you typically use a semaphore. Google it to get the general idea.

The difficult part is to preserve the order of the blocking threads.

I found this project which contains a FifoSemaphore in C#: http://dcutilities.codeplex.com
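The difficult part mentioned above - releasing waiters in arrival order - can be sketched with a queue of per-waiter events. This is just an illustrative sketch of the idea, not the codeplex implementation, and the class/field names are my own:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative FIFO semaphore: waiters are released strictly in arrival order.
public sealed class FifoSemaphore
{
    private readonly object _lock = new object();
    private readonly Queue<ManualResetEventSlim> _waiters = new Queue<ManualResetEventSlim>();
    private int _count;

    public FifoSemaphore(int initialCount) { _count = initialCount; }

    public void Wait()
    {
        ManualResetEventSlim gate;
        lock (_lock)
        {
            // Fast path: a unit is free and nobody is queued ahead of us.
            if (_count > 0 && _waiters.Count == 0) { _count--; return; }
            gate = new ManualResetEventSlim(false);
            _waiters.Enqueue(gate);   // arrival order is remembered here
        }
        gate.Wait();                  // blocks until Release() reaches our turn
    }

    public void Release()
    {
        lock (_lock)
        {
            if (_waiters.Count > 0)
                _waiters.Dequeue().Set();  // hand the unit to the oldest waiter
            else
                _count++;
        }
    }
}
```

Note that a released unit is handed directly to the oldest waiter instead of incrementing the count, so a newly arriving thread can never overtake a queued one.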

VMAtm
Jan

If the producer threads wait on a numEmptySpaces semaphore for access to the queue, this behaviour will often happen anyway, since a semaphore's wait queue is usually implemented as FIFO - but most semaphore implementations do not guarantee it.

Guaranteeing this kind of behaviour is very awkward because it's difficult to define the requirement of 'thread order':

How can you define which thread arrived first?

If the 'first thread' acquires some kind of lock that prevents other threads from proceeding, subsequent threads may herd in 'immediately' and so be subject to whatever lock-queue ordering is supplied by the OS.

The only thing I can think of is to force every producer thread to acquire a lock-free time-stamp or sequence-number before attempting to lock/queue anything. This could be done with a 'normal' atomic increment instruction. When a producer subsequently 'gets in' by acquiring a 'numEmptySpaces' unit and locks the queue, it can enqueue itself in sequence-number order onto the queue.

I'm not sure whether a standard BlockingCollection can be used for this. You might be able to 'OrderBy' the entries by sequence number, but I'm not sure that operation locks the queue - it should, but I can't say for certain. Besides, the sequence number would have to be added as a private member in a BlockingCollection descendant, and the atomic increment result maintained as state for each task - you would have to add it to the Task members.

I would be tempted to build my own BlockingQueue class with a 'normal' queue, a couple of semaphores and a mutex, inserting new tasks in sequence-number order into the queue once the numEmptySpaces unit and the queue mutex have been acquired. The atomic increment result could then be kept in a stack/auto variable.
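A sketch of that scheme, under my own assumptions about names (OrderedBlockingQueue, _emptySlots standing in for numEmptySpaces) and with a sorted map standing in for the sorted insert:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Producers take a lock-free ticket first, then wait for a free slot and
// insert in ticket order; consumers wait for an item and take the oldest ticket.
public sealed class OrderedBlockingQueue<T>
{
    private readonly SemaphoreSlim _emptySlots;                    // counts free queue slots
    private readonly SemaphoreSlim _items = new SemaphoreSlim(0);  // counts queued items
    private readonly object _mutex = new object();
    private readonly SortedDictionary<long, T> _queue = new SortedDictionary<long, T>();
    private long _nextTicket;                                      // atomic sequence source

    public OrderedBlockingQueue(int capacity)
    {
        _emptySlots = new SemaphoreSlim(capacity, capacity);
    }

    public void Enqueue(T item)
    {
        // 1. Lock-free sequence number, taken before any blocking.
        long ticket = Interlocked.Increment(ref _nextTicket);
        // 2. Acquire a free slot, then insert in ticket order under the mutex.
        _emptySlots.Wait();
        lock (_mutex) { _queue.Add(ticket, item); }
        _items.Release();
    }

    public T Dequeue()
    {
        _items.Wait();
        T item;
        lock (_mutex)
        {
            // SortedDictionary enumerates in key order: smallest ticket first.
            var e = _queue.GetEnumerator();
            e.MoveNext();
            item = e.Current.Value;
            _queue.Remove(e.Current.Key);
        }
        _emptySlots.Release();
        return item;
    }
}
```

As the answer says, ordering is only as good as the ticket: a thread that takes a ticket and is then descheduled before _emptySlots.Wait() still holds its place, which is exactly the 'when did a thread arrive?' ambiguity described above.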

This might be justified as an interview question, but I would have to be threatened with dismissal to actually implement it in production code. It's very difficult to think of a situation where it might be essential. The downsides of extra overhead and contention outweigh the dubious upside in everything I can think of.

I have similar reservations about attempting to explicitly maintain any sort of ordering at the dequeue/execute end. It would be messy to try to ensure that some 'checkpoint' in dequeued tasks was reached in sequence-number order. It would require cooperation from the task, which would need a private synchro object member to signal when it had reached its checkpoint. Don't ever try it :)

VMAtm
Martin James
  1. Simple solution: .NET 4.0 introduced the System.Collections.Concurrent namespace, and its classes work quite well - I couldn't provoke any errors from them.
    ConcurrentQueue and BlockingCollection are the place to start your research. But I think your question is not about the standard solution - that would be a bad answer in the interview - so:
  2. Solution based on Jeffrey Richter's book information:
    Base code (C#):

    internal sealed class SynchronizedQueue<T> {
        private readonly Object m_lock = new Object();
        private readonly Queue<T> m_queue = new Queue<T>();
    
        public void Enqueue(T item) {
            Monitor.Enter(m_lock);
            try {
                // After enqueuing an item, wake up any/all waiters
                m_queue.Enqueue(item);
                Monitor.PulseAll(m_lock);
            } finally {
                Monitor.Exit(m_lock);   // release the lock even if Enqueue throws
            }
        }
    
        public T Dequeue() {
            Monitor.Enter(m_lock);
            try {
                // Loop while the queue is empty (the condition)
                while (m_queue.Count == 0)
                    Monitor.Wait(m_lock);
                // Dequeue an item from the queue and return it for processing
                return m_queue.Dequeue();
            } finally {
                Monitor.Exit(m_lock);
            }
        }
    }
    

    This class is thread-safe, but it still doesn't enforce the order - and there are many ways to implement that. From the same book:

    ConcurrentQueue and ConcurrentStack are lock-free; these both internally use Interlocked methods to manipulate the collection.

    So, you must remove the Monitor usage and add a check that your thread is the next one allowed to enqueue an item. This can be done by maintaining the number of current adders and the current queue length in private fields. You should make these fields volatile.
    You can use Interlocked.Increment to claim your place among the adders (and Interlocked.Read to fetch the current queue length, if it is a long).
    After that, you have a unique number for your thread: current length + current adders. Use the SpinWait class to spin until the current length becomes equal to your number, then enqueue the item and leave the Enqueue method.
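A minimal sketch of that spin-based ticketing idea, assuming field names (_nextTicket, _nowServing) of my own rather than anything from the book:

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Each producer takes a ticket, then spins until it is that ticket's turn,
// so items enter the ConcurrentQueue in ticket (arrival) order.
public sealed class TicketedQueue<T>
{
    private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();
    private int _nextTicket = -1;   // last ticket handed out
    private int _nowServing;        // ticket currently allowed to enqueue

    public void Enqueue(T item)
    {
        // Interlocked.Increment returns the new value, so the first ticket is 0.
        int myTicket = Interlocked.Increment(ref _nextTicket);
        var spinner = new SpinWait();
        while (Volatile.Read(ref _nowServing) != myTicket)
            spinner.SpinOnce();                         // wait for our turn
        _queue.Enqueue(item);
        Volatile.Write(ref _nowServing, myTicket + 1);  // pass the turn on
    }

    public bool TryDequeue(out T item)
    {
        return _queue.TryDequeue(out item);
    }
}
```

Note this serialises producers and burns CPU while spinning, which is exactly the overhead/contention trade-off the previous answer warns about; it is only a sketch of the technique, not production code.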

I strongly recommend that you study this book's chapters on multithreading and locks - you'll be much better prepared for this type of question. Also try searching for similar questions here. For example:

Creating a blocking Queue<T> in .NET?

Community
VMAtm