
I am implementing a thread-safe bounded blocking queue. There are two ways I can think of to design it.

Approach 1:

import java.util.LinkedList;
import java.util.Queue;

class BoundedBlockingQueue {
    int capacity;
    Queue<Integer> queue;

    public BoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
        this.queue = new LinkedList<>();
    }

    public void enqueue(int element) throws InterruptedException {
        while(queue.size() == capacity);
        queue.add(element);
    }

    public int dequeue() throws InterruptedException {
        while(queue.size() == 0);
        return queue.remove();
    }

    public int size() {
        return queue.size();
    }
}

Here, while enqueuing, the thread keeps spinning in the while loop (notice the immediate ; right after the while condition) and proceeds only when queue.size() drops below capacity. The same logic applies to dequeuing when queue.size() equals 0.

The second way to design the same thing is to use the synchronized keyword, as follows:

Approach 2:

import java.util.LinkedList;
import java.util.Queue;

class BoundedBlockingQueue {
    int capacity;
    Queue<Integer> queue;

    public BoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
        this.queue = new LinkedList<>();
    }

    public void enqueue(int element) throws InterruptedException {
        synchronized(queue){
            while(queue.size() == capacity) queue.wait();
            queue.add(element);
            queue.notifyAll();
        }
    }

    public int dequeue() throws InterruptedException {
        synchronized(queue){
            while(queue.size() == 0) queue.wait();
            int val = queue.remove();
            queue.notifyAll();
            return val;
        }
    }

    public int size() {
        return queue.size();
    }
}

Here, we make the thread wait in the same situations, and it proceeds only when another thread notifies it to do so. The only difference is that Approach 2 uses the synchronized keyword while Approach 1 does not.

The observation is that Approach 1 takes significantly more runtime than Approach 2. Why is that? Isn't the underlying logic of both approaches exactly the same, and if so, why is Approach 1 so much slower?

Any help would be highly appreciated.

Sanjiban Bairagya
  • We'd need to see the test harness you used to time these routines. – markspace Oct 27 '19 at 17:15
  • Some advice on making time tests can be found here: https://stackoverflow.com/questions/504103/how-do-i-write-a-correct-micro-benchmark-in-java – markspace Oct 27 '19 at 17:16
  • I'm submitting the code in https://leetcode.com/problems/design-bounded-blocking-queue/ and seeing that with Approach 1, it is timing out, but with Approach 2, it is getting accepted with runtime that is faster than 99.07% of all Java submissions in that platform. Does that help? – Sanjiban Bairagya Oct 27 '19 at 17:21
  • Not really. Whenever you have a problem / bug / need to understand something, the first step is to reproduce it in your own environment. I have no idea how LeetCode works under the hood, so we need to see some code that at least reproduces the problem. Tangent: re-reading your approach 1 code, it does not appear to be thread safe, at all. Maybe it's just messing up. – markspace Oct 27 '19 at 17:26
  • Approach 1: I don't get where this is in any way thread-safe... and busy-waiting strategies are usually really bad practice too... Plus the running thread will block once either the producer or the consumer stops but the other does not... So you cannot close your app in a clean way, unless you specify some special close() method for checking/setting flags... As to Approach 2... I do not believe that using synchronized AND wait/notify gives you any real advantage, because for wait() to return, the mutexes have to be unlocked/locked again anyway... why not use some of Java's default implementations? – JayC667 Oct 27 '19 at 17:44
  • And why not study the different varieties of bounded blocking queues in the JDK? (A sketch follows these comments.) – Curiosa Globunznik Oct 27 '19 at 17:48
  • One more hint: there are really good implementations for (de)queues out there that are completely thread-safe, but only have to use mutexes when adding to an empty queue or pulling the last element from the queue. So unless you use it like a stack (LIFO), this will be really, really fast, especially for FIFO operations – JayC667 Oct 27 '19 at 17:48
  • Thanks guys. My bad. Closing this. – Sanjiban Bairagya Oct 27 '19 at 18:18
  • A very good and very short explanation can be found in the Java Language Specification [here](https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.3). – VGR Oct 27 '19 at 20:43
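
As a couple of the comments above suggest, the JDK already ships bounded blocking queues, so outside of an exercise you would normally not hand-roll this at all. Below is a minimal sketch of that route, built on java.util.concurrent.ArrayBlockingQueue (the class name JdkBoundedBlockingQueue is just an illustration, not part of the question):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class JdkBoundedBlockingQueue {
    // ArrayBlockingQueue is bounded and thread-safe out of the box
    private final BlockingQueue<Integer> queue;

    JdkBoundedBlockingQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    public void enqueue(int element) throws InterruptedException {
        queue.put(element);   // blocks while the queue is full
    }

    public int dequeue() throws InterruptedException {
        return queue.take();  // blocks while the queue is empty
    }

    public int size() {
        return queue.size();
    }
}

put and take handle the blocking and the signalling internally, so there is no hand-rolled wait/notify logic left to get wrong.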

2 Answers


Your first approach is not only not thread-safe, it also has a bug (the methods might see uninitialized values), because your fields are not final. I can only strongly advise you to read the specification before trying to implement your own thread-safe classes.

Maybe capacity is still initialized to the default value 0 when enqueue is called and that's why your code times out, who knows.

The second point is that busy waiting is usually considered a really bad thing. Plus you cannot guarantee that the other threads get the correct value when calling size(). They might see a stale value and try to insert or remove items even though the queue is full or empty, respectively.

Take a look at Lock and Condition if you want to implement thread-safe queue-like structures.
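
As a rough illustration of that suggestion (a sketch only; the class and field names are mine, not a reference implementation), a bounded blocking queue built on ReentrantLock with two Condition objects could look like this:

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class LockBasedBoundedBlockingQueue {
    private final int capacity;
    private final Queue<Integer> queue = new LinkedList<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // signalled when space frees up
    private final Condition notEmpty = lock.newCondition();  // signalled when an element arrives

    LockBasedBoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
    }

    public void enqueue(int element) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) {
                notFull.await();          // releases the lock while waiting
            }
            queue.add(element);
            notEmpty.signal();            // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public int dequeue() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();         // releases the lock while waiting
            }
            int val = queue.remove();
            notFull.signal();             // wake one waiting producer
            return val;
        } finally {
            lock.unlock();
        }
    }

    public int size() {
        lock.lock();
        try {
            return queue.size();
        } finally {
            lock.unlock();
        }
    }
}

Using two separate conditions means a producer only wakes consumers and vice versa, which avoids the broadcast of notifyAll in the synchronized version.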

The third point is that, if you do spin, you should call Thread.onSpinWait() inside the loop instead of using an empty while body.
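
For completeness, a hedged sketch of what such a spin loop could look like (Thread.onSpinWait exists since Java 9; the class SpinWaitExample and its field are made up for this illustration). Note that the flag still has to be volatile or atomic, otherwise the spinning thread may never observe the writer's update at all:

import java.util.concurrent.atomic.AtomicBoolean;

class SpinWaitExample {
    // Must be atomic (or volatile); a plain boolean field gives no visibility guarantee.
    private final AtomicBoolean ready = new AtomicBoolean(false);

    void markReady() {
        ready.set(true);
    }

    void awaitReady() {
        while (!ready.get()) {
            Thread.onSpinWait();  // hint to the runtime/CPU that this is a busy-wait loop
        }
    }
}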

I'm not sure whether your second approach is correct either, because queue is not final and might still be null when synchronized(queue) is called. I'm not sure whether that can really happen here, but there is no reason for queue not to be final anyway.

Alex R

The above code has a race condition, so the first example does not provide the same synchronization guarantees as the second one.

  • Imagine threads 1 and 2 both running on the line while(queue.size() == 0);.
  • Thread 3 then calls enqueue().
  • Threads 1 and 2 simultaneously observe that the while-loop condition is no longer true, since the queue size is now 1, and both try to call queue.remove() at the same time. One of the threads will throw an exception (even if LinkedList were thread-safe). I assume that this is not the contract you are trying to achieve.

In the first solution you have applied a spinlock (busy waiting) to solve the synchronization problem. This can be more effective than OS-dependent synchronization constructs (semaphores, mutexes, ...) only in specific cases. More about this here

samvel1024