I am implementing a thread-safe bounded blocking queue. I can think of two ways of designing it.
Approach 1:
import java.util.LinkedList;
import java.util.Queue;

class BoundedBlockingQueue {
    int capacity;
    Queue<Integer> queue;

    public BoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
        this.queue = new LinkedList<>();
    }

    public void enqueue(int element) throws InterruptedException {
        while (queue.size() == capacity); // busy-wait until there is room
        queue.add(element);
    }

    public int dequeue() throws InterruptedException {
        while (queue.size() == 0); // busy-wait until an element is available
        return queue.remove();
    }

    public int size() {
        return queue.size();
    }
}
Here, while enqueuing, the thread keeps spinning in the while loop (notice the immediate ; right after the while condition) and proceeds only when queue.size() drops below capacity. The dequeuing logic is analogous for the case where queue.size() is 0.
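Spelled out, that empty-bodied while in enqueue is equivalent to:

    while (queue.size() == capacity) {
        // empty body: re-check the condition in a tight loop, doing nothing else
    }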
The second way to design the same thing would be to use the synchronized keyword, as follows:
Approach 2:
import java.util.LinkedList;
import java.util.Queue;

class BoundedBlockingQueue {
    int capacity;
    Queue<Integer> queue;

    public BoundedBlockingQueue(int capacity) {
        this.capacity = capacity;
        this.queue = new LinkedList<>();
    }

    public void enqueue(int element) throws InterruptedException {
        synchronized (queue) {
            while (queue.size() == capacity) queue.wait(); // block until there is room
            queue.add(element);
            queue.notifyAll(); // wake any waiting consumers
        }
    }

    public int dequeue() throws InterruptedException {
        synchronized (queue) {
            while (queue.size() == 0) queue.wait(); // block until an element is available
            int val = queue.remove();
            queue.notifyAll(); // wake any waiting producers
            return val;
        }
    }

    public int size() {
        return queue.size();
    }
}
Here, the thread waits in the same situations and proceeds only when it is notified by another thread. The only difference is that Approach 2 uses the synchronized keyword together with wait()/notifyAll(), while Approach 1 does not.
The observation is that Approach 1 takes significantly more runtime than Approach 2. Why is that so? Isn't the underlying logic of both approaches the same? Why does Approach 1 take more runtime than Approach 2?
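For reference, I am timing the two versions with a harness roughly like the following (a minimal sketch: QueueTimer, the capacity of 10, and the iteration count are arbitrary choices for illustration, with one producer thread and one consumer thread):

    public class QueueTimer {
        public static void main(String[] args) throws InterruptedException {
            // Small capacity so the producer and consumer contend often.
            BoundedBlockingQueue q = new BoundedBlockingQueue(10);
            final int N = 100_000; // arbitrary iteration count

            Thread producer = new Thread(() -> {
                try { for (int i = 0; i < N; i++) q.enqueue(i); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            Thread consumer = new Thread(() -> {
                try { for (int i = 0; i < N; i++) q.dequeue(); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            long start = System.nanoTime();
            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
            System.out.printf("elapsed: %.1f ms%n", (System.nanoTime() - start) / 1e6);
        }
    }

The same harness is run against each version of BoundedBlockingQueue in turn.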
Any help would be highly appreciated.