64
new SynchronousQueue()
new LinkedBlockingQueue(1)

What is the difference? When should I use SynchronousQueue instead of LinkedBlockingQueue with capacity 1?

Nagaraj Tantri
  • 5,172
  • 12
  • 54
  • 78
Anton
  • 5,831
  • 3
  • 35
  • 45

6 Answers

64

The SynchronousQueue is more of a handoff, whereas the LinkedBlockingQueue allows just a single element. The difference is that a put() call to a SynchronousQueue will not return until there is a corresponding take() call, but a put() call to an empty LinkedBlockingQueue of size 1 will return immediately.
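
A small self-contained sketch of that difference (the class and variable names are just for illustration): put() to the LinkedBlockingQueue(1) returns at once, while put() to the SynchronousQueue only returns once another thread has called take():

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.SynchronousQueue;

    public class HandoffDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> lbq = new LinkedBlockingQueue<>(1);
            lbq.put("item"); // returns immediately, the element is stored in the queue
            System.out.println("LBQ put returned, size = " + lbq.size()); // size = 1

            BlockingQueue<String> sq = new SynchronousQueue<>();
            Thread consumer = new Thread(() -> {
                try {
                    System.out.println("took: " + sq.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();
            sq.put("item"); // blocks until the consumer thread's take() above accepts it
            System.out.println("SQ put returned, size = " + sq.size()); // size is always 0
            consumer.join();
        }
    }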

I can't say that I have ever used the SynchronousQueue directly myself, but it is the default BlockingQueue used by the Executors.newCachedThreadPool() methods. It's essentially the BlockingQueue implementation for when you don't really want a queue (you don't want to maintain any pending data).

jtahlborn
  • 52,909
  • 5
  • 76
  • 118
  • Ok, so the main idea is that it blocks the reading thread until the result is ready and blocks the writing thread until the reading thread is ready to read. Could you please provide a real-life example of when it could be useful – Anton Dec 21 '11 at 14:58
  • 13
    @Umar When several threads produce objects for the queue faster than consumers can consume and process them - a queue can overgrow in size. SynchronousQueue helps to control communication without any specific code in producers. In real life it's similar to a meeting where one person answers questions asked by others. Consider SynchronousQueue as a kind of secretary. – andrey Dec 21 '11 at 15:24
  • I do use SyncQ quite a bit, it's a good handoff abstraction and a relatively good implementation (it does allocate while waiting) – bestsss Dec 21 '11 at 18:51
  • 1
    The case where I've used SynchronousQueue is in "pipelining" scenarios. Let's say that you have a processing pipeline of stages where some data block gets handed down the pipeline starting with a "producer" and ending up with a "consumer". Assuming all of the stages are somewhat deterministic, having an actual queue is overkill. All you need is a handoff between the stages. This is important if the data blocks are large, because you don't want to create too many of them. This is analogous to the old "double buffering" strategies. – Wheezil Jul 12 '15 at 01:43
  • 1
    A really concrete example is database loader. Suppose you want to scan a delimited text file and load into a database. You have two stages -- one that scans the file and produces blocks of "records" for insertion (the record block could be a two-dimensional array of Objects), and one that calls JDBC for record insertion, each block getting its own transaction/batch. These things overlap very nicely. – Wheezil Jul 12 '15 at 01:50
  • @Wheezil - it sounds like the processing you are describing is basically single-threaded. What's the point of the queues and multiple threads then? – jtahlborn Jul 12 '15 at 03:16
  • 2
    If you break an otherwise single-threaded operation into stages, pipeline the stages together with work items handed off between them, and run each stage in its own thread, you will get concurrency from the overlap. In this case "scan CSV" and "insert records" are the two stages, and they can proceed concurrently to saturate the database insert (a sketch follows these comments). – Wheezil Jul 12 '15 at 22:48
  • @Wheezil - ah, now I follow. That's a nice application of the idea. – jtahlborn Jul 13 '15 at 14:48
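
A rough sketch of the pipelining idea described in these comments, with the "scan" and "insert" stages stubbed out (the class name, the END sentinel and the printed "insert" are purely illustrative):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.SynchronousQueue;

    public class PipelineDemo {
        // Sentinel marking the end of input (a convention made up for this sketch).
        private static final List<String> END = new ArrayList<>();

        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<List<String>> handoff = new SynchronousQueue<>();

            // Stage 1: "scan the file" and hand each block of records to stage 2.
            Thread scanner = new Thread(() -> {
                try {
                    for (int block = 0; block < 3; block++) {
                        List<String> records = List.of("row-" + block + "-a", "row-" + block + "-b");
                        handoff.put(records); // blocks until the loader takes the block
                    }
                    handoff.put(END);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            // Stage 2: "insert into the database" (stubbed out as a println here).
            Thread loader = new Thread(() -> {
                try {
                    List<String> records;
                    while ((records = handoff.take()) != END) {
                        System.out.println("inserting batch of " + records.size() + " record(s)");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            scanner.start();
            loader.start();
            scanner.join();
            loader.join();
        }
    }
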
10

As far as I understand, the code above does the same thing.

No, the code is not the same at all.

SynchronousQueue requires a waiter (or waiters) for offer to succeed; LinkedBlockingQueue will keep the item, and offer will finish immediately even if there is no waiter.

SyncQ is useful for task handoff. Imagine you have a list of pending tasks and 3 threads available waiting on the queue: try offer() with 1/4 of the list; if it is not accepted, the thread can run the tasks on its own. [The last 1/4 should be handled by the current thread, if you wonder why 1/4 and not 1/3.]

Think of trying to hand a task to a worker: if none is available, you have the option of executing the task on your own (or throwing an exception). On the contrary, with LBQ, leaving the task in the queue doesn't guarantee any execution.
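
A minimal sketch of that hand-off-or-run-it-yourself pattern, assuming a hypothetical dispatch() helper and a worker thread blocked in take():

    import java.util.concurrent.SynchronousQueue;

    public class HandoffOrRunDemo {
        static final SynchronousQueue<Runnable> queue = new SynchronousQueue<>();

        // Hand the task to a waiting worker if there is one; otherwise run it ourselves.
        static void dispatch(Runnable task) {
            if (!queue.offer(task)) { // offer() succeeds only if a worker is blocked in take()
                task.run();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        queue.take().run(); // worker waits here between tasks
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // worker shut down
                }
            });
            worker.setDaemon(true);
            worker.start();

            Thread.sleep(100); // give the worker time to block in take()
            dispatch(() -> System.out.println("ran by " + Thread.currentThread().getName()));
            // Depending on timing, this one may be handed off or run by the main thread:
            dispatch(() -> System.out.println("ran by " + Thread.currentThread().getName()));
            Thread.sleep(100); // let the daemon worker finish printing before the JVM exits
        }
    }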

Note: the case with consumers and publishers is the same, i.e. the publisher may block and wait for consumers, but once offer or poll returns, it is ensured that the task/element will be handled.

bestsss
  • 11,796
  • 3
  • 53
  • 63
8

One reason to use SynchronousQueue is to improve application performance. If you must have a hand-off between threads, you will need some synchronization object. If you can satisfy the conditions required for its use, SynchronousQueue is the fastest synchronization object I have found. Others agree. See: Implementation of BlockingQueue: What are the differences between SynchronousQueue and LinkedBlockingQueue

snadata
  • 81
  • 1
  • 1
4

[Just trying to put it in (possibly) clearer words.]

I believe the SynchronousQueue API docs state things very clearly:

  1. A blocking queue in which each insert operation must wait for a corresponding remove operation by another thread, and vice versa.
  2. A synchronous queue does not have any internal capacity, not even a capacity of one. You cannot peek at a synchronous queue because an element is only present when you try to remove it; you cannot insert an element (using any method) unless another thread is trying to remove it; you cannot iterate as there is nothing to iterate.
  3. The head of the queue is the element that the first queued inserting thread is trying to add to the queue; if there is no such queued thread then no element is available for removal and poll() will return null.

And BlockingQueue API docs:

  1. A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.

So the difference is evident, yet critically subtle, especially the 3rd point below:

  1. If the queue is empty when you are retrieving from a BlockingQueue, the operation blocks until a new element is inserted. Also, if the queue is full when you are inserting into a BlockingQueue, the operation blocks until an element is removed from the queue and space is made for the new one. In a SynchronousQueue, however, the operation blocks until the opposite operation (insert and remove are opposites of each other) occurs on another thread. So, unlike BlockingQueue, the blocking depends on the existence of the opposite operation, instead of on the existence or non-existence of an element.
  2. As the blocking depends on the existence of the opposite operation, the element never really gets inserted into the queue. That's why the second point: "A synchronous queue does not have any internal capacity, not even a capacity of one."
  3. As a consequence, peek() always returns null (again, check the API doc) and iterator() returns an empty iterator in which hasNext() always returns false (API doc). However, note that poll() neatly retrieves and removes the head of this queue if another thread is currently making an element available; if no such thread exists, it returns null (API doc).

Finally, a small note: both the SynchronousQueue and LinkedBlockingQueue classes implement the BlockingQueue interface.
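
A tiny sketch of these points (the class name is just for illustration), showing what the "no internal capacity" rule means for offer(), peek(), poll(), size() and iterator() when no other thread is waiting:

    import java.util.concurrent.SynchronousQueue;

    public class NoCapacityDemo {
        public static void main(String[] args) {
            SynchronousQueue<String> queue = new SynchronousQueue<>();

            System.out.println(queue.offer("x"));           // false: no thread is waiting in take()
            System.out.println(queue.peek());               // null: there is never anything to peek at
            System.out.println(queue.poll());               // null: no inserting thread is waiting
            System.out.println(queue.size());               // 0: the queue has no internal capacity
            System.out.println(queue.iterator().hasNext()); // false: nothing to iterate
        }
    }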

MsA
  • 2,599
  • 3
  • 22
  • 47
1

SynchronousQueue works in a similar fashion to LinkedBlockingQueue, with the following major differences: 1) the size of a SynchronousQueue is 0; 2) the put() method will only insert an element if the take() method is able to fetch that element from the queue at the same moment, i.e. an element cannot be inserted if the consumer's take() call is going to take some time to arrive.

SynchronousQueue - insert only when someone is going to receive it at that very moment.

0

Synchronous queues are basically used for handoff purposes. They do not have any capacity, and a put operation blocks until some other thread performs a take operation.

If we want to safely share a variable between two threads, we can put that variable into a synchronous queue and let the other thread take it from the queue.

Code Sample from https://www.baeldung.com/java-synchronous-queue

    ExecutorService executor = Executors.newFixedThreadPool(2);
    SynchronousQueue<Integer> queue = new SynchronousQueue<>();

    Runnable producer = () -> {
        Integer producedElement = ThreadLocalRandom
          .current()
          .nextInt();
        try {
            queue.put(producedElement);
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
    };

    Runnable consumer = () -> {
        try {
            Integer consumedElement = queue.take();
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
    };

    executor.execute(producer);
    executor.execute(consumer);

    executor.awaitTermination(500, TimeUnit.MILLISECONDS);
    executor.shutdown();
    assertEquals(queue.size(), 0);

SynchronousQueue is also used by Executors.newCachedThreadPool() to achieve the effect of unlimited (Integer.MAX_VALUE) thread creation as tasks arrive. The cached pool has a core size of 0 and a maximum pool size of Integer.MAX_VALUE, with a SynchronousQueue as the work queue.

As tasks arrive at the queue, subsequent tasks are blocked until the first one is taken out. Since the queue has no capacity, the thread pool creates a thread, and this thread takes the task, allowing more tasks to be offered to the queue. This continues until thread creation reaches maxPoolSize. Based on the keep-alive timeout, idle threads may be terminated and new ones created, without exceeding maxPoolSize.
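
For reference, this is roughly how such a pool is wired together; Executors.newCachedThreadPool() is essentially equivalent to constructing a ThreadPoolExecutor like this:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Core size 0, effectively unbounded maximum, 60-second keep-alive for idle
    // threads, and a SynchronousQueue as the work queue: a task is never parked in
    // the queue; each submission either reaches an idle thread or causes a new one.
    ExecutorService cachedPool = new ThreadPoolExecutor(
            0, Integer.MAX_VALUE,
            60L, TimeUnit.SECONDS,
            new SynchronousQueue<Runnable>());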

Gautam Tadigoppula
  • 932
  • 11
  • 13