0

In event processing, one function puts values into a collection and another removes them from the same collection. Items must be placed into the collection in the order they arrive from the source (sockets) and read out in the same order, or else the results will change.

Queue is the collection most people recommend, but is the queue locked while an item is being added, so that the other function has to wait until the add completes? That would make it inefficient, and the operational latency would grow over time.

For example, one thread reads from a queue and another writes to the same queue. Only one operation can proceed on the queue at a time, until the lock is released. Is there any data structure that avoids this?
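For illustration, this is roughly the single-lock pattern I want to avoid (the class and method names are just a sketch, not real code from my project):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the problem: every put/take contends on one lock,
// so the reader and writer block each other.
class SynchronizedQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();

    public synchronized void put(T item) {   // writer holds the lock
        items.addLast(item);
    }

    public synchronized T take() {           // reader must wait for the lock
        return items.pollFirst();            // may return null if empty
    }
}
```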

user64287
  • 53
  • 1
  • 8
  • `Queue` isn't a collection, it's an interface, with several implementations. Pick the one that suits you. – user207421 Oct 15 '16 at 09:41
  • My intent is to ask whether any implementation in the collections framework can be used without the get and put functions blocking the collection. – user64287 Oct 15 '16 at 09:48
  • And that question is answered by looking up the Javadoc of all the collections that implement Queue. Not by making a series of incorrect statements about the `Queue` interface. – user207421 Oct 16 '16 at 01:14

2 Answers

3

ConcurrentLinkedQueue is one example. Please see the other classes in java.util.concurrent as well.
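A minimal sketch of the lock-free, non-blocking usage (assuming one producer thread and one consumer thread, as in the question; the item names are illustrative):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class NonBlockingExample {
    public static void main(String[] args) throws InterruptedException {
        // Lock-free, unbounded FIFO queue; offer() and poll() never block.
        Queue<String> queue = new ConcurrentLinkedQueue<>();

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                queue.offer("event-" + i);   // always succeeds, no lock held
            }
        });

        Thread consumer = new Thread(() -> {
            int received = 0;
            while (received < 5) {
                String item = queue.poll();  // returns null when the queue is empty
                if (item != null) {
                    System.out.println(item);
                    received++;
                }
                // in real code you would back off instead of spinning hard
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```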

There are even more performant third-party libraries for specific cases, e.g. the LMAX Disruptor.

Ivan
  • 3,781
  • 16
  • 20
2

In fact, the LinkedBlockingQueue is the easiest to use in many cases because of its blocking put and take methods, which wait until there is an item to take, or until there is space for another item if an upper size limit (the capacity) has been set. Setting a capacity is optional; without one, the queue can grow indefinitely.
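A minimal producer/consumer sketch with the blocking put and take (the capacity of 1024 and the item names are just examples):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingExample {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(1024);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("event-" + i);          // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println(queue.take()); // blocks until an item arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```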

The ArrayBlockingQueue, on the other hand, is the most efficient and beautiful of them: it internally uses a ring buffer and therefore must have a fixed capacity. It is way faster than the LinkedBlockingQueue, yet far from the maximum throughput you can achieve with a disruptor :)

In both cases, blocking is purely optional on both sides: both queues also implement the non-blocking Queue API (offer, poll, peek), and the blocking and non-blocking calls can be mixed freely.
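For instance, with an ArrayBlockingQueue (the capacity and timeout below are just examples), the producer side can use a non-blocking offer, with or without a timeout, while the consumer side uses the blocking take:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class MixedExample {
    public static void main(String[] args) throws InterruptedException {
        // Fixed-capacity queue backed by a ring buffer (an array used circularly).
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);

        // Non-blocking side: offer() returns false instead of waiting forever.
        boolean accepted = queue.offer("event-1");                                  // immediate
        boolean acceptedLater = queue.offer("event-2", 50, TimeUnit.MILLISECONDS);  // bounded wait

        System.out.println("accepted=" + accepted + ", acceptedLater=" + acceptedLater);

        // Blocking side: take() waits until an element is available.
        System.out.println(queue.take());
        System.out.println(queue.take());
    }
}
```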

In many cases, the queue is not the bottleneck, and when it really is, using a disruptor is often the sensible thing to do. It is not a queue but a ring buffer shared between participating threads with different roles, i.e. typically one producer, n workers, and one consumer. It is a bit more cumbersome to set up, but speeds of around 100 million transactions per second are possible on modern hardware because it does not rely on expensive volatile variables, instead using more subtle, machine-dependent ways of serialising reads and writes (you basically need to write parts of such a thing in assembler) :)
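A rough sketch of what the setup looks like with the LMAX Disruptor (assuming the 3.x API; the event class, buffer size, and handler below are illustrative, not taken from any particular project):

```java
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorSketch {
    // Mutable event object reused by the ring buffer to avoid allocation.
    static class ValueEvent {
        long value;
    }

    public static void main(String[] args) {
        // Ring buffer size must be a power of two.
        Disruptor<ValueEvent> disruptor = new Disruptor<>(
                ValueEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // Consumer: runs on its own thread managed by the disruptor.
        disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                System.out.println("Got " + event.value));

        RingBuffer<ValueEvent> ringBuffer = disruptor.start();

        // Producer: claims a slot, fills it, and publishes it.
        for (long i = 0; i < 10; i++) {
            long value = i;
            ringBuffer.publishEvent((event, sequence) -> event.value = value);
        }

        // Waits until published events are processed, then halts the handlers.
        disruptor.shutdown();
    }
}
```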

yeoman
  • 1,671
  • 12
  • 15
  • That makes it undesirable for stream processing, doesn't it? We are not using several consumers and producers, just 1 producer and 1 consumer, and if it's blocking, isn't it obvious that over time the latency it introduces into processing increases? – user64287 Oct 15 '16 at 09:46
  • The blocking is not about several producers and consumers, but using the blocking take() on the consumer side is practical because you don't need to poll / sleep / waste time or manually call Object.wait on the queue and have the producer notify you via Object.notify – yeoman Oct 15 '16 at 09:53
  • Blocking is optional on both sides. It's the ideal choice whenever there's no other work to do for a thread while it waits for input or for additional space to be freed up in the queue. – yeoman Oct 15 '16 at 09:54