
When we talk about processing asynchronous events using an ExecutorService, why does creating a new fixed thread pool involve the use of a LinkedBlockingQueue? The arriving events are not dependent on each other at all, so why use a queue, given that the consumer threads would still contend for the take lock? Why doesn't the Executors class offer some hybrid data structure (such as a concurrent Map implementation) where there is no need for a take lock in most cases?
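
For context, Executors.newFixedThreadPool essentially builds the following (a sketch of what the JDK factory method returns, showing the LinkedBlockingQueue I am asking about):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolExample {
    public static void main(String[] args) {
        int nThreads = 4;
        // Roughly what Executors.newFixedThreadPool(nThreads) constructs:
        // a ThreadPoolExecutor whose work queue is an unbounded
        // LinkedBlockingQueue, which is the queue the question is about.
        ExecutorService pool = new ThreadPoolExecutor(
                nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        pool.execute(new Runnable() {
            public void run() {
                System.out.println("event handled on " + Thread.currentThread().getName());
            }
        });
        pool.shutdown();
    }
}
```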

nogard
userx
    I'd assume it's because of three things: a) it's a blocking data structure; b) this queue type is unbounded; c) FIFO order is guaranteed. – Andrew Logvinov Apr 28 '13 at 08:21
  • @AndrewLogvinov - Hi, thanks for your response. The question is not why one of the methods uses an LBQ, but rather why there isn't another, more flexible method where, instead of an LBQ, there could be a hybrid data structure that implements Map? – userx Apr 28 '13 at 08:25
  • If using a Map, what would be the key, and what would be the value? – Alexei Kaigorodov Apr 28 '13 at 10:53

1 Answer


There is a very good reason why the thread pool executor works with a BlockingQueue (by the way, you are not obliged to use the LinkedBlockingQueue implementation; you can use other implementations of BlockingQueue). The queue needs to be blocking in order to suspend worker threads when there are no tasks to execute. This blocking is done by waiting on condition variables, so waiting worker threads consume no CPU resources while the queue is empty.

If you used a non-blocking queue in the thread pool, how would worker threads poll for tasks to execute? They would have to implement some kind of polling loop, which wastes CPU resources unnecessarily (it amounts to "busy waiting").
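
To make the contrast concrete, here is a minimal sketch (class and method names are mine, not from the JDK) of a worker loop on a BlockingQueue versus one on a non-blocking queue:

```java
import java.util.Queue;
import java.util.concurrent.BlockingQueue;

public class WorkerLoops {

    // Blocking variant: take() parks the thread on a condition variable, so an
    // idle worker consumes no CPU and wakes up only when a task is enqueued.
    static void blockingWorker(BlockingQueue<Runnable> tasks) throws InterruptedException {
        while (true) {
            Runnable task = tasks.take(); // suspends until a task arrives
            task.run();
        }
    }

    // Non-blocking variant: poll() returns null immediately when the queue is
    // empty, so the worker has to spin (or sleep and retry); this is the
    // "busy waiting" mentioned above.
    static void pollingWorker(Queue<Runnable> tasks) {
        while (true) {
            Runnable task = tasks.poll();
            if (task != null) {
                task.run();
            }
            // else: loop again immediately, burning CPU, or sleep and add latency
        }
    }
}
```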

UPDATE:

Ok, now I fully understand the use case. You still need a blocking collection anyway. The reason is basically the same: since you are implementing Producer-Consumer, you need some means for the worker threads to wait for messages to arrive, and that simply cannot be done without a mutex + condition variable (or simply a BlockingQueue).
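
To illustrate what "mutex + condition variable" means here, this is a stripped-down, hypothetical sketch of how a blocking take can be built on java.util.concurrent.locks (a real BlockingQueue such as LinkedBlockingQueue does essentially this internally, with much more care):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical, minimal illustration of the mutex + condition variable pattern:
// consumers wait on the condition while the queue is empty and are signalled
// when a producer adds a task.
public class TinyBlockingQueue<T> {
    private final Deque<T> items = new ArrayDeque<T>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    public void put(T item) {
        lock.lock();
        try {
            items.addLast(item);
            notEmpty.signal(); // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await(); // releases the lock and suspends the thread
            }
            return items.removeFirst();
        } finally {
            lock.unlock();
        }
    }
}
```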

Regarding a map - yes, I understand how you want to use it, but unfortunately there is no such implementation provided. Recently I solved a similar problem: I needed to group incoming tasks by some criteria and execute the tasks within each group serially. As a result I implemented my own GroupThreadPoolExecutor that does this grouping. The idea is simple: group incoming tasks into a map, and add a group's next task to the executor queue when the previous task from that group completes.
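
The grouping idea looks roughly like the following sketch (this is not the actual GroupThreadPoolExecutor; class and method names are just for illustration). Tasks with the same key run serially, while tasks with different keys run in parallel on the backing pool:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class GroupingExecutor {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    // Tasks buffered per group while a task from that group is still running.
    private final Map<String, Queue<Runnable>> pending = new HashMap<String, Queue<Runnable>>();

    public synchronized void submit(final String groupKey, final Runnable task) {
        Queue<Runnable> queue = pending.get(groupKey);
        if (queue == null) {
            // No task from this group is in flight: start it immediately.
            pending.put(groupKey, new ArrayDeque<Runnable>());
            schedule(groupKey, task);
        } else {
            // A task from this group is already running: buffer until it completes.
            queue.add(task);
        }
    }

    private void schedule(final String groupKey, final Runnable task) {
        pool.execute(new Runnable() {
            public void run() {
                try {
                    task.run();
                } finally {
                    onTaskDone(groupKey);
                }
            }
        });
    }

    private synchronized void onTaskDone(String groupKey) {
        Runnable next = pending.get(groupKey).poll();
        if (next != null) {
            schedule(groupKey, next); // hand the group's next task to the pool
        } else {
            pending.remove(groupKey); // group is idle again
        }
    }
}
```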

There is a big discussion here - I think it's relevant to your question.

nogard
  • @nogard - Imagine there is some xyz implementation of a concurrent hash map where events arrive with a primary key that is not simply sequential. Say the first event object arrives with a number 5000 and a unique identifier of 1, and the second event object arrives with a number 10000 and a unique identifier of 1. Based on hashing, event 5000 and event 10000 may get stored in different parts of the array. Ideally, if 2 different threads are polling different parts of the array, there is no need for a take lock! – userx Apr 28 '13 at 12:42
  • Well, thanks for pointing me to the link and your answer as well. Grouping the tasks looks fine (although I have yet to think about it deeply), but I don't want to end up in a situation where I am coupling a particular group to a particular thread at all. Let's say that in a contract 'x', whose events 1, 2, 3, etc. arrive in order, I want them to be processed in parallel by three threads, but delivered to the client in order of arrival. I was hoping to sort the output of those threads and then put them in the final queue for sending. – userx Apr 28 '13 at 16:58