I am working on a small library which uses the Task Parallel Library to run parallel searches for a solution. The current design works along these lines:
- a ConcurrentQueue receives the results of Searches,
- a main Task runs as a loop on a background thread. When a new Solution arrives in the Queue, the loop dequeues and processes it, and then launches a new Search on a new Task,
- a Search is launched in its own Task and returns its result to the Queue once complete (a rough sketch of this design follows the list).
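
For concreteness, here is a simplified sketch of the current design. The `Solution`, `Process`, and `RunSearch` names are placeholders for my actual types and methods:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class Solution { /* placeholder for whatever a Search produces */ }

class SearchCoordinator
{
    private readonly ConcurrentQueue<Solution> _results = new ConcurrentQueue<Solution>();

    // The main loop runs on its own long-running Task and reacts to new results.
    public Task StartMainLoop(CancellationToken token)
    {
        return Task.Factory.StartNew(() =>
        {
            while (!token.IsCancellationRequested)
            {
                Solution solution;
                if (_results.TryDequeue(out solution))
                {
                    Process(solution);   // process the incoming result
                    LaunchSearch();      // then kick off a new Search
                }
                else
                {
                    Thread.Sleep(10);    // nothing queued yet; avoid spinning
                }
            }
        }, token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
    }

    // Each Search runs in its own Task and enqueues its result when done.
    public void LaunchSearch()
    {
        Task.Factory.StartNew(() =>
        {
            Solution result = RunSearch();   // the CPU-bound work
            _results.Enqueue(result);
        });
    }

    private void Process(Solution solution) { /* consume the solution */ }
    private Solution RunSearch() { return new Solution(); }
}
```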
[Edit based on Eric J's answer: the activities involved are entirely CPU bound; there is no I/O involved.]
The framework works well as it stands. However, I have complete control over how many Search tasks get triggered, and my understanding is that while the TPL copes fine at the moment, shoving a very large number of Searches at the system will not yield more parallelism: it is bound by the number of cores available, and past a certain point the extra tasks become counter-productive.
My question is the following: can I "help" the TPL by limiting the number of Search tasks that run concurrently, and if so, how would I go about determining what that upper limit should be? Would it be appropriate to base it, for instance, on System.Environment.ProcessorCount?
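
To make the question concrete, the kind of throttling I have in mind looks roughly like this, using a SemaphoreSlim sized from Environment.ProcessorCount. This is just a sketch of the idea, not something I have settled on; `runSearch` stands in for the actual Search work:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThrottledSearcher
{
    // Cap the number of Searches running at the same time to the core count.
    private readonly SemaphoreSlim _slots =
        new SemaphoreSlim(Environment.ProcessorCount);

    public void LaunchSearch(Action runSearch)
    {
        _slots.Wait();                  // block until a slot is free
        Task.Factory.StartNew(() =>
        {
            try
            {
                runSearch();            // the CPU-bound Search
            }
            finally
            {
                _slots.Release();       // free the slot for the next Search
            }
        });
    }
}
```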