
I have an interesting exercise to solve from my professor, but I need a little help so it doesn't become boring during the holidays.

The exercise is to

  • Create a multithreaded load balancer that reads 1 measuring point from each of 5 sensors every second (therefore 5 values per second).
  • Do some "complex" calculations with those values.
  • Print the results of the calculations on screen (e.g. the max or average value of sensors 1-5, of course multithreaded).
  • As an additional task, ensure that if, say, 500 sensors have to be read every second in the future, the computer doesn't give up on the job (load balancing).

I have a CSV text file with ~400 measuring points from 5 imaginary sensors.

What I think I have to do:

  1. Read the measuring points into an array
  2. Ensure thread safe access to that array
  3. Spawn a new thread for every value that calculates some math stuff
  4. Set a max value for maximum concurrent working threads

I am new to multithreaded applications in C#, but I think using the ThreadPool is the right way. I am currently working on a queue, and I may start it inside a Task so it won't block the application.
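Here is a rough sketch of the bounded-queue idea I'm considering (a minimal console sketch; the sensor values and the "complex" math are faked placeholders):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class LoadBalancerSketch
{
    static void Main()
    {
        // Bounded queue: the producer blocks if the consumers fall behind,
        // so even 500 sensors can't flood the process with unbounded work.
        using (var queue = new BlockingCollection<double>(boundedCapacity: 100))
        {
            var producer = Task.Factory.StartNew(() =>
            {
                var rng = new Random();
                for (int tick = 0; tick < 10; tick++)          // 10 simulated seconds
                {
                    for (int sensor = 0; sensor < 5; sensor++) // 5 sensors per tick
                        queue.Add(rng.NextDouble() * 100);     // fake sensor reading
                }
                queue.CompleteAdding();
            });

            // A fixed number of consumers caps the concurrent calculations.
            var consumers = new Task[Environment.ProcessorCount];
            for (int i = 0; i < consumers.Length; i++)
            {
                consumers[i] = Task.Factory.StartNew(() =>
                {
                    foreach (var value in queue.GetConsumingEnumerable())
                        Console.WriteLine("processed {0:F2}", Math.Sqrt(value)); // placeholder math
                });
            }

            Task.WaitAll(consumers);
            producer.Wait();
        }
    }
}
```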

What would you recommend?

Ryan Emerle
michael_j

1 Answer


There are a couple of environment dependencies here:

  • What version of .NET are you using?
  • What UI are you using - desktop (WPF/WinForms) or ASP.NET?

Let's assume that it's .NET 4.0 or higher and a desktop app.

Reading the sensors

In a WPF or WinForms application, I would use a single BackgroundWorker to read data from the sensors. 500 reads per second is trivial; even 500,000 is usually trivial. And the BackgroundWorker type is specifically designed for interacting with desktop apps, for example handing off results to the UI without worrying about cross-thread interactions.
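A minimal sketch of that hand-off in WinForms (the sensor read is faked, and the `resultLabel` control is a hypothetical Label on the form):

```csharp
using System;
using System.ComponentModel;
using System.Windows.Forms;

public class SensorForm : Form
{
    private readonly Label resultLabel = new Label { Dock = DockStyle.Fill };
    private readonly BackgroundWorker worker = new BackgroundWorker
    {
        WorkerReportsProgress = true
    };

    public SensorForm()
    {
        Controls.Add(resultLabel);

        worker.DoWork += (s, e) =>
        {
            // Runs on a background thread - never touch controls here.
            var rng = new Random();
            while (true)
            {
                var readings = new double[5];
                for (int i = 0; i < readings.Length; i++)
                    readings[i] = rng.NextDouble() * 100;  // fake sensor read
                worker.ReportProgress(0, readings);        // marshalled to the UI thread
                System.Threading.Thread.Sleep(1000);       // one read per second
            }
        };

        worker.ProgressChanged += (s, e) =>
        {
            // Runs on the UI thread - safe to touch controls here.
            var readings = (double[])e.UserState;
            resultLabel.Text = string.Join(", ", readings);
        };

        worker.RunWorkerAsync();
    }
}
```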

Processing the calculations

Then you need to process the "complex" calculations. This depends on how long-lived these calculations are. If we assume they're short-lived (say less than 1 second each), then I think using the TaskScheduler and the standard ThreadPool will be fine. So you create a Task for each calculation, and then let the TaskScheduler take care of allocating tasks to threads.
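A sketch of one Task per calculation, with the placeholder math standing in for your "complex" work:

```csharp
using System;
using System.Threading.Tasks;

class CalculationSketch
{
    static void Main()
    {
        double[] readings = { 12.5, 47.0, 3.2, 88.1, 56.4 }; // one second's worth of values

        // One Task per value; the default TaskScheduler queues them on the ThreadPool.
        var tasks = new Task<double>[readings.Length];
        for (int i = 0; i < readings.Length; i++)
        {
            double value = readings[i]; // capture a copy, not the loop variable
            tasks[i] = Task.Factory.StartNew(() => ComplexCalculation(value));
        }

        Task.WaitAll(tasks);

        double max = double.MinValue, sum = 0;
        foreach (var t in tasks) { sum += t.Result; max = Math.Max(max, t.Result); }
        Console.WriteLine("max = {0:F2}, average = {1:F2}", max, sum / tasks.Length);
    }

    static double ComplexCalculation(double value)
    {
        return Math.Sqrt(value) * Math.Log(value + 1); // placeholder for the real math
    }
}
```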

The job of the TaskScheduler is to load-balance the work by queuing lightweight tasks to more heavyweight threads, and managing the ThreadPool to best balance the workload vs the number of cores on the machine. You can even override the default TaskScheduler to schedule tasks in whatever manner you want.

The ThreadPool is a FIFO queue of work items that need to be processed. In .NET 4.0, the ThreadPool's performance was improved by moving the work queue to a lock-free, thread-safe data structure in the style of ConcurrentQueue.
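Queuing work items directly looks like this (a small sketch; the Tasks above do this for you under the hood):

```csharp
using System;
using System.Threading;

class ThreadPoolSketch
{
    static void Main()
    {
        using (var done = new CountdownEvent(5))
        {
            for (int i = 0; i < 5; i++)
            {
                int id = i; // capture a copy for the closure
                // Work items are queued FIFO and picked up by pool threads.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    Console.WriteLine("work item {0} on pool thread {1}",
                        id, Thread.CurrentThread.ManagedThreadId);
                    done.Signal();
                });
            }
            done.Wait(); // block until all five work items have run
        }
    }
}
```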

Measuring task throughput and efficiency

You can use PerformanceCounter to measure both CPU and memory usage. This will give you a good idea of whether the cores and memory are being used efficiently. The task throughput is simply measured by looking at the rate at which tasks are being processed and supplying results.
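For example (Windows-only; these counter category and counter names are the standard built-in ones):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class MonitorSketch
{
    static void Main()
    {
        // Total CPU usage and available memory, sampled once per second.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var mem = new PerformanceCounter("Memory", "Available MBytes");

        cpu.NextValue(); // the first sample always returns 0 - prime the counter
        for (int i = 0; i < 5; i++)
        {
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0:F1}%  Free RAM: {1} MB",
                cpu.NextValue(), mem.NextValue());
        }
    }
}
```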

Note that I've deliberately left the fine-grained implementation details to you, as I assume you want to work those out for your professor :-)

HTTP 410
  • Hi, thanks for your help. It's a proof of concept and I can choose whatever I want, so C# .NET 4 WinForms is what I'm working with now. – michael_j Dec 25 '15 at 23:19
  • I think I didn't explain it right: the number of reads per second is not important; the important thing is that every sensor is read at least once per second. So once per second is OK - maybe better than killing the system with tasks. – michael_j Dec 25 '15 at 23:30
  • Reading all of the sensors and updating the UI can be done on a single thread - using the BackgroundWorker in this example. It's only the "complex" processing that needs to be done in a parallel (multi-threaded) way. – HTTP 410 Dec 26 '15 at 02:14
  • @michael_j You should also take a look at the Task Parallel Library. You could partition the array and process the partitions in parallel. Again, the usage depends on your requirements, but it's a good place to look and start with. – vendettamit Dec 28 '15 at 18:43