Have a job which does the following 2 tasks:

- Read up to 300 unique `customerId`s from the database into a `List`.
- Then call a stored procedure for each `customerId`, which executes queries inside the SP, creates an XML (up to 10 KB), and stores the XML into a database table.

So, in this case there should be 300 records in the table.
On average, the SP takes around 3 seconds to process each customer through to its XML creation, so processing all 300 customers takes about 15 minutes in total. The problem is that in the future it may take even longer.
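For reference, the single-threaded version is roughly shaped like this (the SP name, parameter name, and helper names are placeholders, not my real ones):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Calls the XML-creating SP for one customer.
// "dbo.usp_CreateCustomerXml" is a placeholder for the real procedure name.
static void ProcessCustomer(int customerId, string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.usp_CreateCustomerXml", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@CustomerId", customerId);
        conn.Open();
        cmd.ExecuteNonQuery();   // ~3 seconds per customer
    }
}

static void ProcessSequentially(List<int> customerIds, string connectionString)
{
    foreach (int customerId in customerIds)   // up to 300 ids
        ProcessCustomer(customerId, connectionString);
}
```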
I don't want to go with a bulk-insert option that moves the XML-creation logic into the application. With a bulk insert, I wouldn't be able to tell which `customerId`'s data was the problem if XML creation failed, so I want to call the SP once per customer.
To process all customers in parallel, I created 4 dedicated threads, each processing its own collection of unique `customerId`s; together the 4 threads processed all 300 customers in 5 minutes, which is what I expected.
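The dedicated-thread version I tried looks roughly like this, reusing the `ProcessCustomer` helper from the sketch above (the round-robin chunking is a simplification of my actual partitioning):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;

static void ProcessWithDedicatedThreads(List<int> customerIds, string connectionString)
{
    const int threadCount = 4;

    // deal the ids out round-robin into 4 chunks, one chunk per thread
    List<List<int>> chunks = customerIds
        .Select((id, index) => new { id, index })
        .GroupBy(x => x.index % threadCount)
        .Select(g => g.Select(x => x.id).ToList())
        .ToList();

    var threads = new List<Thread>();
    foreach (List<int> chunk in chunks)
    {
        var thread = new Thread(() =>
        {
            foreach (int customerId in chunk)
                ProcessCustomer(customerId, connectionString);  // XML-creating SP call from above
        });
        thread.Start();
        threads.Add(thread);
    }

    foreach (Thread thread in threads)
        thread.Join();  // all 300 customers are done once every thread finishes
}
```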
However, I want to use the `ThreadPool` rather than creating my own threads.
I want to have two kinds of workers here: one kind processes each customer and creates its XML; the other works on customers whose XML has already been created, calling an SP that updates a flag on the customer table to indicate that the customer's XML is available.
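In other words, something like a producer/consumer pipeline: the XML-creating workers push finished ids into a queue, and a separate worker drains it and calls the flag-update SP. This is only a sketch of what I have in mind; `UpdateCustomerFlag` and its procedure name are assumptions, and `ProcessCustomer` is the helper from the first sketch:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

// placeholder wrapper around the flag-update SP; the procedure name is assumed
static void UpdateCustomerFlag(int customerId, string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.usp_SetCustomerXmlFlag", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@CustomerId", customerId);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}

static void ProcessWithPipeline(List<int> customerIds, string connectionString)
{
    // ids whose XML is created and still need their flag set
    var completedIds = new BlockingCollection<int>();

    // stage 1: create XML for up to 4 customers at a time
    var xmlStage = Task.Run(() =>
    {
        Parallel.ForEach(customerIds,
            new ParallelOptions { MaxDegreeOfParallelism = 4 },
            customerId =>
            {
                ProcessCustomer(customerId, connectionString);  // SP call from the first sketch
                completedIds.Add(customerId);
            });
        completedIds.CompleteAdding();  // signal: no more ids coming
    });

    // stage 2: flag each customer whose XML is ready
    var flagStage = Task.Run(() =>
    {
        foreach (int customerId in completedIds.GetConsumingEnumerable())
            UpdateCustomerFlag(customerId, connectionString);
    });

    Task.WaitAll(xmlStage, flagStage);
}
```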
So what is the best way to process the 300 customers in parallel and quickly, while also updating the customer table in parallel or on a separate thread?
Are dedicated threads still a good option here, or `Parallel.ForEach`, or `await Task.WhenAll`?
I know `Parallel.ForEach` will block the calling thread, which I want to keep free for updating the customer table.
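For comparison, the `await Task.WhenAll` shape I'm considering would throttle the SP calls with a `SemaphoreSlim` instead of blocking the caller (again just a sketch with placeholder names):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

static async Task ProcessWithWhenAllAsync(List<int> customerIds, string connectionString)
{
    using (var throttle = new SemaphoreSlim(4))   // at most 4 SP calls in flight
    {
        var tasks = customerIds.Select(async customerId =>
        {
            await throttle.WaitAsync();
            try
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.usp_CreateCustomerXml", conn))   // placeholder SP name
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@CustomerId", customerId);
                    await conn.OpenAsync();
                    await cmd.ExecuteNonQueryAsync();
                }
            }
            finally
            {
                throttle.Release();
            }
        }).ToList();

        await Task.WhenAll(tasks);   // the caller's thread is free while the SPs run
    }
}
```

I picked 4 for the throttle only because it matches my dedicated-thread test; whether that is the right degree of parallelism is part of what I'm asking.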