
I am using Parallel.ForEach to do work on multiple threads, using a new EF5 DbContext for each iteration, all wrapped within a TransactionScope, as follows:

using (var transaction = new TransactionScope())
{
    int[] supplierIds;

    using (var appContext = new AppContext())
    {
        supplierIds = appContext.Suppliers.Select(s => s.Id).ToArray();
    }

    Parallel.ForEach(
        supplierIds,
        supplierId =>
    {
        using (var appContext = new AppContext())
        {
            // Do some work...

            appContext.SaveChanges();                
        }
    });

    transaction.Complete();
}

After running for a few minutes it throws an EntityException, "The underlying provider failed on Open", with the following inner detail:

"The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions."

Does anyone know what's causing this or how it can be prevented? Thanks.

Matthew Sharpe

3 Answers

6

You could also try capping the number of concurrent tasks in the Parallel.ForEach() call by passing new ParallelOptions { MaxDegreeOfParallelism = 8 } (replace 8 with whatever you want to limit it to).

See MSDN for more details
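Applied to the loop from the question, that might look like the following sketch (the cap of 8 is arbitrary; tune it to your hardware and connection-pool limits):

```csharp
// Limit the number of concurrent iterations (and therefore the number of
// simultaneously open connections) via ParallelOptions.
Parallel.ForEach(
    supplierIds,
    new ParallelOptions { MaxDegreeOfParallelism = 8 }, // arbitrary cap
    supplierId =>
{
    using (var appContext = new AppContext())
    {
        // Do some work...
        appContext.SaveChanges();
    }
});
```

With an unbounded degree of parallelism, each concurrently running iteration opens its own connection, each holding locks until the outer transaction completes; the cap bounds that directly.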

svick
bmdixon
  • That doesn't affect anything in the default TPL task scheduler. – ta.speot.is Feb 14 '13 at 12:01
  • 1
    @ta.speot.is Not sure what you mean. `Parallel.ForEach()` doesn't create one `Task` for each item in the collection, it reuses `Task`s. And if you set `MaxDegreeOfParallelism`, it will limit the number of `Task`s used and thus the actual degree of parallelism. – svick Feb 14 '13 at 15:20
  • There seems to be a problem with unexpected contentions in parallel operations in Entity Framework. I have also experienced this problem, it has also been discussed here: http://stackoverflow.com/q/13182603/141172 – xpereta Apr 08 '13 at 15:32
1

You should also find out why your app is taking so many locks. You have wrapped a TransactionScope around multiple DB connections, which likely promotes it to a distributed transaction, and that might have to do with it. It certainly causes locks to never be released until the very end. Change that.

You can only turn the locking limits up so far; that does not scale to arbitrary numbers of supplier IDs. You need to find the cause of the locks, not mitigate the symptoms.
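One way to change it, sketched against the code from the question, is to give each iteration its own short-lived TransactionScope, so locks are released as each supplier's work commits rather than accumulating in one ambient transaction that spans every connection:

```csharp
Parallel.ForEach(supplierIds, supplierId =>
{
    // One scope per iteration: commits (and releases its locks) as soon
    // as this supplier's work is done, and never spans threads.
    using (var transaction = new TransactionScope())
    using (var appContext = new AppContext())
    {
        // Do some work...
        appContext.SaveChanges();
        transaction.Complete();
    }
});
```

Note the trade-off: you lose all-or-nothing semantics across suppliers, since each iteration now commits independently.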

usr
  • Indeed, I do not understand why this is requiring a load of locks. – Matthew Sharpe Feb 14 '13 at 10:59
  • 1
    My suspicion is that you have 100s of connections going on at the same time (you understand why?) each having a "normal" amount of locks. The fix is to move the transaction scope into the parallel for body. Try removing it for testing purposes. – usr Feb 14 '13 at 11:05
0

You are running into the maximum number of locks allowed by SQL Server, which by default is set automatically and governed by available memory.

You can:

  1. Set it manually - I forget exactly how but Google is your friend.
  2. Add more memory to your SQL Server.
  3. Commit your transactions more frequently.
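For option 1, SQL Server exposes this through the (deprecated) `locks` server option. As a rough sketch, a DBA could run something like the following to replace the default dynamic lock management with a fixed ceiling (setting it back to 0 restores dynamic behaviour; the value 20000 here is purely illustrative):

```sql
-- 'locks' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Set a fixed maximum number of locks (0 = dynamic, the default).
EXEC sp_configure 'locks', 20000;
RECONFIGURE;
```

Microsoft recommends leaving this at the dynamic default, so treat raising it as a stopgap while you reduce the lock demand itself.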
Matt Randle
  • Forgive my ignorance, but I don't understand why the procedure requires many locks? Thank you for your help. – Matthew Sharpe Feb 14 '13 at 10:59
  • As usr mentions, creating a context within a transaction scope for each insert will result in a large number of connections, so it's probably that. Does it crash if you restart your SQL instance before running it? – Matt Randle Feb 14 '13 at 11:03
  • Thinking about it some more, it's almost certainly that. With a single connection and lots of inserts it would escalate the locks. With multiple connections, it can't do that. – Matt Randle Feb 14 '13 at 11:07
  • Update: Unfortunately the error is occurring even without a TransactionScope in place. This somewhat undermines my understanding of the issue. – Matthew Sharpe Feb 14 '13 at 11:43
  • Sorry, but I'm not 100% clear what it is then. Maybe something to do with it not disposing the context quickly enough as it loops through? Do you get the same with a standard for loop? – Matt Randle Feb 14 '13 at 12:00
  • FYI the issue was resolved using bmdixon's solution of limiting the degree of parallelism. As ta.speot.is suggested, I suspect it was spawning loads of threads, hence loads of connections. – Matthew Sharpe Feb 14 '13 at 13:22