I have a C#/.NET application that, in multiple threads (i.e. concurrently, 6 threads), tries to perform the following update:
UPDATE cWrh SET
[Options] = cStg.[Options] -- byte array
FROM
[wrh].[Cars] cWrh
INNER JOIN [stg].[Cars] cStg ON
cWrh.[Id] = cStg.[Id]
AND cWrh.[Manufacturer_Id] = @ManufacturerId -- each thread gets a different id here
AND cWrh.[Options] <> cStg.[Options]
This piece of code is run inside a transaction. Both tables have 3+ million records. The clustered key is on the [Id] field, and there are some non-clustered indexes as well.
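To make the concurrency pattern concrete, here is a minimal sketch of how each worker thread runs the statement. The method name, connection handling, and the `Parallel.ForEach` driver are illustrative assumptions on my part, not the exact production code:

```csharp
using Microsoft.Data.SqlClient;
using System.Threading.Tasks;

static void RunReset(string connectionString, int manufacturerId)
{
    const string sql = @"
UPDATE cWrh SET
    [Options] = cStg.[Options]
FROM
    [wrh].[Cars] cWrh
    INNER JOIN [stg].[Cars] cStg ON
        cWrh.[Id] = cStg.[Id]
        AND cWrh.[Manufacturer_Id] = @ManufacturerId
        AND cWrh.[Options] <> cStg.[Options]";

    using var connection = new SqlConnection(connectionString);
    connection.Open();

    // Each thread gets its own connection and transaction.
    using var transaction = connection.BeginTransaction();
    using var command = new SqlCommand(sql, connection, transaction);
    command.Parameters.AddWithValue("@ManufacturerId", manufacturerId);
    command.CommandTimeout = 0; // large tables, allow a long-running statement
    command.ExecuteNonQuery();
    transaction.Commit();
}

// Six workers, each with a distinct manufacturer id:
// Parallel.ForEach(manufacturerIds,
//     new ParallelOptions { MaxDegreeOfParallelism = 6 },
//     id => RunReset(connectionString, id));
```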
The funny thing is that I checked manually, and in my particular example with 3+ million records, cWrh.[Options] and cStg.[Options] are always equal, so in the end the update would not be necessary at all.
I'm attaching the deadlock graph; the redacted values say DB.wrh.Cars: Deadlock graph
Yes, in this particular example concurrency doesn't really add any value, but this is only the "Reset" query; the "Recalculate" query, which does some [Options] calculation in C#, bulk inserts the results back into SQL Server and updates afterwards, speeds things up significantly when run concurrently.
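For context, the "Recalculate" path looks roughly like the sketch below. The method and column names and the staging-table shape are simplified assumptions; the actual [Options] calculation is omitted:

```csharp
using System.Collections.Generic;
using System.Data;
using Microsoft.Data.SqlClient;

static void RecalculateAndStage(string connectionString,
                                IEnumerable<(long Id, byte[] Options)> recalculated)
{
    // 1. Collect the recalculated byte arrays into a DataTable
    //    matching the staging table's columns.
    var table = new DataTable();
    table.Columns.Add("Id", typeof(long));
    table.Columns.Add("Options", typeof(byte[]));
    foreach (var (id, options) in recalculated)
        table.Rows.Add(id, options);

    using var connection = new SqlConnection(connectionString);
    connection.Open();

    // 2. Bulk insert into [stg].[Cars].
    using (var bulk = new SqlBulkCopy(connection)
    {
        DestinationTableName = "[stg].[Cars]"
    })
    {
        bulk.ColumnMappings.Add("Id", "Id");
        bulk.ColumnMappings.Add("Options", "Options");
        bulk.WriteToServer(table);
    }

    // 3. Then run the same UPDATE ... FROM join as in the "Reset" case,
    //    filtered by this thread's @ManufacturerId.
}
```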
I would just like to stick to this concurrent approach regardless of the task (simple reset vs. CPU-intensive work) if possible.
Any suggestion on how to work around the deadlock is appreciated.