My approach for this kind of locking would be to use a distributed lock rather than locking the target table itself; among other things, it gives you room to put more logic inside the protected section. You can't use the standard C# `lock` statement because you've got multiple instances of the same application that need to share the same lock, so you need something centralised.
I've used SQL Server's `sp_getapplock` for this before - you just need a single SQL database that every instance of the application can reach. You can either call the stored procedure yourself, or there are a few NuGet packages that wrap it up to make it a bit easier to integrate with. You've also got the option of doing the same kind of thing with distributed locks in Redis (and probably most other cache solutions) if you've already got one of those in your architecture.
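As a rough idea of what calling it yourself looks like, here's a minimal sketch of a lock helper built on `sp_getapplock`. It assumes you're using `Microsoft.Data.SqlClient` (the older `System.Data.SqlClient` works the same way), and the connection string and lock name are placeholders you'd supply:

```csharp
using System;
using Microsoft.Data.SqlClient; // assumption: your project references this package

// Minimal sketch of a cross-instance lock built on SQL Server's sp_getapplock.
// Any database that all application instances can reach will do.
public sealed class SqlAppLock : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly string _resource;

    private SqlAppLock(SqlConnection connection, string resource)
    {
        _connection = connection;
        _resource = resource;
    }

    public static SqlAppLock Acquire(string connectionString, string resource, int timeoutMs = 30000)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand("sp_getapplock", connection)
        {
            CommandType = System.Data.CommandType.StoredProcedure
        };
        command.Parameters.AddWithValue("@Resource", resource);
        command.Parameters.AddWithValue("@LockMode", "Exclusive");
        command.Parameters.AddWithValue("@LockOwner", "Session"); // lock is held for the life of this connection
        command.Parameters.AddWithValue("@LockTimeout", timeoutMs);

        var returnValue = command.Parameters.Add("@ReturnValue", System.Data.SqlDbType.Int);
        returnValue.Direction = System.Data.ParameterDirection.ReturnValue;

        command.ExecuteNonQuery();

        // 0 = granted immediately, 1 = granted after waiting; negative values mean failure.
        if ((int)returnValue.Value < 0)
        {
            connection.Dispose();
            throw new TimeoutException($"Could not acquire app lock '{resource}'.");
        }

        return new SqlAppLock(connection, resource);
    }

    public void Dispose()
    {
        // Releasing explicitly is tidy, although closing the connection
        // releases a session-owned lock anyway.
        using (var release = new SqlCommand("sp_releaseapplock", _connection)
        {
            CommandType = System.Data.CommandType.StoredProcedure
        })
        {
            release.Parameters.AddWithValue("@Resource", _resource);
            release.Parameters.AddWithValue("@LockOwner", "Session");
            release.ExecuteNonQuery();
        }

        _connection.Dispose();
    }
}
```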
Essentially you just take the lock around whatever action you want to protect from running concurrently. In your case, that would prevent multiple updates to that table from running at the same time.
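So the calling code ends up looking something like the sketch below - the lock name and `UpdateTargetTable` are just hypothetical placeholders for whatever your update logic is:

```csharp
// Only one instance at a time can enter this block; the others wait
// (up to the timeout) on the same lock name.
using (SqlAppLock.Acquire(connectionString, "target-table-update"))
{
    UpdateTargetTable(); // placeholder for the work you want to serialise
}
```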
That being said, you could also look at how these events are raised and handled, and see whether you can get the resiliency you're looking for at that level instead. Off the top of my head, something like RabbitMQ (or another message queue) lets a lot of the failure scenarios be handled at the message-delivery level, and it might also prevent these events from being processed multiple times if they only need to be processed once. I don't know whether that's worth pursuing in your case, but it might be helpful to check.