6

Problem background

An event can have multiple subscribers (i.e. multiple handlers may be called when an event is raised). Since any one of the handlers could throw an error, and that would prevent the rest of them from being called, I want to ignore any errors thrown from each individual handler. In other words, I do not want an error in one handler to disrupt the execution of other handlers in the invocation list, since neither those other handlers nor the event publisher has any control over what any particular event handler's code does.

This can be accomplished easily with code like this:

public event EventHandler MyEvent;
public void RaiseEventSafely( object sender, EventArgs e )
{
    if (MyEvent == null) return; // no subscribers, nothing to do
    foreach (EventHandler handler in MyEvent.GetInvocationList())
        try { handler( sender, e ); } catch {}
}


A generic, thread-safe, error-free solution

Of course, I don't want to write all this generic code over and over every time I call an event, so I wanted to encapsulate it in a generic class. Furthermore, I'd actually need additional code to ensure thread-safety so that MyEvent's invocation list does not change while the list of methods is being executed.

I decided to implement this as a generic class where the generic type is constrained by the "where" clause to be a Delegate. I really wanted the constraint to be "delegate" or "event", but those are not valid, so using Delegate as a base class constraint is the best I can do. I then create a lock object and lock it in a public event's add and remove methods, which alter a private delegate variable called "event_handlers".

public class SafeEventHandler<EventType> where EventType:Delegate
{
    private object collection_lock = new object();
    private EventType event_handlers;

    public SafeEventHandler(){}

    public event EventType Handlers
    {
        add {lock(collection_lock){event_handlers += value;}}
        remove {lock(collection_lock){event_handlers -= value;}}
    }

    public void RaiseEventSafely( EventType event_delegate, object[] args )
    {
        lock (collection_lock)
            foreach (Delegate handler in event_delegate.GetInvocationList())
                try {handler.DynamicInvoke( args );}catch{}
    }
}


Compiler issue with += operator, but two easy workarounds

One problem I ran into is that the line "event_handlers += value;" results in the compiler error "Operator '+=' cannot be applied to types 'EventType' and 'EventType'". Even though EventType is constrained to be a Delegate type, the compiler will not allow the += operator on it.

As a workaround, I first added the event keyword to "event_handlers", so the definition reads "private event EventType event_handlers;", and that compiles fine. But since the "event" keyword can generate code to handle this, I figured I should be able to write it myself, so I eventually restructured the class to sidestep the compiler's refusal to accept that '+=' SHOULD apply to a generic type constrained to be a Delegate. The private variable "event_handlers" is now typed as Delegate instead of the generic EventType, and the add/remove accessors follow the pattern event_handlers = Delegate.Combine( event_handlers, value );


The final code looks like this:

public class SafeEventHandler<EventType> where EventType:Delegate
{
    private object collection_lock = new object();
    private Delegate event_handlers;

    public SafeEventHandler(){}

    public event EventType Handlers
    {
        add {lock(collection_lock){event_handlers = Delegate.Combine( event_handlers, value );}}
        remove {lock(collection_lock){event_handlers = Delegate.Remove( event_handlers, value );}}
    }

    public void RaiseEventSafely( EventType event_delegate, object[] args )
    {
        lock (collection_lock)
            foreach (Delegate handler in event_delegate.GetInvocationList())
                try {handler.DynamicInvoke( args );}catch{}
    }
}


The Question

My question is... does this appear to do the job well? Is there a better way or is this basically the way it must be done? I think I've exhausted all the options. Using a lock in the add/remove methods of a public event (backed by a private delegate) and also using the same lock while executing the invocation list is the only way I can see to make the invocation list thread-safe, while also ensuring errors thrown by handlers don't interfere with the invocation of other handlers.

Triynko
  • 9
    All of your examples have empty catch clauses! That means, IMO, all of your solutions are equally broken. – Kirk Woll Jun 28 '11 at 21:26
  • Fine code; one optimization wouldn't be bad, though: use ReaderWriterLockSlim. One problem I find with your code is that the event can't be raised multiple times at the same time; a second raise has to wait for the first one to finish, which can have bad consequences – Polity Jun 28 '11 at 21:30
  • 1
    @Kirk Woll, exactly; I would recommend throwing an AggregateException at the end (if required) – Polity Jun 28 '11 at 21:31
  • @Kirk Woll & @Polity: Wrong. Since the publisher doesn't care whether the event is handled at all, it doesn't care what the event handler does, and so it doesn't care if it throws an exception, bakes a cake, or does nothing at all. The empty catch clause is actually the correct behavior, and the publisher should not propagate an error or an aggregate error to the application. If the error is to be handled, AND IT SHOULD BE, the handling MUST occur in the event handler itself. Feel free to debate it; I just don't want you to think I haven't thought this through. – Triynko Jun 28 '11 at 21:55
  • 1
    @Triynko - If the exceptions MUST (in all caps) be handled by the event handler, then why are you swallowing the exceptions? They must not happen at all. If that is the design requirement (and it is a reasonable one), then the application should die a quick death when that requirement is not satisfied, rather than hiding the errors. – Jeffrey L Whitledge Jun 28 '11 at 22:04
  • Since the publisher doesn't know what sorts of things the event handlers are doing, it has no idea whether swallowing a given exception is a sane and reasonable way of proceeding. It is better to force the handlers to decide for themselves how their exceptions are to be dealt with. And if any fail to do so, then the application must die. – Jeffrey L Whitledge Jun 28 '11 at 22:09
  • @Jeffrey: The emphasis should have been on BY THE EVENT HANDLER. Anyway... I already explained why I'm swallowing the exceptions: because they are not the publisher's responsibility and they are not fatal errors in the context of the application. Let me put it in perspective for you. The same way that the operating system doesn't care if one program crashes, a single crashing application should not result in a blue screen. Other apps, and the OS, just like other handlers and the publisher, don't care what another handler does. It's the handler's responsibility to handle its own errors. – Triynko Jun 28 '11 at 22:32
  • @Jeffrey: "it has no idea whether swallowing a given exception is a sane and reasonable way of proceeding". Exactly, but then it also has no idea whether CRASHING THE APP is sane and reasonable way of proceeding. In fact, it is neither sane nor reasonable to propagate the error, because to the executing context, the error is inconsequential and its nature unknown and therefore must be ignored BY THE PRODUCE. It should be handled by the event handler however. Having the event publisher do anything but ignore it would be a dogmatic mistake. – Triynko Jun 28 '11 at 22:35
  • 3
    @Triynko, that's the exception that proves the rule. The OS creates a cordoned-off state space called a "Process" that can do anything it wants without causing the state of the "Kernel" or unrelated processes to enter an illogical, undefined state. Inside of a process, if an event handler throws an exception, it is possible, even likely that the process state is now undefined. To continue operating after (without any indication since you swallow silently) that is to throw away any hope of consistency or correctness in the face of error. – David Gladfelter Jun 28 '11 at 22:37
  • 4
    David is correct. **Event handlers are not processes.** You cannot ignore a fault in an event handler and assume that it isn't going to affect everything else in the process. If you actually are in the situation where you cannot control what code is doing in your event handlers then the right thing to do is to isolate their behaviour to either their own appdomains, or even better, their own processes. The "MEF" and "MAF" tools can help you here; they are designed to handle situations where third-party add-ins need to be isolated from mainline application code. – Eric Lippert Jun 28 '11 at 22:42
  • 4
    You correctly assert several times that the event handler is responsible for handling the exception. There is therefore no point whatsoever in catching an exception that should have been handled. *Never* write code like this, it allows event handlers to get sloppy and that's disastrous. – Hans Passant Jun 28 '11 at 22:42
  • @David - Wrong. I can run an entire OS in a single process through virtualization. When an operating system's app crashes, the entire OS does not crash, because the app is a logically cordoned-off execution context. Similarly, an event publisher doesn't care whether an event is handled at all, let alone how. By ignoring the error, I am creating a logically cordoned-off execution context. Do you not realize how incredibly incorrect it would be to allow such an error through? It would certainly have to crash the application, because the publisher could not possibly have a legit recovery. – Triynko Jun 28 '11 at 22:43
  • @Eric and Hans: An error in an event handler is not going to affect everything else in the process. If the operating system or the process itself enters an inconsistent state or some component crashes and becomes unavailable, it would be because such a service or system was written with the dogma you're clinging to in mind. Context is everything, and as far as the event publisher is concerned, event handlers are inconsequential. The events are simply notifications. – Triynko Jun 28 '11 at 22:50
  • ...continued... If a part of the app fails to respond to a notification, the consequences of that will become apparent in other ways soon enough. For example, the app will fail to function properly as a result of an unhandled error, but that is the desired behavior. Crashing IS NOT A SOLUTION, even if it's the lesser of two evils, and it's not even so. The solution is not to crash the application, it is to fix the event handler to handle its own errors. – Triynko Jun 28 '11 at 22:50
  • That would be true if there were no shared resources between elements of the composite application that you're talking about. But there _are_. If an event handler leaks an unmanaged write-locked handle to a file resource as a result of throwing that exception, then any code that executes subsequently expecting that file to be available for writing will now fail, and there will be no indication of what caused those write operations to fail, and the wrong component could be implicated. Try debugging that error from a customer-submitted event log! Better error information is always better. – David Gladfelter Jun 28 '11 at 22:54
  • @Hans: Yes, I assert the handler is responsible for handling its own errors. The consequences of it not doing so do not necessarily have to be "application crash". If the publisher throws an error, the error comes from an unknown context (an optional, one-of-many, arbitrary receiver of a notification), therefore it is an unhandleable error. Crashing the application is drastic. You assert that allowing event handlers to get sloppy is disastrous, but that's pure speculation. I'm sure you agree that the event handler should handle its own errors, not pass them on to the notification system. – Triynko Jun 28 '11 at 22:57
  • 3
    To be clear, there can be reasons to not fail in the face of unexpected exceptions. I work on a project where customers lose a lot of time and money if the program crashes, the program thoroughly logs its state as it is running, and there are hardware interlocks to prevent anything 'bad' from happening in the real world. The customer would prefer to review an error offline and decide whether to invalidate results rather than fail fast and lose a lot of time and money. On the other hand, you're not even logging these failures! – David Gladfelter Jun 28 '11 at 22:59
  • @David. Reporting or logging the error could actually be very helpful; but what I'm really getting at is that having publisher try to "handle" or "rethrow" the error, would be incorrect. Crashing the app is not a solution. If the app is coded wrong and an event handler doesn't handle its own errors, it will function incorrectly, and that will become apparent to the user. The handler code should be fixed. It's funny that you have to mention "unmanaged" resources; that's exactly why we should write purely managed apps, or we must handle the error in the appropriate context. – Triynko Jun 28 '11 at 23:04
  • So, purely managed is best, but if unmanaged must be used, the error should be handled. If it's not, the file will be inaccessible to the application. It may also be inaccessible to other applications, but that's true in general: "The file is in use by another application.". And you see, if said other application logged all its errors, debugging would be EASY. But do you think that MS Word, for example, logs all its errors? NO WAY, so it's their own fault if they have a hard time debugging. I think I will definitely just HANDLE MY ERRORS in event handlers, and my apps will not crash. – Triynko Jun 28 '11 at 23:09
  • "Having the event publisher do anything but ignore it would be a dogmatic mistake." This is a very confusing statement. On the one hand, it is correct. But on the other hand it is in service to an argument for exactly the opposite. You are not ignoring the errors. You are handling them. Swallowing errors is not ignoring them. – Jeffrey L Whitledge Jun 29 '11 at 01:08
  • "For example, the app will fail to function properly as a result of an unhandled error, but that is the desired behavior." Maybe it will. Maybe it won't. If you are lucky enough to find out about the error before it does real damage, you are still going to have extra trouble figuring out what the error was, since you are taking the very helpful diagnostic information in the Exception object, and throwing it away. Better to log the exception and crash the app. Then you know what is happening and where. – Jeffrey L Whitledge Jun 29 '11 at 01:11
  • Let me see if I can put this into perspective for you... If your event handlers are so very unimportant that they can fail in unknown ways and those failures do not need to be logged or acknowledged in any way, or even known about, then why are you calling those pointless event handlers in the first place. That code should be deleted as useless bloat. In any code that I write, I need to know as soon as possible if it's failing. – Jeffrey L Whitledge Jun 29 '11 at 01:13
  • 3
    "You assert that allowing event handlers to get sloppy is disasterous, but that's pure speculation." No, that is experience. I spent many years maintaining applications that swallowed errors all over the place without reporting them, and I will be damned if I ever have to deal with that mess again. – Jeffrey L Whitledge Jun 29 '11 at 01:21
  • @Jeffrey: I've been building software since I was five. I know ignoring errors is bad. I'm long over it. What I'm saying is that from the perspective of code that tries to notify a dozen listeners that an event has occurred, if one listener cries about it, the rest shouldn't be left in the dark and never notified about the event. The C# MulticastDelegate does this. I'm not ignoring the errors, I'm isolating them into a list, so it doesn't disrupt the "event call" as an atomic operation. In that sense, I'm not handling them... I'm basically making them irrelevant and non-disruptive. – Triynko Jun 29 '11 at 04:41
  • While an error in one handler can have side-effects on the other handlers, if something indeed goes horribly wrong, that's true in general, even for isolated processes (suppose an app allocates all the free memory on the machine, or whatever). The point is keeping the application in a stable state, so you can see what effect, if any, an error really has (i.e. how far it reaches). It also will prevent the situation where an error in one handler that actually is inconsequential, causes other components to fail because of an unhandled exception. – Triynko Jun 29 '11 at 04:45
  • Obviously, if an unhandled errors occurs, there is a problem with the application. Such an error needs to be made very obvious and corrected, but disrupting an event chain is not necessarily the best way to do it. In this situation, I think the best thing to do is to collect the errors and return them to the caller, because the SafeEvent cannot possibly judge the severity of the errors or whether they actually would cause incorrect program operation. I'm just avoiding unnecessary crashes and interference between components. – Triynko Jun 29 '11 at 04:48
  • @DavidGladfelter: If an exception while using an object would leave it in a potentially-invalid state, rather than abruptly crashing the application (which could leave external resources in an even worse state) wouldn't it be better to have the code which had been using that object expressly invalidate it? If the object turns out to be critically necessary, the application will fail pretty quickly, but if it turns out not to be needed (e.g. it's a control which had requested property-update notifications, but got disposed just as one was arriving) then keep running. – supercat Nov 13 '13 at 01:04
  • Some event handlers may NOT be OK to crash - e.g. app-state-changing logic that would leave the app in an invalid state - while others that e.g. run an unsafe process should wrap that logic themselves: Object.Event += (s,e) => ExceptionConsumer.Execute(()=>{ ... }); handling the process failure should be the respective event handler's responsibility. An extra event could be incorporated for such handlers when a custom management context is needed - e.g. Object.Event += (s,e) => Object.UNSAFEvent.Raise(s,e) - and Object.UNSAFEvent used the way you do; it is that event that should be regarded as unsafe, not the other way around – Demetris Leptos Nov 09 '21 at 19:27

7 Answers

18

Since any one of the handlers could throw an error, and that would prevent the rest of them from being called,

You say that like it is a bad thing. That is a very good thing. When an unhandled, unexpected exception is thrown that means that the entire process is now in an unknown, unpredictable, possibly dangerously unstable state.

Running more code at this point is likely to make things worse, not better. The safest thing to do when this happens is to detect the situation and cause a failfast that takes down the entire process without running any more code. You don't know what awful thing running more code is going to do at this point.

I want to ignore any errors thrown from each individual handler.

This is a super dangerous idea. Those exceptions are telling you that something awful is happening, and you're ignoring them.

In other words, I do not want an error in one handler to disrupt the execution of other handlers in the invocation list, since neither those other handlers nor the event publisher has any control over what any particular event handler's code does.

Who's in charge here? Someone is adding those event handlers to this event. That is the code that is responsible for ensuring that the event handlers do the right thing should there be an exceptional situation.

I then create a lock object and lock it in a public event's add and remove methods, which alter a private delegate variable called "event_handlers".

Sure, that's fine. I question the necessity of the feature -- I very rarely have a situation where multiple threads are adding event handlers to an event -- but I'll take your word for it that you are in this situation.

But in that scenario this code is very, very, very dangerous:

    lock (collection_lock)
        foreach (Delegate handler in event_delegate.GetInvocationList())
            try {handler.DynamicInvoke( args );}catch{}

Let's think about what goes wrong there.

Thread Alpha enters the collection lock.

Suppose there is another resource, foo, which is also controlled by a different lock. Thread Beta enters the foo lock in order to obtain some data that it needs.

Thread Beta then takes that data and attempts to enter the collection lock, because it wants to use the contents of foo in an event handler.

Thread Beta is now waiting on thread Alpha. Thread Alpha now calls a delegate, which decides that it wants to access foo. So it waits on thread Beta, and now we have a deadlock.

But can't we avoid this by ordering the locks? No, because the very premise of your scenario is that you don't know what the event handlers are doing! If you already know that the event handlers are well-behaved with respect to their lock ordering then you can presumably also know that they are well-behaved with respect to not throwing exceptions, and the whole problem vanishes.
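
To make that interleaving concrete, here is a small self-contained sketch (illustrative names only, not the OP's exact class) that forces the Alpha/Beta ordering with ManualResetEvents and deadlocks deterministically:

    using System;
    using System.Threading;

    // A publisher that, like the OP's RaiseEventSafely, holds its lock while handlers run.
    class LockedPublisher
    {
        private readonly object collection_lock = new object();
        private EventHandler handlers;

        public void Add(EventHandler h) { lock (collection_lock) handlers += h; }

        public void Raise()
        {
            lock (collection_lock)              // lock is held for the whole invocation
                handlers?.Invoke(this, EventArgs.Empty);
        }
    }

    class DeadlockDemo
    {
        static readonly object fooLock = new object();
        static readonly LockedPublisher pub = new LockedPublisher();
        static readonly ManualResetEvent alphaHasCollectionLock = new ManualResetEvent(false);
        static readonly ManualResetEvent betaHasFooLock = new ManualResetEvent(false);

        static void Main()
        {
            // This handler runs on thread Alpha while collection_lock is held.
            pub.Add((s, e) =>
            {
                alphaHasCollectionLock.Set();
                betaHasFooLock.WaitOne();       // force the unlucky interleaving
                lock (fooLock) { }              // Alpha now waits for Beta's lock
            });

            new Thread(() =>
            {
                lock (fooLock)                  // Beta holds the foo lock...
                {
                    betaHasFooLock.Set();
                    alphaHasCollectionLock.WaitOne();
                    pub.Add((s, e) => { });     // ...and now waits for Alpha's lock: deadlock
                }
            }).Start();

            pub.Raise();                        // thread Alpha
            Console.WriteLine("never reached");
        }
    }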

OK, so let's suppose that you do this instead:

    Delegate copy;
    lock (collection_lock)
        copy = event_delegate;
    foreach (Delegate handler in copy.GetInvocationList())
        try {handler.DynamicInvoke( args );}catch{}

Delegates are immutable and copied atomically by reference, so you now know that you're going to be invoking the contents of event_delegate but not holding the lock during the invocation. Does that help?

Not really. You've traded one problem for another one:

Thread Alpha takes the lock and makes a copy of the delegate list, and leaves the lock.

Thread Beta takes the lock, removes event handler X from the list, and destroys state necessary to prevent X from deadlocking.

Thread Alpha takes over again and starts up X from the copy. Because Beta just destroyed state necessary for the correct execution of X, X deadlocks. And once more, you are deadlocked.

Event handlers are required to not do that; they are required to be robust in the face of their suddenly becoming "stale". It sounds like you are in a scenario where you cannot trust your event handlers to be well-written. That's a horrid situation to be in; you then cannot trust any code to be reliable in the process. You seem to think that there is some level of isolation you can impose on an event handler by catching all its errors and muddling through, but there is not. Event handlers are just code, and they can affect arbitrary global state in the program like any other code.


In short, your solution is generic, but it is not threadsafe and it is not error-free. Rather, it exacerbates threading problems like deadlocks and it turns off safety systems.

You simply cannot abdicate responsibility for ensuring that event handlers are correct, so don't try. Write your event handlers so that they are correct -- so that they order locks correctly and never throw unhandled exceptions.

If they are not correct and end up throwing exceptions then take down the process immediately. Don't keep muddling through trying to run code that is now living in an unstable process.

Based on your comments on other answers it looks like you think that you should be able to take candy from strangers with no ill effects. You cannot, not without a whole lot more isolation. You can't just sign up random code willy-nilly to events in your process and hope for the best. If you have stuff that is unreliable because you're running third party code in your application, you need a managed add-in framework of some sort to provide isolation. Try looking up MEF or MAF.
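
For what it's worth, here is a bare-bones .NET Framework sketch of what AppDomain-level isolation can look like (the IAddIn interface and NoisyAddIn type are hypothetical; MAF in particular builds its isolation on AppDomains and processes):

    using System;

    public interface IAddIn { void HandleEvent(object sender, EventArgs e); }

    // A hypothetical third-party handler that misbehaves.
    public class NoisyAddIn : MarshalByRefObject, IAddIn
    {
        public void HandleEvent(object sender, EventArgs e)
        {
            throw new InvalidOperationException("add-in failure");
        }
    }

    class Host
    {
        static void Main()
        {
            AppDomain sandbox = AppDomain.CreateDomain("AddInSandbox");
            try
            {
                var addIn = (IAddIn)sandbox.CreateInstanceAndUnwrap(
                    typeof(NoisyAddIn).Assembly.FullName,
                    typeof(NoisyAddIn).FullName);

                // The exception crosses the boundary as a serialized copy, so the host
                // can report it, while the add-in's managed state stays in its own domain.
                try { addIn.HandleEvent(null, EventArgs.Empty); }
                catch (Exception ex) { Console.Error.WriteLine(ex.Message); }
            }
            finally
            {
                AppDomain.Unload(sandbox);   // throw away whatever state the add-in left behind
            }
        }
    }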

Eric Lippert
  • **Moderator Note** _Comments on this answer have been removed because they degenerated into noise. Please keep comments constructive, on topic and friendly_ – Tim Post Jun 29 '11 at 06:28
  • @config http://stackoverflow.com/questions/835182/choosing-between-mef-and-maf-system-addin . MAF seems to be more complex and supports loading addins into AppDomains. – CodesInChaos Jun 29 '11 at 10:53
  • Disrupting a chain of notifications is not good. If only half the handlers are called (and which half depends on the invocation order, which isn't even guaranteed), then that introduces an additional problem that can put the app in an unstable state. Throwing the error is worse than containing and forwarding it, because while in both cases it can/will be handled at a higher level, the case where you interrupt the invocation list simply introduces that additional, unnecessary problem of some unknown set of subscribers failing to receive notification of the event. – Triynko Jun 29 '11 at 18:23
  • I know I said ignoring the errors from each handlers is good, but that's not exactly what I meant. For the reason I just explained, they need to be contained (and in that sense temporarily ignored) so that the entire event chain executes. They should certainly not be ignored permanently. By forwarding the errors, you avoid an unnecessary instability, and still get to handle them at a higher level, even if it's just writing them to a log file. – Triynko Jun 29 '11 at 18:26
  • 2
    @Triynko: I agree with you that disrupting the notifications is not good, but the question is whether the alternative is worse. An analogy: suppose you have a business process whereby each member of the management chain of an employee must be notified in person before a promotion occurs. A reasonable process. Now suppose that halfway through that chain of notification meetings, the building catches on fire. Do you ignore the fire alarm and keep on going from office to office, telling the vice presidents that Bob is going to be promoted, or do you evacuate the building? – Eric Lippert Jun 29 '11 at 18:29
  • 3
    @Triynko: What you are proposing is making a note that hey, the building caught on fire while I was in Mary's office, keep on going from office to office, and then when all the meetings are done, write down in your notes somewhere that the building is on fire. The right thing to do is to either *put out the fire* or *abandon all the current work, save what information you can, and call in the fire department*. – Eric Lippert Jun 29 '11 at 18:34
  • A lot of your discussion on deadlock revolves around the case where you do not hold the lock throughout execution of the invocation list. Since I AM holding the lock, no one else can take it, and that actually reduces the possibility of deadlock to a very specific, easily avoidable case, which I've already explained. Also, by holding the lock there's a guarantee some other thread didn't change the list, incorrectly expecting removed methods to not execute. – Triynko Jun 29 '11 at 18:37
  • @Eric: An interesting analogy, but in software development the goal is to debug and correct the software. You cannot turn back time after a building has burned down like you can click restart debugger, so priorities differ. Also, this event dispatcher cannot just assume a particular error is worth crashing the application over. Since the error will be handled at a higher level in some way, it's better to not disrupt notification of event subscribers, IMO, because that introduces additional uncertainty, especially if the higher-level handler decides the error was not severe enough to crash. – Triynko Jun 29 '11 at 18:43
  • @Triynko: Holding a lock while executing arbitrary code never reduces the possibility of deadlocks; it is exactly what causes the deadlocks. I'm not scientific about it, but I do believe that is the single most common cause of deadlocks. – configurator Jun 30 '11 at 20:05
  • @Triynko: Say you're sending an event about closing the application. The first handler clears some temporary data within the document you're currently editing. The second handler asks the user whether or not to save the document. If the first handler threw an exception causing the document to be in an invalid state, do you still want the user to save the corrupted document, overwriting his precious older version, before being told that is corrupt and irrecoverable? – configurator Jun 30 '11 at 20:08
  • @configurator: A "close" event should be handled as a request to close the document. This should be a serial task, starting with the user confirming the closure of the document. The scenario you're suggesting is simply poor design, because you're taking two dependent tasks (cleanup and asking permission to do so) that should be serialized, and you're treating them as independent tasks to be executed in a parallel (i.e. logically atomic) manner. Even though the events aren't executed in parallel (neither are threads technically), they logically occur at the same time - so the example is bad. – Triynko Jun 30 '11 at 21:08
  • @configurator: "Holding a lock while executing arbitrary never reduces the possibility of deadlocks". You are generalizing, and you're incorrect. In this particular case, holding the lock prevents another thread from obtaining it while handlers are executing. That is a guarantee. Exactly because of that guarantee or constraint, the conditions under which deadlock can possibly occur are reduced to a very specific case, where the lock holder spawns or references and WAITS on a separate thread AND that thread attempts to take out the lock by referencing the event. A VERY specific condition. – Triynko Jun 30 '11 at 21:13
  • @configurator: So the condition under which deadlock can occur IS reduced to a specific condition, BECAUSE of the additional constraint/guarantee introduced BY holding the lock throughout invocation of the list. In turn, this very specific deadlock condition is well understood, and is in fact easily avoidable. In actuality, you'd have a private SafeEvent variable, referenced and called only from within a form's OnClose method, and it's impossible to cause deadlock unless you very poorly design the app's close handler to recurse into OnClose from another thread AND wait on that thread. – Triynko Jun 30 '11 at 21:24
  • @Triynko: I suspect you don't fully understand how deadlocks occur. I encourage you to read about lock hierarchies. http://en.wikipedia.org/wiki/Dining_philosophers_problem#Resource_hierarchy_solution – configurator Jul 01 '11 at 00:43
  • 2
    @configurator: What he's saying, I think, is that there isn't going to be a deadlock caused by running arbitrary code under a long-held lock because the long-held lock is never going to be contended. But if you already have a guarantee that the lock is never going to be contended then *why have a lock in the first place*? The whole scenario is messed up. – Eric Lippert Jul 01 '11 at 13:42
2

The lock inside RaiseEventSafely is both unnecessary and dangerous.

It is unnecessary because delegates are immutable. Once you read it, the invocation list you obtained will not change. It doesn't matter whether the changes happen while the event code runs or whether they have to wait until after.

It is dangerous because you're calling external code while holding a lock. This can easily lead to lock-order violations and thus deadlocks. Consider an event handler that spawns a new thread that tries to modify the event. Boom, deadlock.

Then you have an empty catch for exceptions. That's rarely a good idea, since it silently swallows the exception. At minimum you should log the exception.
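
Putting the first and last points together, a sketch of the usual pattern (names here are illustrative): snapshot the delegate under the lock, invoke the copy outside the lock, and at least log each failure:

    using System;

    class LoggingPublisher
    {
        private readonly object collection_lock = new object();
        private EventHandler my_event;

        public event EventHandler MyEvent
        {
            add    { lock (collection_lock) my_event += value; }
            remove { lock (collection_lock) my_event -= value; }
        }

        public void Raise()
        {
            EventHandler snapshot;
            lock (collection_lock)
                snapshot = my_event;       // the delegate object itself is immutable

            if (snapshot == null) return;
            foreach (EventHandler handler in snapshot.GetInvocationList())
            {
                try { handler(this, EventArgs.Empty); }
                catch (Exception ex) { Console.Error.WriteLine(ex); }  // log it, don't hide it
            }
        }
    }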

Your generic parameter doesn't start with a T. That's a bit confusing IMO.

where EventType:Delegate I don't think this compiles. Delegate is not a valid generic constraint. For some reason the C# specification forbids certain types as a generic constraint, and one of them is Delegate. (no idea why)

CodesInChaos
  • "It doesn't matter if the changes happen while event code runs." -- That is exactly where I disagree. I'm well aware that no enumerator will be interrupted (GetInvocationList returns an array after all), but the whole point of the lock is so that the invocation list is not altered while the list is being executed. This ensures that no assumptions about the set of notified objects change while those objects are being notified! – Triynko Jun 28 '11 at 22:04
  • Delegate is a base class. It compiles and it is valid. The C# specification allows the where clause to specify a base class (but it must be the first item in the constraint list). – Triynko Jun 28 '11 at 22:05
  • "Consider an eventhandler that spawns a new thread that tries to modify the event. Boom, deadlock." - Not true. If it spawns a new thread, the spawning thread will finish, the invocation list will complete, the lock will be released, and the spawned thread will run. – Triynko Jun 28 '11 at 22:06
  • ...continued... Deadlock would occur only if the spawning thread waited for the spawned thread to finish. But why spawn a new thread to alter the invocation list? It can be done directly in the handler (no external thread necessary), and since the invocation list of an event is not publicly accessible and is locked, the inconsistent state is not publicly visible. The only thing I might want to add is a recursion lock with a Monitor, that uses Pulse to allow the next atomic invocation of the list. – Triynko Jun 28 '11 at 22:25
  • The C# specification explicitly forbids `Delegate` as a generic constraint and it does not compile in my tests. – CodesInChaos Jun 28 '11 at 22:32
  • My deadlock example is obviously contrived in that form. But there are realistic situations where this can become a problem. For example, the event handler might invoke another function on the GUI thread, which is a common pattern. And that function happens to subscribe an event handler. – CodesInChaos Jun 28 '11 at 22:38
  • Having usercode inside a lock makes reasoning about the lock order much harder. And ensuring a lock order is hard enough for me without such additional restrictions. – CodesInChaos Jun 28 '11 at 22:41
  • One famous example for the problems caused by holding a lock while calling usercode is the loader lock. It's the reason almost everything is forbidden inside `DllMain`. – CodesInChaos Jun 28 '11 at 22:44
  • I see now that the compiler makes a special exception for Delegate base class. What a crock, lol. Anyway, I don't see a problem with reasoning about the lock order, my feeling here is that these rules are a way of catering to the least common denominator, even though they're basically incorrect. If you sidestep the dogma and do what's reasonable, it's amazing how these problems vanish. I have no problem with this stuff. – Triynko Jun 28 '11 at 23:25
1

Have you looked into the PRISM EventAggregator or MVVMLight Messenger classes? Both of these classes fulfill all your requirements. MVVMLight's Messenger class uses WeakReferences to prevent memory leaks.

Dave Swersky
1

Aside from it being a bad idea to swallow exceptions, I suggest you consider not locking while invoking the list of delegates.

You'll need to put a remark in your class's documentation that delegates can be called after having been removed from the event.

The reason I'd do this is because otherwise you risk performance consequences and possibly deadlocks. You're holding a lock while calling into someone else's code. Let's call your internal lock Lock 'A'. If one of the handlers attempts to acquire a private lock 'B', and on a separate thread someone tries to register a handler while holding lock 'B', then one thread holds lock 'A' while trying to acquire 'B' and a different thread holds lock 'B' while trying to acquire lock 'A'. Deadlock.

Third-party libraries like yours are often written with no thread safety to avoid these kinds of issues, and it is up to the clients to protect methods that access internal variables. I think it is reasonable for an event class to provide thread safety, but I think the risk of a 'late' callback is better than a poorly-defined lock hierarchy prone to deadlocking.

Last nit-pick, do you think SafeEventHandler really describes what this class does? It looks like an event registrar and dispatcher to me.

David Gladfelter
  • "I suggest you consider copying the list of event delegates to a temporary" Why would that be necessary? It is already a copy. – CodesInChaos Jun 28 '11 at 21:42
  • Thanks, I didn't look closely at that. Edited. – David Gladfelter Jun 28 '11 at 21:45
  • @CodeInChaos - The point is to allow the release of the lock during the invocations. In the "final code", the delegate appears as a method parameter, which would be a copy, but this seems to be an error (the use of the lock implies that the field was meant to be used since that is what the lock is for). – Jeffrey L Whitledge Jun 28 '11 at 21:55
  • 1
    To implement what appears to be intended, the parameter should be removed, and the field should be copied into a temporary while holding the lock. – Jeffrey L Whitledge Jun 28 '11 at 21:58
  • Code remarks do not need to be added, because the delegates cannot be called after having been removed from the event. The lock is held while invoking the entire delegate list, so delegates cannot be called after being removed from the list. And it's not risking deadlocks, because the only way that would happen is if the handlers were to spawn a separate thread AND wait on it AND the new thread were to access public members of the SafeEvent... all unlikely. As for the name, it was originally developed as a wrapper for an EventHandler delegate, but I did reduce the name to SafeEvent. – Triynko Jun 29 '11 at 04:12
1

It is a bad practice to swallow exceptions entirely. If you have a use case where you would like a publisher to recover gracefully from an error raised by a subscriber then this calls for the use of an event aggregator.

Moreover, I'm not sure I follow the code in SafeEventHandler.RaiseEventSafely. Why is there an event delegate as a parameter? It seems to have no relationship with the event_handlers field. As far as thread-safety, after the call to GetInvocationList, it does not matter if the original collection of delegates is modified because the array returned won't change.

If you must, I would suggest doing the following instead:

class MyClass
{
    event EventHandler myEvent;

    public event EventHandler MyEvent
    {
        add { this.myEvent += value.SwallowException(); }
        remove { this.myEvent -= value.SwallowException(); }
    }

    protected void OnMyEvent(EventArgs args)
    {
        var e = this.myEvent;
        if (e != null)
            e(this, args);
    }
}

public static class EventHandlerHelper
{
    public static EventHandler SwallowException(this EventHandler handler)
    {
        return (s, args) =>
        {
            try
            {
                handler(s, args);
            }
            catch { }
        };
    }
}
eulerfx
  • "It is a bad practice to swallow exceptions entirely." OMG. You guys need to understand that while that is true, they MUST be handled by the event subscriber. It is INCORRECT to propagate the error from the event publisher, because the publisher doesn't care that the event is handled at all, let alone whether an error occurs while the handler is running. THE HANDLER MUST HANDLE THE ERROR, it's not the responsibility of the event publisher and IMO it would be an error to try to handle or rethrow an error it cannot possibly do anything about. Ignoring is correct in this scope. – Triynko Jun 28 '11 at 22:09
  • "it does not matter if the original collection of delegates is modified because the array returned won't change." - I disagree. While the returned list will not change, IT COULD BECOME OBSOLETE WHILE RUNNING, which would introduce an inconsistant state into the application. Until the invocation list is executed atomically, the invocation list from an external viewpoint or the viewpoint of the application as a whole should not change... hence the whole purpose of the lock. – Triynko Jun 28 '11 at 22:13
  • 1
    @Triynko: Regarding your last point: Sure, it could. Your multithreaded event handlers are *required* to ensure that they are able to run correctly to completion even if they are invoked in a scenario where state they need is being destroyed on other threads. If a method cannot make that guarantee *then you should not be using it as an event handler*. Similarly if a method cannot guarantee that it will not throw an unhandled exception *then you should not be using it as an event handler*. – Eric Lippert Jun 28 '11 at 22:45
  • @Eric: That's what I said. The event handler should handle its own errors. Otherwise, it shouldn't be used. If that condition is violated, the application is incorrect, but that should not necessarily crash the application, which would be the only feasible way of "handling" the error. Now... "reporting" the error would not be a bad idea, but I don't consider that "handling" it, because it doesn't actually change what happens... the program continues the same way. So logging and continuing I think is the best fallback. Incorrect operation will become apparent, and logs will indicate why. – Triynko Jun 28 '11 at 23:19
  • Just to be clear, I don't think the error should be "ignored", but I also don't think it should be "handled" or "thrown", because the component cannot possibly know how how to handle such an error, and throwing it could crash the app unnecessarily and is an inefficient means of simply reporting the errors through an output parameter, which is the best course of action. – Triynko Jun 29 '11 at 00:27
1

Juval Löwy provides an implementation of this in his book "Programming .NET components".

http://books.google.com/books?id=m7E4la3JAVcC&lpg=PA129&pg=PA143#v=onepage&q&f=false
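
The approach in that chapter amounts to a defensive publishing helper roughly along these lines (a sketch from memory, not Löwy's exact EventsHelper code):

    using System;

    public static class EventsHelper
    {
        // Invoke each subscriber individually so one failing handler can't stop the rest.
        public static void Fire(Delegate del, params object[] args)
        {
            if (del == null) return;
            foreach (Delegate handler in del.GetInvocationList())
            {
                try { handler.DynamicInvoke(args); }
                catch (Exception ex)
                {
                    Console.Error.WriteLine(ex);   // report it somewhere rather than losing it
                }
            }
        }
    }

The publishing class would then call something like EventsHelper.Fire(MyEvent, this, EventArgs.Empty) instead of invoking the delegate directly.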

bic
  • Thanks bic. One of the few mature replies to my post :) It's also discussed here (http://www.yoda.arachsys.com/csharp/events.html) under Thread-Safe events, as well as here (http://msdn.microsoft.com/en-us/vcsharp/bb508935) under "Error Handling Due to Sequential Invocation". – Triynko Jun 29 '11 at 04:56
  • The Skeet article you linked warns not to use locks the way you do: "Note that you need to assign the current value to a local variable inside the lock (to get the most recent value) and then test it for nullity and execute it outside the lock: holding the lock whilst raising the event is a very bad idea, as you could easily deadlock. " and the msdn article doesn't lock at all. – CodesInChaos Jun 29 '11 at 11:08
  • @CodeInChaos: The article holds locks the way I do, to a certain extent. It holds them in the add/remove methods of the event property. It also holds a lock, like I do, when acquiring the invocation list. Where we differ, is that I hold the lock while the list is being executed. That is the only way to guarantee the logical invocation list isn't changed by another thread while the list is executing. The deadlock potential is a very specific and well-understood condition, requiring the lock-holder to wait on a thread that accesses the SafeEvent's public properties. It's easy to avoid. – Triynko Jun 29 '11 at 17:50
  • @CodeInChaos: The MSDN article is meant to demonstrate another point I made. Eric Lippert wrote a long-winded reply, starting off saying that having an invocation list terminate in the middle is a good thing. I disagree. It's a very bad thing, because notifying some components and not others can make the app unstable. Specifically, if an error is thrown and handled at a higher level... your app still continues, but now there's this unknown condition where half the handlers were called and half were not, which is very unpredictable. By containing the errors, you avoid that extra problem. – Triynko Jun 29 '11 at 17:55
-2

I considered everything everyone said, and arrived at the following code for now:

public class SafeEvent<EventDelegateType> where EventDelegateType:class
{
    private object collection_lock = new object();
    private Delegate event_handlers;

    public SafeEvent()
    {
        if(!typeof(Delegate).IsAssignableFrom( typeof(EventDelegateType) ))
            throw new ArgumentException( "Generic parameter must be a delegate type." );
    }

    public Delegate Handlers
    {
        get
        {
            lock (collection_lock)
                return event_handlers == null ? null : (Delegate)event_handlers.Clone();
        }
    }

    public void AddEventHandler( EventDelegateType handler )
    {
        lock(collection_lock)
            event_handlers = Delegate.Combine( event_handlers, handler as Delegate );
    }

    public void RemoveEventHandler( EventDelegateType handler )
    {
        lock(collection_lock)
            event_handlers = Delegate.Remove( event_handlers, handler as Delegate );
    }

    public void Raise( object[] args, out List<Exception> errors )
    {
        lock (collection_lock)
        {
            errors = null;
            if (event_handlers == null)
                return; // no handlers registered yet
            foreach (Delegate handler in event_handlers.GetInvocationList())
            {
                try {handler.DynamicInvoke( args );}
                catch (Exception err)
                {
                    if (errors == null)
                        errors = new List<Exception>();
                    errors.Add( err );
                }
            }
        }
    }
}

This bypasses the compiler's special treatment of Delegate as an invalid base-class constraint by checking the type at run time instead. Also, events cannot be typed as Delegate.

Here is how a SafeEvent would be used to create an event in a class:

private SafeEvent<SomeEventHandlerType> a_safe_event = new SafeEvent<SomeEventHandlerType>();
public event SomeEventHandlerType MyEvent
{
    add {a_safe_event.AddEventHandler( value );}
    remove {a_safe_event.RemoveEventHandler( value );}
}

And here is how the event would be raised and errors handled:

List<Exception> event_handler_errors;
a_safe_event.Raise( new object[] {event_type, disk}, out event_handler_errors );
//Report errors however you want; they should not have occurred; examine logged errors and fix your broken handlers!
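
Continuing that snippet, the caller could, for example, log each collected error or rethrow them together as a single AggregateException once every subscriber has been notified (a sketch, not part of the class itself):

if (event_handler_errors != null)
{
    foreach (Exception err in event_handler_errors)
        Console.Error.WriteLine( err );            // or write them to your log of choice

    // or, if the caller decides they are fatal after all:
    // throw new AggregateException( event_handler_errors );
}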

To summarize, this component's job is to publish events to a list of subscribers in an atomic manner (i.e. the event will not be re-raised and the invocation list will not be changed while the invocation list is executing). Deadlock is possible but easily avoided by controlling access to the SafeEvent, because a handler would have to spawn a thread that calls one of the public methods of the SafeEvent and then wait on that thread. In any other scenario, other threads would simply block until the lock-owning thread releases the lock. Also, while I do not believe in ignoring errors at all, I also do not believe that this component is in any place to handle subscriber errors intelligently nor make a judgement call about the severity of such errors, so rather than throw them and risk crashing the application, it reports errors to the caller of "Raise", since the caller is likely to be in a better position to handle such errors. With that said, this component provides a kind of stability to events that's lacking in the C# event system.

I think what people are worried about is that letting other subscribers run after an error has occurred means they are running in an unstable context. While that might be true, that means the application is in fact written incorrectly any way you look at it. Crashing is no better a solution than allowing the code to run, because allowing the code to run will allow errors to be reported, and will allow the full effects of the error to be manifest, and this, in turn, will assist engineers to more quickly and thoroughly understand the nature of the error and FIX THEIR CODE FASTER.

Triynko
  • 1
    While IMO your error handling is now acceptable, you still have the dangerous `lock`ing pattern Jon Skeet, Eric Lippert and myself warned you about. I would hardly call the problem `unfounded`. – CodesInChaos Jun 29 '11 at 11:14
  • I'd put the delegate type check into the constructor: `if(!typeof(Delegate).IsAssignableFrom(typeof(EventDelegateType)))throw new ArgumentException("Generic parameter must be a delegate type");` – CodesInChaos Jun 29 '11 at 11:16
  • I added the run-time type check for Delegate base class to the constructor as suggested. The only point of contention that remains is holding the lock while executing the invocation list. Again, it's a very specific case where the lock-holder is able to reference and wait on a separate thread which in turn tries to access properties or methods of the SafeEvent. In my application, this is easily avoided, and it's more important that when errors ARE handled at a higher level, there's a guarantee some other thread didn't change the list, incorrectly expecting removed methods to not execute. – Triynko Jun 29 '11 at 18:08
  • Also, I removed the word "unfounded" relating to fears of deadload. I wasn't saying it's impossible, I was saying that deadlock is possible under a specific condition, but it's easily avoided, so there's nothing to fear. So the fear is unfounded, not the suspicion of deadlock itself. I also added thread safety to the `Handlers` property's get-accessor, which now clones the delegate while locked. – Triynko Jun 29 '11 at 18:15
  • 1
    just write a minidump (or a full dump) on the error and hey presto, you have the ultimate way to track down the problem. If you expect users to debug weird behaviour in their apps using some psychic power, then heaven help me if I ever use one of yours. – ShuggyCoUk Jun 30 '11 at 08:16
  • @ShuggyCoUk: SafeEvent is a component, not an app. Why is this so hard for you to understand. SafeEvent can do one of two things. Option 1: It can catch and aggregate errors thrown from individual handlers, ensuring all handlers/components are properly notified of an event and then throw an aggregate error for handling. Option 2: It can allow a single bad handler to interrupt the notification of other components of an event by allowing an error to be raised for handling, but now the app is unstable because only some of the components were notified. – Triynko Jul 12 '11 at 20:39
  • ..continued. The point is this: in either case, the error is propagated to a higher level for handling. The difference is that in option 1, the app is more stable, because all event handlers are notified (i.e. the event is dispatched in a logically atomic manner). If the error CAN be handled, having the event handler chain stay intact is the best option, because you avoid the unknown condition where some handlers are called and some are not depending on which handler throws the error. Handler's shouldn't throw errors, but IF they do... this component ensures it does its own job completely. – Triynko Jul 12 '11 at 20:41
  • ..continued. So when I say "because allowing the code to run will allow errors to be reported," I mean that reducing the impact of an error from one handler by isolating it will increase the stability of the application as a whole, and will allow errors to be reported or handled without crashing the app or leaving a higher-level handler unable to handle a simple error because the app was left in an unstable state by a delegate that does a half-assed job at notifying all registered handlers. SafeEvent ensures ALL handlers are notified, and gives higher-level error handlers a fighting chance. – Triynko Jul 12 '11 at 20:51
  • I'm clearly not going to convince you, nor are all the others. I would refuse to hire you on the basis of your design rationale, you would likely not hire me. Further discussion is probably unproductive – ShuggyCoUk Jul 13 '11 at 07:23