3

What negative/undefined behaviour could arise from calling a save function (à la boost-serialize) within a class's destructor?

aaronman
  • 18,343
  • 7
  • 63
  • 78
Ælex
  • 14,432
  • 20
  • 88
  • 129
  • Are you trying to serialize the object being destroyed, or something else, BTW? – aaronman Nov 08 '13 at 00:09
  • @aaronman Oh no, just asking out of curiosity. To me it seemed for a moment like the ideal time to save, just when the object is about to die. – Ælex Nov 08 '13 at 00:10
  • OK, because if it's not the actual object getting destroyed, you might be able to set a flag or something that alerts another function to serialize (you wouldn't want to call the function directly) – aaronman Nov 08 '13 at 00:12

4 Answers

7

You have two concerns, one of which is a consequence of the other:

1) You should not allow any exception to escape the destructor. If you do, and if the destructor is being called as part of stack unwinding, then the runtime will terminate() your program. This is not undefined behaviour, but it's pretty negative.

Because of this (and also of course because destructors don't return a value):

2) There's no reasonable way for your destructor to indicate success or failure ("reasonable" meaning, without building some kind of separate error-reporting system). Since the user of your class might want to know whether the save happened or not, preferably with a sensible API to do so, this means that destructors can only save data on a "best effort" basis. If the save fails then the object still gets destroyed, and so presumably its data is lost.

There is a strategy for such situations, used for example by file streams. It works like this:

  • have a flush() (or in your case save()) function that saves the data
  • call this function from the destructor if the object has not already been saved/flushed (or more likely: call it unconditionally but have the function itself know whether it needs to do any real work or not). In the case of file streams this happens via close(). Catch any exceptions it can throw and ignore any errors.

That way, users who need to know whether the save succeeded or not call save() to find out. Users who don't care (or who wouldn't mind it succeeding if possible in the case that an exception is thrown and the object is destroyed as part of stack unwinding) can let the destructor try.

That is, your destructor can attempt to do something that might fail, as a last-ditch effort, but you should additionally provide a means for users to do that same thing "properly", in a way that informs them of success or failure.
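
For concreteness, here is a minimal sketch of that pattern. The `Document` class, its `save()` member, and the `dirty_` flag are made-up names for illustration (this is not Boost.Serialization's API), but the shape is the same as what file streams do with flush()/close():

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <utility>

// Hypothetical class demonstrating "explicit save() plus best-effort destructor".
class Document {
public:
    explicit Document(std::string path) : path_(std::move(path)) {}

    // The "proper" way to persist: callers see failures as exceptions.
    void save() {
        std::ofstream out(path_);
        if (!out) {
            throw std::runtime_error("cannot open " + path_);
        }
        out << data_;
        if (!out) {
            throw std::runtime_error("write failed for " + path_);
        }
        dirty_ = false;
    }

    void set_data(std::string data) {
        data_ = std::move(data);
        dirty_ = true;
    }

    // Last-ditch effort: try to save unfinished work, but never let an
    // exception escape the destructor.
    ~Document() {
        if (dirty_) {
            try {
                save();
            } catch (...) {
                // Nothing sensible to report from here; the data may be lost.
            }
        }
    }

private:
    std::string path_;
    std::string data_;
    bool dirty_ = false;
};
```

Callers who need to know whether the data was persisted call save() themselves and handle any exception; callers who don't care can simply let the destructor make its best effort.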

And yes, this does incidentally mean that using streams without flushing them and checking the stream state for failure is not using them "properly", because you have no way of knowing whether the data was ever written or not. But there are situations where that's good enough, and in the same kinds of situation it might be good enough for your class to save in its destructor.

Steve Jessop
  • 273,490
  • 39
  • 460
  • 699
3

The issue is that boost-serialize can throw an exception. That means that if the destructor is being called because an exception is already propagating and the stack is being unwound, your application will terminate if the destructor of the object throws another exception.

So to summarize, you only ever want one exception propagating at a time. If you end up with more than one, your application will close, which defeats the purpose of exceptions.
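
To make that failure mode concrete, here is a small sketch; `serialize_somehow()` and `Saver` are hypothetical stand-ins for whatever Boost.Serialization call you would actually make. Without the try/catch inside the destructor, the second exception thrown during stack unwinding would call std::terminate():

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

// Stand-in for a serialization call that may throw (e.g. on I/O failure).
void serialize_somehow() {
    throw std::runtime_error("disk full");
}

class Saver {
public:
    ~Saver() {
        try {
            serialize_somehow();
        } catch (const std::exception& e) {
            // Swallow the error: if this destructor is running because another
            // exception is already unwinding the stack, letting a second
            // exception escape would call std::terminate().
            std::cerr << "save failed in destructor: " << e.what() << '\n';
        } catch (...) {
            std::cerr << "save failed in destructor\n";
        }
    }
};

int main() {
    try {
        Saver s;
        throw std::runtime_error("something else went wrong");  // unwinding begins
    } catch (const std::exception& e) {
        // Reached normally; without the try/catch in ~Saver we would have terminated.
        std::cerr << "caught: " << e.what() << '\n';
    }
}
```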

Caesar
  • 9,483
  • 8
  • 40
  • 66
2

It is a bad idea.

  1. A destructor should never throw; IO operations are very likely to throw, since whether IO succeeds or not is basically out of your control.
  2. To me at least it's extremely unintuitive:
    a. for one, it ensures that every object of that type will be serialized (unless the destructor has checks to prevent that)
    b. destructors have a very clear purpose: to clean up. Storing data is basically the opposite of cleaning up.

So I just want to make one more point: what do you have to gain by serializing in the destructor?


You know the serialization will run even if there is an exception, if you are making use of RAII. But this isn't much of a benefit, because even though the destructor will run, you can't guarantee the serialization will complete, since it can throw (in this case at least). Also, you lose a lot of the ability to properly handle a failure.

aaronman
  • 18,343
  • 7
  • 63
  • 78
  • If you hadn't said bluntly "It is a bad idea", I would have upvoted. – engf-010 Nov 08 '13 at 00:34
  • @Edwin I think it is a bad idea; even if you handle the throwing issue, it ends up being pretty unintuitive to store to a file from a destructor – aaronman Nov 08 '13 at 00:35
  • Then your intuition is wrong (intuitions mostly are, unless you're a woman of course, lol). – engf-010 Nov 08 '13 at 00:38
  • I don't think the intuition is wrong, but it doesn't cover all situations. As I mention in my answer: standard streams save data in their destructors, so your C++ intuition should allow for that. The intuition is right that you shouldn't make it the *purpose* of the destructor to save the data. It's just that if the object has unfinished business when it's destroyed, then this is its last chance to finish it. – Steve Jessop Nov 08 '13 at 00:43
  • @SteveJessop the OP has indicated this is a hypothetical situation, so it's difficult to form an opinion on the right way to solve this without something to go on. I would bet that if he produced an example, there would be a better way to handle it than serializing in the destructor – aaronman Nov 08 '13 at 00:45
  • True. I'm cautious around hypothetical situations, not to rule out anything that is sometimes applicable. But there's another valid approach which is to lay down rules to novices, and if someone is smart enough to find a case where the rule doesn't apply then they can break it. I just prefer giving people too much information. – Steve Jessop Nov 08 '13 at 00:47
  • @SteveJessop either way your answer clearly has some solutions I wouldn't have thought of so good job – aaronman Nov 08 '13 at 00:48
1

No, it's not a bad idea, but it isn't a terribly good idea either! Sometimes, though, it's the right thing to do.

As long as you protect your destructor from throwing exceptions, there is nothing against it.

engf-010
  • 3,980
  • 1
  • 14
  • 25
  • actually it is a pretty bad idea, read the other answers here – aaronman Nov 08 '13 at 00:23
  • @aaronman: so far you're the only one who says it's bad. The other answer so far doesn't state it's good or bad. I think you're wrong. – engf-010 Nov 08 '13 at 00:26
  • Suit yourself; IMO Caesar's answer definitely doesn't recommend doing it – aaronman Nov 08 '13 at 00:29
  • @aaronman: recommendation and stating it's a bad idea are two completely different things. And I also didn't recommend it, but sometimes it's the only/right thing to do. – engf-010 Nov 08 '13 at 00:32