In Delphi 7, creating an object looked like this:

A := TTest.Create;
try
  ...
finally
  A.Free;
end;

However, in a blog article Marco Cantù says that at Embarcadero they use

A1 := nil;
A2 := nil;
try
  A1 := TTest.Create;
  A2 := TTest.Create;
  ...
finally
  A2.Free;
  A1.Free;
end;

Was something changed in the logic of the try..finally block during version upgrades? The second example looks like a typical error to me!

Chopin
  • In the second example you create *two* objects, not one! Still, I prefer nested try..finally blocks in that case. – Andreas Rejbrand Feb 05 '18 at 12:20
  • The second code you posted is about creating more than one object. If you create only one object your first code example is fine. As Marco Cantu states in his blog post: "The issue with this code is that in case the creation of A2 fails (and there could be many reasons), the A1 object would remain in memory." – Philipp H. Feb 05 '18 at 12:21
  • the code is correct because of the pre-assignment to nil: if the constructor fails, the assignment never happens and Free won't be called since the variable is nil. – whosrdaddy Feb 05 '18 at 12:23
  • I know it should not happen, but there can be a leak if destructor `A2.Free;` raises an exception. I'm for option 1, nested blocks. – Victoria Feb 05 '18 at 12:28
  • The funny thing is that Marco says they are going to use the code presented by the OP for their class library code. *shrug* – whosrdaddy Feb 05 '18 at 12:39
  • No. Nothing changed at all. Read the article more closely. – David Heffernan Feb 05 '18 at 13:27
  • There are other alternatives, too: [Avoiding nested try…finally blocks in Delphi](https://stackoverflow.com/q/18497085/859646) – JRL Feb 05 '18 at 14:03
  • @DavidHeffernan: Marco says "I personally mostly used #1 in my books, but at Embarcadero we tend to prefer the cleaner and faster ones (that is, #2 or #3) for library code". At least he's considering it? :) – whosrdaddy Feb 05 '18 at 14:34
  • @whosrdaddy Not so much version 1 as presented here, but a variant with nested try for the lifetime of the second object – David Heffernan Feb 05 '18 at 15:02
  • @Victoria That approach doesn't fix the leak – David Heffernan Feb 05 '18 at 15:13
  • @David, if `A2` destructor raises an exception, it will leak not only itself, but also `A1` because its destructor won't get invoked. Nested block will fix at least that. And I'm not saying about fix. It's my preferable way of _coding_. Destructor should never raise an exception. That's the most important. – Victoria Feb 05 '18 at 15:30
  • @Victoria Now you have naff ugly code that still leaks. If you can't fix everything, it's better to have clean code than to fix half of the problem and have unreadable ugly code. And the issue is far bigger than simple leaking of heap memory. Resources, file handles, etc. won't be returned to the system. Leading to consequential failures, data corruption and lord knows what. The best thing to do if a destructor fails to run to completion is to forcibly terminate the process. – David Heffernan Feb 05 '18 at 15:38

2 Answers


Both are acceptable patterns. And this is not something that's changed.

First let's cover the one you're familiar with and why it's correct.

{ Note that here as a local variable, A may be non-nil, but
  still not refer to a valid object. }
A := TTest.Create;
try
  { Enter try/finally if and only if Create succeeds. }
finally
  { We are guaranteed that A was created. }
  A.Free;
end;

In the above: if the assignment to A were placed after try, an exception raised by Create would jump straight to the finally block while A still holds an undefined value. The finally block would then attempt to Free an object at an undefined location in memory, which can lead to an Access Violation or unstable behaviour. Note that in that case the compiler would also warn on A.Free; that A might be uninitialised, precisely because execution can reach the finally block before A is assigned if the constructor raises an exception.
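
For contrast, this is the broken variant that paragraph describes, with the assignment moved inside the try block (a sketch of what *not* to do):

```delphi
{ BROKEN: do not use. }
try
  A := TTest.Create; { if Create raises, execution jumps to finally... }
  ...
finally
  A.Free; { ...with A uninitialised: Free on a garbage pointer, likely an AV }
end;
```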


So why is Marco's code acceptable?

A1 := nil; { Guarantees A1 initialised *before* try }
A2 := nil; { Guarantees A2 initialised *before* try }
try
  A1 := TTest.Create;
  A2 := TTest.Create;
  ...
finally
  { If either Create fails, A2 is guaranteed to be nil.
    And Free is safe from a nil reference. }
  A2.Free;
  { Similarly, if A1's Create fails, Free is still safe.
    And if A1's create succeeds, but A2's fails: A1 refers to a valid
    object and can be destroyed. }
  A1.Free;
end;

Note that Marco's code relies on a subtlety of the behaviour of Free: unlike Destroy, Free is safe to call on a nil reference, in which case it simply does nothing.
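
The key subtlety is that Free may safely be called on a nil reference. Simplified, TObject.Free in the System unit amounts to:

```delphi
procedure TObject.Free;
begin
  { The nil check makes Free a no-op on a nil reference;
    calling the virtual Destroy on nil would fail instead. }
  if Self <> nil then
    Destroy;
end;
```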


The purpose behind the technique is to avoid nested try..finally blocks, which can get messy. E.g.:

A1 := TTest.Create;
try
  A2 := TTest.Create;
  try
    {...}
  finally
    A2.Free;
  end;
finally
  A1.Free;
end;

Marco's code reduces nesting levels, but requires 'pre-initialisation' of the local references.


Victoria has raised a caveat that if the destructor for A2 fails in Marco's code, then A1 will not be Freed. This would be a certain memory leak. However, I'd argue that as soon as any destructor fails:

  • it has not completed successfully;
  • so will probably already leak at least memory or resources;
  • and also, the overall integrity of your system falls into doubt. If 'simple cleanup' has failed: why, what went wrong, what future problems is this going to cause?

So the best advice I can offer is: take care to ensure the correctness of your destructors.

Disillusioned
  • As Victoria pointed out, the only caveat is that your destructor may never throw an AV or else you will have a leak, maybe add that to your answer... – whosrdaddy Feb 05 '18 at 13:00
  • @whosrdaddy I agree it's a caveat; but if destructors are failing, you probably have more important things to worry about. It's a good reason to ensure destructors only concern themselves with the single task of destruction. – Disillusioned Feb 05 '18 at 13:14
  • The best practice is to assume that destructors never raise exceptions. Since you can't realistically handle that scenario, ensure it never happens, and code accordingly. – David Heffernan Feb 05 '18 at 15:03
  • Destructors themselves may not raise an exception, however it is possible for a destructor to *cause* an exception, which is why scenario 2 and 3 should always be used with caution – Dave Nottage Feb 05 '18 at 19:28
  • @DaveNottage I'm not sure exactly what you mean by "Destructors themselves may not raise an exception". Whether `raise ;` is called inside **destructor** or in a method it calls is hardly relevant. It's still an exception _within the context of_ the destructor. And in that regard: any exception, in any destructor (context), in _any **try..finally** pattern_ means 1 or more destruction steps ***failed***. Therefore some cleanup will have failed. Meaning some resource (memory, handle, lock etc.) is not released. No **try..finally** can guarantee perfect safety. – Disillusioned Feb 06 '18 at 13:40
  • @DaveNottage Incorrect. Destructors can raise exceptions. – David Heffernan Feb 06 '18 at 13:52
  • I didn’t say they cannot raise an exception – Dave Nottage Feb 06 '18 at 18:07
  • @CraigYoung I meant that they might not specifically raise an exception (by calling raise), but they might execute code that *causes* an exception eg one thrown by the OS. Unless you know for certain that a destructor will never *cause* an exception, the scenario described cannot be relied upon to be safe – Dave Nottage Feb 06 '18 at 18:11
  • @DaveNottage You still don't explain why the distinction is relevant? And you still seem to miss the point that there is _absolutely **no** try..finally pattern_ (including nested) that would be safe. E.g. `destructor Destroy; begin for Obj in List do Obj.Free; List.Free; inherited; end;` If an exception is raised trying to destroy the first object in `List`, then: the rest of the objects in `List` leak; `List` itself leaks; and the rest of the memory/handles/locks/resources in the destructor's class _also leak_. Your app is likely to be unstable; your safest bet is to restart. – Disillusioned Feb 06 '18 at 22:14
  • The point is there is no pattern for exceptions in destructors that is 100% safe. So code to protect against exceptions in other areas of code; ***obviously***. But it's not really worth the effort to guard against exceptions in destructors. You just make you code messy and can never solve the problem. So: don't spend time on it; leave your code as clean as possible; and rather spend more time reducing mistakes that might cause an exception in a destructor. – Disillusioned Feb 06 '18 at 22:21
  • In a finally block, the first statement is guaranteed to start executing, regardless of what follows. If the first statement causes an exception, the next statement in the finally block is not executed, ergo if freeing the first object causes an exception, the next statement (freeing the other object) in the finally block is not executed, thus it is a *less* safe scenario – Dave Nottage Feb 06 '18 at 22:21
  • @DaveNottage Who's to say for certain it's safer? What happens if the inner object is supposed to set an event that the outer object waits on? You could block the thread by going to the effort to guarantee it gets destroyed. This is why destructors failing can destabilise your app. It's not worth worrying about how to recover - rather kill the app. (Side note, I've read about languages that completely disallow exception within an exception and would immediately halt the app because the scenario is unrecoverable. Failing destructors is somewhat similar.) – Disillusioned Feb 06 '18 at 22:32

There is one important addition to Craig's answer, and an explanation of why using a single try..finally block is also fine.

A1 := nil;
A2 := nil;
try
  A1 := TTest.Create;
  A2 := TTest.Create;
  ...
finally
  A2.Free;
  A1.Free;
end;

A potential problem with the above code is that if the A2 destructor raises or causes an exception, the A1 destructor will not be called.

From that standpoint, the above code is broken. But the whole of Delphi memory management is built on the premise that destructors should never, ever raise or cause an exception. In other words, if there is code in a destructor that can cause an exception, the destructor must handle that exception on site and not allow it to escape.

What is the problem with destructors raising exceptions?

Raising an exception in a destructor will break the destructor call chain. Depending on the code, inherited destructors may not be called, and they will not be able to perform proper cleanup, resulting in memory or resource leaks.

But the even more important fact is that if even a single destructor causes an unhandled exception, the FreeInstance method that releases the object instance's heap memory will not be called, and you will leak that instance's memory.

That means the following code will leak the TTest instance's heap memory if A.Free contains code that causes an exception.

A := TTest.Create;
try
  ...
finally
  A.Free;
end;
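
The leak can be made visible with ReportMemoryLeaksOnShutdown. Here is a minimal console sketch, using a hypothetical TLeaky class whose destructor deliberately raises:

```delphi
program LeakDemo;

{$APPTYPE CONSOLE}

uses
  System.SysUtils;

type
  TLeaky = class
    destructor Destroy; override;
  end;

destructor TLeaky.Destroy;
begin
  raise Exception.Create('cleanup failed');
  inherited; // never reached
end;

var
  A: TLeaky;
begin
  ReportMemoryLeaksOnShutdown := True;
  A := TLeaky.Create;
  try
    A.Free; // the exception escapes Destroy, so FreeInstance never runs
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
  // on shutdown, the memory manager reports the leaked TLeaky instance
end.
```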

The same is true for nested try...finally blocks. If any of the destructors causes an unhandled exception, memory will be leaked.

While nested try...finally blocks will leak less memory than a single try...finally block, they will still leak.

A1 := TTest.Create;
try
  A2 := TTest.Create;
  try
    ...
  finally
    A2.Free;
  end;
finally
  A1.Free;
end;

You can use as many try...finally blocks as you wish, or you can even use interfaces and automatic memory management, but a destructor that raises (or causes) an exception will always leak some memory. Period.


How about BeforeDestruction?

The same rule that applies to destructors applies to the BeforeDestruction method. An unhandled exception in BeforeDestruction will break the object release process and the destructor chain; FreeInstance will not be called, resulting in a memory leak.
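
The same pattern used for destructors applies here; a minimal sketch (hypothetical TFoo, assuming some notification code that can raise):

```delphi
procedure TFoo.BeforeDestruction;
begin
  try
    // code that can raise an exception, e.g. notifying observers
  except
    // handle it here; it must not escape, or FreeInstance will be skipped
  end;
  inherited;
end;
```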


Of course, properly handling exceptions inside the BeforeDestruction method or a destructor implies that you must make sure that all cleanup code that absolutely must be executed, including calls to inherited methods, is in fact executed during the exception handling process.


We can certainly argue about how broken some code is; the point is that it is broken. All the above examples will cause a memory leak if any of the destructors causes an unhandled exception, and the only way such code can be properly fixed is by fixing the broken destructors.


What exactly is handling exceptions?

Handling exceptions is done within a try...except block. Any exception that is caught by such a block and not re-raised is handled. try...finally blocks, on the other hand, are used for cleanup (executing code that absolutely must run even in case of an exception), not for handling exceptions.

For instance, if you have code in BeforeDestruction or a destructor doing string-to-integer conversion, that code can raise EConvertError. You can catch that exception with a try...except block and handle it there, not letting it escape and cause havoc.

destructor TFoo.Destroy;
var
  x: integer;
begin
  try
    x := StrToInt('');
  except
    on E: EConvertError do writeln(E.ClassName + ' handled');
  end;
  inherited;
end;

If there is cleanup code you must perform, you can also use a try...finally block inside and make sure that any cleanup code executes properly.

destructor TFoo.Destroy;
var
  x: integer;
begin
  try
    try
      x := StrToInt('');
    finally
      writeln('cleanup');
    end;
  except
    on E: EConvertError do writeln(E.ClassName + ' handled');
  end;
  inherited;
end;

Another way of handling exceptions is preventing them in the first place. A perfect example is calling Free on your inner fields instead of calling Destroy. That way destructors can handle partially constructed instances and perform proper cleanup: if FBar is nil, FBar.Free will do nothing, but FBar.Destroy would raise an exception.

destructor TFoo.Destroy;
begin
  FBar.Free;
  inherited;
end;

How not to handle exceptions during destruction process

Don't go around writing try...except blocks in every destructor you have ever written. Not every single line of code can cause an exception, nor should absolutely every exception everywhere be swallowed.

Exceptions are exceptional events that may occur in some code under specific circumstances, but that does not mean you cannot recognize code that may cause an exception and protect it.

Also, wrapping all your code in try...except blocks will not keep you safe. You have to deal with exceptions within each destructor.

For instance, if the FBar destructor can cause an exception, you must handle that exception within the TBar destructor. Wrapping it in an exception handler inside the TFoo destructor will leak the FBar instance, because its flawed destructor will not release FBar's heap memory.

destructor TFoo.Destroy;
begin
  // WRONG AS THIS LEAKS FBar instance
  try
    FBar.Free;
  except
    ...
  end;
  inherited;
end;

This is the proper way to handle an exception that may be raised in the TBar destructor:

destructor TBar.Destroy;
begin
  try
    // code that can raise an exception
  except
    ...
  end;
  inherited;
end;

destructor TFoo.Destroy;
begin
  FBar.Free;
  inherited;
end;

Dalija Prasnikar
  • As per my comment in the other answer, it's possible for a destructor to *cause* an exception (as opposed to raising one). This scenario *does not* cater for that – Dave Nottage Feb 05 '18 at 19:30
  • @DaveNottage Causing or raising exception is basically the same as far as memory management is concerned. I will edit the answer to make that more clear. Under no circumstances should exception escape destructor. No matter what kind of code you write in such scenario you will always create a leak. – Dalija Prasnikar Feb 05 '18 at 19:47
  • I know it’s the same thing: my point is about what is or is not “safe”. This scenario is less safe than nested blocks – Dave Nottage Feb 05 '18 at 19:49
  • @DaveNottage If it is broken does it matter how much? – Dalija Prasnikar Feb 05 '18 at 20:14
  • @DaveNottage No, a destructor can raise an exception. – David Heffernan Feb 06 '18 at 13:53
  • @Dalija A minor point to note. Even _handled_ exceptions in destructors could cause leaks if any 'destructor code' is skipped. – Disillusioned Feb 07 '18 at 02:24
  • @CraigYoung That goes without saying... if you don't properly handle any exception (if there is code that needs to run to perform cleanup and exception breaks that) regardless whether it is in destructor or not, you will have leaks. Of course, there is yet another thing that can break object release and that is unhandled exception in BeforeDestruction. – Dalija Prasnikar Feb 07 '18 at 10:28
  • @Dalija You seem to be misunderstanding my point. You claimed: "will cause memory leak if any of the destructors causes an **unhandled** exception". You've edited your question to make the same claim of `BeforeDestruction`: "**Unhandled** exception in BeforeDestruction will break object release process". You've also said: "**proper handling** of any exceptions inside BeforeDestruction". You keep implying (and explicitly claiming) that it is **possible** to in some way _handle_ exceptions in destructors. ***This is simply not true.*** No matter how you try handle exceptions you'll still leak. – Disillusioned Feb 09 '18 at 02:45
  • @CraigYoung I am confused. Of course you **can handle** exception in destructor. Just like in any other code. We all know what handling exceptions mean... catching it with `try...except` block and not letting it escape. Or preventing the exception in the first place. If you know you can have division by 0 you can either check for 0 before division or you can let it raise exception, you just have to be prepared for such code. Calling `.Free` in destructors instead of `Destroy` is exactly such code. You prevent potential exception if you are dealing with partially initialized object. – Dalija Prasnikar Feb 09 '18 at 08:14