
In the games development world I often see classes with separate initialize() and uninitialize() or shutdown() methods. This includes not only multiple tutorials, but also long-established, large real-world projects, like some modern game engines. I recently saw a class in CryEngine 3 which not only uses a shutdown() method, but goes as far as calling this->~Foo() from it, which, based on everything I know about C++, can't really be considered good design.

While I can see some of the benefits of two-step initialization, and there are many discussions about it, I can't understand the reasoning behind two-step destruction. Why leave the destructor empty and use a separate shutdown() method, instead of the facility the C++ language already provides in the form of the destructor? Why not go even further and, using modern C++, put all the resources held by an object into smart pointers, so we don't have to worry about releasing them manually?

Is two-step destruction an outdated design based on principles that no longer apply, or are there valid reasons to use it over the standard ways of controlling object lifetime?
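To illustrate the alternative I mean: a minimal RAII sketch where a smart-pointer member makes any hand-written Shutdown() unnecessary (Renderer and Texture are made-up stand-ins, not types from any real engine; the counter only exists so the cleanup is observable):

```cpp
#include <memory>

// Toy counter standing in for "is the GPU resource still alive?"
static int live_textures = 0;

struct Texture {
    Texture()  { ++live_textures; }   // pretend this acquires a GPU resource
    ~Texture() { --live_textures; }   // ...and this releases it
};

class Renderer {
public:
    Renderer() : texture_(std::make_unique<Texture>()) {}
    // No Shutdown() method and no hand-written destructor: the
    // unique_ptr member releases the Texture automatically when
    // the Renderer goes out of scope.
private:
    std::unique_ptr<Texture> texture_;
};
```

When a Renderer is destroyed, its Texture is released with no explicit call anywhere.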

jaho
  • Well, an example scenario where a separate `shutdown()` method solves some problem which can't be solved by calling a destructor would be enough for an answer for me. Also explaining the reasoning behind some design pattern for someone who understands it shouldn't be that difficult. At least some relevant links would be great. – jaho Mar 09 '14 at 21:49
  • I'm _guessing_ it's split into two-steps to separate object creation/destruction from memory management. That is, the objects can have memory allocated without being initialized, and can be destructed without freeing the memory. I imagine level loading, for example, could be sped up with this method. Rather than create all needed objects when entering a level, instead just allocate a bunch of memory, and create objects as needed. – Fault Mar 09 '14 at 23:22
  • As for explicit destructor calls, Microsoft has a nice, concise summary of it: http://msdn.microsoft.com/en-us/library/35xa3368.aspx – Fault Mar 09 '14 at 23:23
  • @Fault yes it makes sense, but then I guess the same could be achieved with some kind of a `reset()` or `reinitialize()` function, plus perhaps a private `shutdown()`, which has the benefit of leaving the standard way of destroying objects intact. – jaho Mar 09 '14 at 23:42
  • Except `reset()` or `reinitialize()` would only allow for that one object to exist in that memory segment. With the explicit destructor call, they can destroy the object, and use the memory for something else. It's low level optimization. – Fault Mar 09 '14 at 23:55
  • Possibly informative and related question: http://stackoverflow.com/questions/130117/throwing-exceptions-out-of-a-destructor – AturSams Mar 12 '14 at 19:20

3 Answers

3

If you don't want to read, the gist is that you need exceptions to return errors from ctors and exceptions are bad.

As Trevor and others have hinted at, there are a number of reasons for this practice. You've brought up a specific example here though, so let's address that.

The tutorial deals with a class GraphicsClass (the name sure doesn't inspire confidence) which contains these definitions:

class GraphicsClass
{
public:
    GraphicsClass();
    ~GraphicsClass();
    bool Initialize(int, int, HWND);
    void Shutdown();

private:
    D3DClass* m_D3D;  // used by Initialize() below
};

So it has ctor, dtor, and Initialize/Shutdown. Why not condense the latter into the former? The implementation gives a few clues:

bool GraphicsClass::Initialize(int screenWidth, int screenHeight, HWND hwnd)
{
    bool result;

    // Create the Direct3D object.
    m_D3D = new D3DClass;
    if(!m_D3D)
    {
        return false;
    }

    // Initialize the Direct3D object.
    result = m_D3D->Initialize(screenWidth, screenHeight, VSYNC_ENABLED, hwnd, FULL_SCREEN, SCREEN_DEPTH, SCREEN_NEAR);
    if(!result)
    {
        MessageBox(hwnd, L"Could not initialize Direct3D", L"Error", MB_OK);
        return false;
    }

    return true;
}

Ok sure, checking whether new D3DClass fails is pointless (it can only fail if we run out of memory and new has been set up not to throw bad_alloc)*. Checking whether D3DClass::Initialize() fails may not be pointless, though. As its signature hints, it tries to initialise resources related to the graphics hardware, and that can fail under perfectly normal circumstances - maybe the requested resolution is too high, or the resource is in use. We'd want to handle that gracefully, and we can't return errors from a ctor; we can only throw exceptions.

Which of course raises the question: why don't we throw exceptions? Because C++ exceptions carry costs - code size, and unwinding whose timing is hard to predict - that many game developers consider unacceptable; opinions about them are very strong, especially in game development. Also, you can't safely throw from a dtor, so have fun trying to, say, put network resource termination there. Many, if not most, C++ games have been built with exceptions turned off.
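For what it's worth, with exceptions off you can still avoid a public Initialize() by funneling the fallible work through a factory that reports failure via its return value; a sketch (the Graphics class and its failure condition are made up for illustration):

```cpp
#include <memory>

// Hypothetical sketch: with exceptions disabled, a static factory can
// report failure by returning an empty pointer instead of throwing.
class Graphics {
public:
    static std::unique_ptr<Graphics> Create(int width, int height) {
        if (width <= 0 || height <= 0)  // stand-in for "device init failed"
            return nullptr;
        return std::unique_ptr<Graphics>(new Graphics(width, height));
    }

private:
    Graphics(int /*width*/, int /*height*/) {}  // only Create() can construct
};
```

Callers check the returned pointer once, and an object that exists is always fully initialised - there is no half-constructed state to guard against.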

That's the main reason anyway; I can't discount other, sometimes sillier, reasons though, such as a C legacy (where there are no ctors/dtors), or an architecture where pairs of modules A and B hold references to each other. Of course, remember that game development's #1 priority is to ship games, not to create perfectly robust and maintainable architectures, so you sometimes see silly practices like this.

I hear that the C++ committee is deeply aware of the problems that exceptions have, but iirc the latest is that it's been put in the "too hard" bucket, so you'll see more of this in games for many years to come.

*- Aha! So checking whether new D3DClass succeeded wasn't pointless after all: with exceptions disabled, this is the only way to detect a failed memory allocation, among other things.
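In standard C++ (without globally disabling exceptions), the portable way to get that null-on-failure behaviour is the nothrow form of new (D3DClassStub is a stand-in for the tutorial's D3DClass):

```cpp
#include <new>

struct D3DClassStub {};  // stand-in for the tutorial's D3DClass

D3DClassStub* MakeD3D() {
    // Nothrow new returns nullptr on allocation failure rather than
    // throwing std::bad_alloc, so an if(!p) check is meaningful here.
    D3DClassStub* p = new (std::nothrow) D3DClassStub;
    return p;  // caller must check for nullptr
}
```

With plain (throwing) new, the tutorial's `if(!m_D3D)` branch is dead code on a conforming implementation.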

congusbongus
  • This is a very good explanation on why to use two-step *initialization*, but in the code provided you can see that all that `Shutdown()` method does is releasing some COM objects and perhaps deleting some dynamically allocated resources. It also doesn't return any value for us to react on, in case something's gone wrong. All this code could be easily moved to the destructor or avoided altogether by using smart pointers (`ComPtr` and `unique_ptr` respectively). – jaho Mar 10 '14 at 02:18
  • @Marian Ok I realised I didn't address destruction explicitly, but the reasoning is similar to the one about initialisation: since we're dealing with special resources such as graphics hardware, releasing these resources can fail too, and handling that solely in the destructor is tricky. Another reason is that the destructor is tied to the object's lifetime, which in games is often very different from when you want the resource + memory released. – congusbongus Mar 10 '14 at 03:05
  • `have fun trying to say, put network resource termination there` -> And if it does fail, then what? What kind of handling could you do? – dascandy Mar 10 '14 at 08:31
2

If your object has been allocated using placement new (i.e. instantiated into a block of memory which wasn't allocated as a separate block by the system), then you need to call your object's destructor explicitly. Using the delete operator, whether explicitly or implicitly via a smart pointer, would fail rather messily as the system tried to deallocate that application-managed portion of the memory block.
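A minimal sketch of that mechanism (Widget and the flag are made up; the flag only exists to make the destruction observable):

```cpp
#include <new>

// Minimal placement-new sketch. A Shutdown() wrapper around w->~Widget()
// would hide this mechanism from engine users.
static bool widget_destroyed = false;

struct Widget {
    int value = 42;
    ~Widget() { widget_destroyed = true; }  // pretend this releases resources
};

// Caller-provided storage: not a separate heap allocation.
alignas(Widget) static unsigned char buffer[sizeof(Widget)];

Widget* MakeWidgetInBuffer() {
    return new (buffer) Widget;  // construct into the existing storage
}

void DestroyWidget(Widget* w) {
    w->~Widget();  // explicit destructor call; `delete w` would be undefined
    // buffer is now free to hold a different object
}
```

After DestroyWidget(), the same buffer can be reused for another object, which is exactly the pooling/reuse scenario discussed in the comments above.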

You haven't given us anywhere near enough information to say why the particular class you mention calls its destructor explicitly, but it's not unreasonable to guess that this is the reason, and that the 'Shutdown()' call is just there to provide an interface around the explicit destructor call (so that end users of the engine don't get it into their heads to call 'delete' on the object; presumably they've made the destructor private as well, to further enforce the intended destruction API).

Trevor Powell
  • OK thanks, that explains the explicit destructor call, but my question isn't specifically about that. I keep seeing similar pattern for objects allocated in a standard way, i.e. `Foo* f = new Foo; f->init(); // Use f; f->shutdown(); delete f;` The `shutdown()` method seems unnecessary. – jaho Mar 09 '14 at 23:27
  • 1
    I really can't answer that question in the abstract. There are several possible reasons to do that, including object pooling, reuse, reflection, the prohibition on using the 'this' pointer during a constructor, etc. – Trevor Powell Mar 09 '14 at 23:34
  • I take it, it's situation specific and also, for some reason, overused in many places, and I can easily ignore it until I encounter some specific scenario when it may be necessary. – jaho Mar 09 '14 at 23:53
  • For an example of where it seems overused you can check Rastertek DirectX tutorials where they use it in nearly every class, with no apparent benefit: http://www.rastertek.com/dx11tut03.html – jaho Mar 09 '14 at 23:55
0

I would use this practice (separating cleanup from the DTOR) for several reasons; it gives me more control and flexibility:

// Passing by value to modify some things without affecting the original instances
void somefunc(Foo f, Bar b)
{
    // Behind the scenes (during construction), copies were made of the
    // original instances, and they now share some resources with them

    // Modify and test here
    // ...

    // Implicit call to the DTORs in 9 .. 8 .. 7
    // The DTORs are called implicitly before exiting the scope
    // (I may not have actually wanted to free the shared resources)
}

It would have been better if I had a separate function that I could call when I actually want to free those resources (though I am not sure this is relevant to the specific game company that produced the code you referenced in your question). Having a separate function to handle freeing resources gives you more flexibility and control over how and when those resources are deallocated.
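One way to sketch that control (Foo and the counter are toys; the counter stands in for the shared resource):

```cpp
// Sketch of the "explicit shutdown" idea: the destructor is deliberately
// empty, so copies dying at scope exit leave the shared resource alone;
// the resource goes away only when Shutdown() is called on purpose.
static int resource_refs = 0;

struct Foo {
    Foo()  { ++resource_refs; }           // acquire the shared resource
    ~Foo() {}                             // intentionally does NOT release it
    void Shutdown() { --resource_refs; }  // release happens only here
};

void modify(Foo f) {
    // f is a by-value copy sharing the resource; when it dies here,
    // the empty destructor leaves the shared resource intact.
}
```

The price, of course, is that forgetting Shutdown() leaks the resource, which is exactly the failure mode RAII exists to prevent.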

AturSams
  • I forgot this very important tidbit: [some say throwing an exception from a DTOR is a very bad idea](http://www.kolpackov.net/projects/c++/eh/dtor-1.xhtml). I hear many experts say that, and I have also asked people who worked with code from Microsoft and HP (non-gaming software companies), and they often say that handling errors in an external function that handles the cleanup and freeing of resources is more accessible and easier to understand. Basically – AturSams Mar 12 '14 at 19:14
  • the idea is explained here: [why should dangerous destruction procedures be handled by a public method?](http://stackoverflow.com/questions/130117/throwing-exceptions-out-of-a-destructor) The idea is that if something goes poorly during the destruction process, the user may want more details and may want to handle it explicitly and try to fix it. The destructor should be a clean and compact solution. – AturSams Mar 12 '14 at 19:19