
Consider the case below:

Suppose we are using a C API inside a class to create data that is allocated on the heap using malloc (e.g. `Object* create_obj()`), and we have to call a corresponding function (`void free_obj()`) at the end of the class's lifetime to free the memory manually.

When a language has destructors, we can simply put `free_obj` in the class destructor so the user does not have to call it manually or wait until the object gets garbage collected.
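The wrapper pattern the question describes can be sketched in Java. This is a minimal, hedged sketch: `createObj`/`freeObj` are hypothetical stand-ins for the C API's `create_obj()`/`free_obj()` (a real binding would declare them as native methods via JNI or the Foreign Function API), and the class exposes the cleanup through `AutoCloseable` instead of a destructor.

```java
// Sketch of a Java wrapper for a manually-freed native resource.
// createObj()/freeObj() are hypothetical stand-ins for the C API.
final class NativeObject implements AutoCloseable {
    private final long handle;   // pointer value returned by the C side
    private boolean closed;

    NativeObject() {
        this.handle = createObj();    // would call create_obj() natively
    }

    long handle() {
        if (closed) throw new IllegalStateException("already freed");
        return handle;
    }

    @Override
    public void close() {             // caller frees deterministically
        if (!closed) {
            freeObj(handle);          // would call free_obj() natively
            closed = true;
        }
    }

    // Stand-ins so the sketch compiles on its own; a real binding
    // would declare these as `native` methods.
    private static long createObj() { return 42L; }
    private static void freeObj(long h) { /* free(h) on the C side */ }
}
```

With this shape, callers get deterministic cleanup via try-with-resources: `try (NativeObject o = new NativeObject()) { use(o.handle()); }` frees the native memory at the end of the block, no destructor required.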

My questions:

  • Why do some garbage-collected OOP languages (Java, which has been deprecating its `finalize`, and Ruby) not have destructors?

  • Isn't a destructor necessary when you're interfacing with a low-level API, as in the case above? If it's not necessary, what is the best practice for solving this problem?

Andra
  • In Java, classes which hold resources that must be closed typically implement `AutoCloseable`. That interface has the `close()` method which the developer is responsible for calling at the appropriate time. It was rarely, if ever, a good idea to rely on `finalize`. And while `finalize` has been deprecated they did add `Cleaner`. But it's still best to avoid relying on `Cleaner` as much as possible. For objects which don't hold open resources, and thus don't implement `AutoCloseable`, there's really no reason to have a destructor—the GC will take care of it. – Slaw Aug 21 '20 at 10:40
  • Because those "OOP languages" you talk about are designed to be much higher level? Java runs on the JVM, which handles all memory management for you so you don't have to. – Amongalen Aug 21 '20 at 10:41
  • If you use try-with-resources (from Java 7 onwards) and your component implements `AutoCloseable`, you don't need `finalize` – nullTerminator Aug 21 '20 at 10:44
  • In a garbage collected language like Java there is no guarantee about when (or if) an object will be freed, so freeing resources in a destructor wouldn't be very useful. Java does have `finalize`: https://stackoverflow.com/questions/2506488/when-is-the-finalize-method-called-in-java – tgdavies Aug 21 '20 at 10:49
  • This is a pointless question. In Java, you don’t use `malloc`, so you don’t have a need to put a corresponding `free` into a destructor. – Holger Aug 21 '20 at 13:44
  • @Holger yes, I know about that. But what I'm asking is about the case where you are creating a binding with C and you call a C function that uses `malloc`. – Andra Aug 21 '20 at 16:29
  • That’s a rare corner case. Why should a language get designed around the rare corner cases? – Holger Aug 24 '20 at 08:17
  • @Holger Well, for a simple example, when someone wants to make an SDL binding: there are a lot of SDL bindings implemented in many GC-based programming languages. In SDL you have to allocate and free the memory manually. I just want to know the best approach to solve this kind of problem, because I have seen that some of the implementations use a "destructor" to free the memory. – Andra Aug 25 '20 at 16:26
  • When you “want to know what's the best approach to solve this kind of problem” you shouldn’t ask a “why” question. – Holger Aug 25 '20 at 16:28

2 Answers


Languages like Java and Ruby have finalizers, but not destructors. The main reason is that deterministic destruction constrains the implementation in a way that the language designers did not want to do.

Many of the performance tricks that modern high-performance garbage collectors employ would not be possible with deterministic finalization. Ruby and Java do not even guarantee that an object will be collected at all. It is perfectly legal for a Ruby or Java implementation to never collect an object even if it is unreachable.

Even CPython, which has a very simple garbage collector, cannot guarantee deterministic finalization: it only guarantees it for non-cyclic object graphs. And the Python community has made it very clear that this is a private internal implementation detail of CPython and not part of Python language semantics, meaning that other implementations (e.g. PyPy, IronPython, Jython) do not have to implement it and are thus free to implement much better garbage collectors.
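In Java, the replacement for finalizers is `java.lang.ref.Cleaner` (Java 9+), used as a non-deterministic safety net behind a deterministic `close()`. A hedged sketch (the `State` class and its `freed` flag are illustrative; a real binding would call `free_obj()` in `run()`):

```java
import java.lang.ref.Cleaner;

// Sketch: Cleaner as a last-resort safety net behind AutoCloseable.
// The cleanup action (State) must NOT reference the Resource instance,
// or the instance could never become phantom-reachable.
final class Resource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // Static nested class: no implicit reference back to Resource.
    static final class State implements Runnable {
        volatile boolean freed;
        @Override public void run() { freed = true; /* free_obj() here */ }
    }

    final State state = new State();
    private final Cleaner.Cleanable cleanable = CLEANER.register(this, state);

    @Override
    public void close() {
        // Deterministic release; clean() runs the action at most once,
        // so a later GC-triggered invocation is a no-op.
        cleanable.clean();
    }
}
```

Callers still use try-with-resources for deterministic cleanup; the Cleaner only covers the case where `close()` was forgotten, and even then only if and when the collector runs.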

Jörg W Mittag

Destructors are necessary in manual-allocation languages, but optional in GC languages like Ruby. Destructor patterns are not to be confused with garbage collection; they are, as you said, about tying an object's lifespan to a scope.

Objects live for a while, and then the section of memory an object consumes is marked as available for future objects. Ruby uses two pools of memory: the malloc heap and the Ruby object heap. The malloc heap is not released back to the OS unless the memory is unused by Ruby at the end of GC. The latter (a subset of the malloc heap) is where most Ruby objects live. The Ruby garbage collector focuses its work there and cleans up often, meaning destructors are, for the most part, unnecessary. Not every object will be collected; languages like Ruby make no guarantee that an unreachable object is ever collected.

In Ruby, variables reference an object: the object is stored somewhere and the variable only holds the object's id. If we could call a destructor through a variable whose object had already been collected or destroyed via another variable, we might get nil but possibly the same object id, which could cause issues at run time.

Ruby's `ObjectSpace.define_finalizer` is not common practice, and developers are discouraged from using it. The finalizer cannot refer to the object it is freeing, because the callback is executed after the object is freed, and there is no guarantee it will ever be called. Worse, if the finalizer proc holds a reference to `self`, the object can never become unreachable, meaning it will never be collected.

benjessop