5

When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:

interface Garble {
  int zork();
}

interface Gnarf extends Garble {
  /**
   * This is the same as calling {@link #zblah(int) zblah(0)}
   */
  int zblah();
  int zblah(int defaultZblah);
}

And then

abstract class AbstractGarble implements Garble {
  @Override
  public final int zork() { ... }
}

abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
  // Here I absolutely want to fix the default behaviour of zblah
  // No Gnarf should be allowed to set 1 as the default, for instance
  @Override
  public final int zblah() { 
    return zblah(0);
  }

  // This method is not implemented here, but in a subclass
  @Override
  public abstract int zblah(int defaultZblah);
}

I do this for several reasons:

  1. It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear which methods I have to implement and which methods I may not override (in case I've forgotten the details of the hierarchy); see the sketch after this list.
  2. I think overriding concrete methods is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.
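
For illustration, a minimal sketch of a concrete class at the bottom of this hierarchy (the class name is made up, and it assumes AbstractGarble.zork() has a real body): the only thing left to implement is zblah(int), while zork() and zblah() are final further up and cannot be touched.

class DefaultGnarf extends AbstractGnarf {
  // The single remaining abstract method; everything else is fixed by the superclasses
  @Override
  public int zblah(int defaultZblah) {
    return defaultZblah + 1; // some concrete behaviour
  }
}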

So the final keyword works perfectly for me. My question is:

Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

Lukas Eder

5 Answers

4

Why is it used so rarely in the wild?

Because you have to write one more word to make a variable or method final.

Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

Usually I see such examples in third-party libraries. In some cases I want to extend a class and change some behaviour. It is especially dangerous in closed-source libraries without interface/implementation separation.
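
For example (all names here are hypothetical): if a closed-source library exposes only a concrete class with a final method, there is no interface to re-implement and nothing to override, so the best you can do is wrap it.

// Hypothetical third-party class: no interface, behaviour fixed by final
public class VendorParser {
  public final String parse(String input) {
    return input.trim();
  }
}

// The only option left is wrapping and delegating...
public class MyParser {
  private final VendorParser delegate = new VendorParser();

  public String parse(String input) {
    return delegate.parse(input).toLowerCase(); // the tweak you actually wanted
  }
}

// ...but MyParser cannot be passed where the library expects a VendorParser.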

Stan Kurilin
  • +1 For separation of interfaces / implementation. That's OK sometimes, but makes type hierarchies very strict. In that case, `final` can really be a pain – Lukas Eder Jan 15 '11 at 14:22
  • +1 I notice I sometimes have a strong desire to kill the author of a library who has used that. `private` methods are very similar. I think it's not good to restrict the user of a class hierarchy, even if the author thinks it may break something (there are exceptions of course, but every rule has exceptions :) ). The user has to decide on his own, but should be warned, though. – Stas Jan 15 '11 at 14:39
  • @Stas, I'm not sure about that. Especially when I design convenience methods whose behaviour depends on other methods, I want to use `final` extensively. I'll adapt my example for that... – Lukas Eder Jan 15 '11 at 14:45
  • Besides, @Stas, please don't kill me :) – Lukas Eder Jan 15 '11 at 14:59
  • @Lukas Eder, @Stas (Not me)) As Bloch said, `API design is an art, not a science.` So there isn't a single correct answer about using final) – Stan Kurilin Jan 15 '11 at 15:10
  • @Stas (You): You're right. But of course there are best practices! – Lukas Eder Jan 15 '11 at 15:11
  • @Lukas Eder: Correct) But almost all best practices have their own, usually small, disadvantages, and an API designer should understand this. Let's look at a hypothetical example: suppose I want to log all values that `zblah` returns. The easiest way would be to override it, but I can't do that with your API. Of course this example is not good enough) – Stan Kurilin Jan 15 '11 at 15:24
  • @Stas Kurilin: Yes, you can log those calls, because `zblah(int)` is not final; only `zblah()` is. But then again, `zork()` is always final. I can see your point. I hadn't thought about logging before, but then again, if you want to log calls to `zblah()` (if it's not final), you can either extend `AbstractGnarf` or delegate calls to it from a `LoggingGnarf` wrapper implementing `Gnarf` (sketched below), which is a lot nicer anyway – Lukas Eder Jan 15 '11 at 15:30
  • @Lukas Eder, Yeah. In production the best solution in such cases will be wrapping, but it produces more code. This is a disadvantage) It's not critical when the API developer provides an interface and a skeleton AbstractClass; otherwise it can be a real problem. – Stan Kurilin Jan 15 '11 at 15:40
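
A minimal sketch of the LoggingGnarf wrapper mentioned in the comments above (the Gnarf type is the one from the question; the logging call is only illustrative):

class LoggingGnarf implements Gnarf {
  private final Gnarf delegate;

  LoggingGnarf(Gnarf delegate) {
    this.delegate = delegate;
  }

  @Override
  public int zork() {
    return delegate.zork();
  }

  @Override
  public int zblah() {
    int result = delegate.zblah();
    System.out.println("zblah() returned " + result); // illustrative logging
    return result;
  }

  @Override
  public int zblah(int defaultZblah) {
    return delegate.zblah(defaultZblah);
  }
}
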
3

I always use final when I write an abstract class and want to make it clear which methods are fixed. I think this is the most important function of this keyword.

But when you're not expecting a class to be extended anyway, why the fuss? Of course, if you're writing a library for someone else, you try to safeguard it as much as you can, but when you're writing "end user code", there is a point where trying to make your code foolproof will only serve to annoy the maintenance developers who will try to figure out how to work around the maze you have built.

The same goes for making classes final. Although some classes should by their very nature be final, all too often a short-sighted developer will simply mark all the leaf classes in the inheritance tree as final.

After all, coding serves two distinct purposes: to give instructions to the computer and to pass information to other developers reading the code. The second one is ignored most of the time, even though it's almost as important as making your code work. Putting in unnecessary final keywords is a good example of this: it doesn't change the way the code behaves, so its sole purpose should be communication. But what do you communicate? If you mark a method as final, a maintainer will assume you had a good reason to do so. If it turns out that you hadn't, all you achieved was to confuse others.

My approach is (and I may be utterly wrong here obviously): don't write anything down unless it changes the way your code works or conveys useful information.

biziclop
  • You're right. I'm writing library code, and I think I have good reasons to communicate the semantics of `final` you're suggesting ... I hardly use `final` in everyday application logic; the risk of misuse there is too small. – Lukas Eder Jan 15 '11 at 15:10
2

I think it is not commonly used for two reasons:

  1. People don't know it exists
  2. People are not in the habit of thinking about it when they build a method.

I typically fall into the second category. I do override concrete methods on a fairly regular basis. In some cases this is bad, but there are many times it doesn't conflict with design principles and in fact might be the best solution. Therefore, when I am implementing an interface, I typically don't think deeply enough about each method to decide whether a final keyword would be useful, especially since I work on a lot of business applications that change frequently.
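
As a hedged sketch of the kind of override meant here (all names invented), a base class can provide a reasonable concrete default that a subclass refines without breaking the contract:

class Report {
  // Reasonable default that most reports can live with
  public String title() {
    return "Report";
  }
}

class SalesReport extends Report {
  @Override
  public String title() {
    return "Sales " + super.title(); // refines the default rather than breaking it
  }
}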

jzd
  • +1 Good answer. Although I think using the final keyword does not *finalise* things to an extent where change wouldn't be possible anymore. But that depends on the project. In my case, though, I'm both designing the implementation **AND** the interfaces, similar to the Java Collections API... – Lukas Eder Jan 15 '11 at 14:21
  • Yes, changes are still possible. I guess in several of the objects I have been creating, overriding the concrete methods is commonly acceptable, so the cases where there is a method I need to add final to are rare, so it slips my mind more. – jzd Jan 15 '11 at 14:28
  • +1 Also, code templates in IDEs typically create everything public non-final. – WReach Jan 15 '11 at 15:25
2

Why is it used so rarely in the wild?

That doesn't match my experience. I see it used very frequently in all kinds of libraries. Just one (random) example: look at the abstract classes in http://code.google.com/p/guava-libraries/, e.g. com.google.common.collect.AbstractIterator. peek(), hasNext(), next() and endOfData() are final, leaving just computeNext() to the implementor. This is a very common example IMO.
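
A small usage sketch, as I understand the Guava API (the countdown example itself is made up): only computeNext() is overridden, while the iteration mechanics like hasNext() and next() stay final.

import com.google.common.collect.AbstractIterator;
import java.util.Iterator;

public class Countdown {
  public static Iterator<Integer> countdown(final int from) {
    return new AbstractIterator<Integer>() {
      private int current = from;

      @Override
      protected Integer computeNext() {
        if (current < 0) {
          return endOfData(); // final helper signalling the end of iteration
        }
        return current--;
      }
    };
  }
}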

The main reason against using final is to allow implementors to change an algorithm - you mentioned the "template method" pattern: it can still make sense to modify a template method, or to enhance it with some pre-/post-actions (without spamming the entire class with dozens of pre-/post-hooks).

The main reason for using final is to avoid accidental implementation mistakes, or to protect a method that relies on internals of the class which aren't specified (and thus may change in the future).
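
One way to reconcile the two, sketched here with invented names: keep the template method final, but expose a small number of explicit hooks at well-defined points.

abstract class Processor {
  // Final template method: the skeleton of the algorithm cannot be changed...
  public final void process() {
    beforeProcess();
    doProcess();
    afterProcess();
  }

  // ...but subclasses can still customise it at these points.
  protected void beforeProcess() { }
  protected void afterProcess() { }
  protected abstract void doProcess();
}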

Chris Lercher
  • +1 for the hint. That's exactly the use case I had in mind. I have just hardly ever seen it... – Lukas Eder Jan 15 '11 at 14:51
  • **Accepted answer** because it both answered my question **AND** proved me wrong about my assumption that `final` is not often used on methods **AND** proved me right about my using it :) Thanks, Chris – Lukas Eder Jan 15 '11 at 16:02
1

Why is it used so rarely in the wild?

Because it should not be necessary. It also does not fully close down the implementation, so in effect it might give you a false sense of security.

It should not be necessary, due to the Liskov substitution principle. The method has a contract, and in a correctly designed inheritance hierarchy that contract is fulfilled (otherwise it's a bug). Example:

interface Animal {
    void bark();
}

abstract class AbstractAnimal implements Animal {
    @Override
    public final void bark() {
        playSound("whoof.wav"); // you were thinking about a dog, weren't you?
    }
}

class Dog extends AbstractAnimal {
    // ok
}

class Cat extends AbstractAnimal {
    // oops - no barking allowed!
}

By not allowing a subclass to do the right thing (for it), you might introduce a bug. Or you might force another developer to build an inheritance tree of your Garble interface right beside yours, because your final method does not allow their class to do what it should do.

The false sense of security is typical of a non-static final method. A static method should not use state from the instance (it cannot); a non-static method probably does. Your final (non-static) method probably does too, but it does not own the instance variables: they can be different from what you expect. So you put a burden on the developer of the class inheriting from AbstractGarble, namely to ensure that the instance fields are in the state your implementation expects at any point in time, without giving that developer a way to prepare the state before calling your method, as in:

int zblah() {
    prepareState();
    return super.zblah();
}

In my opinion you should not close an implementation in such a fashion unless you have a very good reason. If you document your method contract and provide a JUnit test, you should be able to trust other developers. Using the JUnit test, they can actually verify the Liskov substitution principle.
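
A hedged sketch of what such a contract test could look like (JUnit 4 style; the abstract factory method is a placeholder each implementation fills in):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public abstract class GnarfContractTest {

    // Each Gnarf implementation provides its own instance to be tested
    protected abstract Gnarf createGnarf();

    @Test
    public void zblahWithoutArgumentMustEqualZblahZero() {
        Gnarf gnarf = createGnarf();
        // The documented contract: zblah() is the same as calling zblah(0)
        assertEquals(gnarf.zblah(0), gnarf.zblah());
    }
}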

As a side note, I do occasionally close a method, especially if it's at the boundary of a framework. My method does some bookkeeping and then continues to an abstract method to be implemented by someone else:

final boolean login() {
    bookkeeping();
    return doLogin();
}
abstract boolean doLogin();

That way no-one forgets to do the bookkeeping but they can provide a custom login. Whether you like such a setup is of course up to you :)

extraneon
  • Interesting point! I didn't know about the Liskov substitution principle. However, I do not entirely agree with your argumentation. The `Animal.bark()` method is misplaced and bad design. If the design is bad, of course, implementations should not be closed, because users might need to create workarounds, such as `Cat.bark()`, in order to overcome the design flaws... If I'm careful about my design and provide *template methods* that can still be overridden, then I think I can have a lot of `login()` / `doLogin()` pairs as you suggested – Lukas Eder Jan 15 '11 at 14:55
  • @Lukas Eder I agree the Animal example was bad. I just couldn't think of a better example. It is however a real-life danger that requirements change beyond what you thought possible over time. Sadly, it happened to me a few times :( – extraneon Jan 15 '11 at 15:24