
Possible Duplicate:
Why C# implements methods as non-virtual by default?

I'm speaking primarily about C# on .NET 3.5, but I wonder in general what the benefits are of not treating everything as "virtual", by which I mean that a method called on an instance of a child class would always execute the child-most version of that method. In C#, that is not the case when the parent method is not marked with the "virtual" modifier. Example:

public class Parent
{
    public void NonVirtual() { Console.WriteLine("Non-Virtual Parent"); }
    public virtual void Virtual(){ Console.WriteLine("Virtual Parent"); }
}

public class Child : Parent
{
    public new void NonVirtual() { Console.WriteLine("Non-Virtual Child"); }
    public override void Virtual() { Console.WriteLine("Virtual Child"); }
}

public class Program
{
    public static void Main(string[] args)
    {
        Child child = new Child();
        Parent parent = new Child();
        var anon = new Child();

        child.NonVirtual();           // => Child
        parent.NonVirtual();          // => Parent
        anon.NonVirtual();            // => Child
        ((Parent)child).NonVirtual(); // => Parent

        child.Virtual();              // => Child
        parent.Virtual();             // => Child
        anon.Virtual();               // => Child
        ((Parent)child).Virtual();    // => Child
    }
}

What exactly are the benefits of the non-virtual behavior observed above? The only thing I could think of was "What if the author of Parent doesn't want his method to be virtual?", but then I realized I couldn't think of a good use case for that. One might argue that the behavior of the class depends on how a non-virtual method operates, but then it seems to me that either there is some poor encapsulation going on or the method should be sealed.

Along the same lines, it seems like 'hiding' is normally a bad idea. After all, if a Child class and its methods were created, presumably it was done for a specific reason: to override the Parent. And if Child implements (and hides the parent's) NonVirtual(), it is very easy not to get what many might consider the "expected" behavior of Child.NonVirtual() being called. (I say "expected" because it is sometimes easy not to notice that hiding is happening.)

So, what are the benefits of not letting everything have "virtual" behavior? And what is a good use case for hiding a non-virtual parent method, if it is so easy to get unexpected behavior?

If anyone is curious as to why I pose this question: I was recently examining Castle Project's DynamicProxy library. The one main hurdle in using it is that any method (or property) you want to proxy has to be virtual, and that isn't always an option for developers (for instance, when we don't control the source). Not to mention that the purpose of DynamicProxy is to avoid coupling between your proxied class and whatever behavior you are trying to achieve with the proxy (such as logging, or perhaps a memoization implementation). By forcing methods to be virtual to accomplish this, what you get instead is a thin but obtuse coupling of DynamicProxy to every class it proxies. Imagine having a ton of methods labeled virtual even though they are never inherited or overridden; any other developer looking at the code might wonder, "Why are these even virtual? Let's change them back."
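
To make the DynamicProxy situation concrete, here is a rough sketch of the usual setup; the class and interceptor names are made up, and the exact API calls are from memory, so treat it as illustrative rather than authoritative. The point is that only virtual members get intercepted:

using System;
using Castle.DynamicProxy;

public class OrderService
{
    // Must be virtual solely so the generated proxy subclass can intercept it.
    public virtual void PlaceOrder(string item)
    {
        Console.WriteLine("Placing order for " + item);
    }
}

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Calling " + invocation.Method.Name);
        invocation.Proceed(); // falls through to the real implementation
        Console.WriteLine("Finished " + invocation.Method.Name);
    }
}

public static class ProxyDemo
{
    public static void Run()
    {
        var generator = new ProxyGenerator();
        var service = generator.CreateClassProxy<OrderService>(new LoggingInterceptor());
        service.PlaceOrder("widgets"); // logged; a non-virtual method would bypass the interceptor
    }
}

If PlaceOrder were not virtual, this would still compile and run, but the interceptor would simply never fire for it, which is exactly the kind of silent requirement that pushes you toward the "mark everything virtual" pattern.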

Anyway, that frustration led me to wonder what the benefits of non-virtual methods are, when it seems that making everything virtual might have been clearer (IMO, I suppose) and perhaps(?) more beneficial.

EDIT: Labeling as community wiki, since it seems like a question that might have subjective answers

Matt
  • This is a duplicate of a recent post, but I can't find it. It was more like, why doesn't C# make everything virtual by default, like Java does? – John Saunders Jun 30 '09 at 21:35
  • duplicate: http://stackoverflow.com/questions/530799/what-are-the-performance-implications-of-marking-methods-properties-as-virtual – Francis B. Jun 30 '09 at 21:36
  • This one? http://stackoverflow.com/questions/814934/why-c-implements-methods-as-non-virtual-by-default/814939 – Marc Gravell Jun 30 '09 at 21:36

6 Answers


Because you don't want people overriding methods that you haven't designed the class for. It takes a significant effort to make sure it is safe to override a method or even derive from a class. It's much safer to make it non-virtual if you haven't considered what might happen.
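
To sketch the sort of trap this is about (the type names are mine; it is the classic "self-use" example, not anything from the question): whether an override is safe can depend on implementation details the derived class cannot see.

using System.Collections.Generic;

public class Basket
{
    private readonly List<string> items = new List<string>();

    public virtual void Add(string item) { items.Add(item); }

    // Implementation detail: AddMany happens to be written in terms of Add.
    public virtual void AddMany(IEnumerable<string> newItems)
    {
        foreach (var item in newItems) { Add(item); }
    }
}

public class CountingBasket : Basket
{
    public int Count { get; private set; }

    public override void Add(string item) { Count++; base.Add(item); }

    public override void AddMany(IEnumerable<string> newItems)
    {
        var list = new List<string>(newItems);
        Count += list.Count;   // counted once here...
        base.AddMany(list);    // ...and again when base.AddMany calls the overridden Add
    }
}

CountingBasket double-counts even though neither class looks wrong on its own. Unless the base class author documents (and forever preserves) whether AddMany calls Add, overriding either method is a gamble; keeping them non-virtual until that design work is done avoids the problem.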

Zifre

Eric Lippert covers this here, on method hiding.
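
(Not the article's example, just a common one with made-up type names: one legitimate use of hiding is re-declaring a member in a derived class to tighten its return type, usually paired with a protected virtual core so the behavior still overrides cleanly.)

public class Animal
{
    public Animal Clone() { return CloneCore(); }
    protected virtual Animal CloneCore() { return new Animal(); }
}

public class Dog : Animal
{
    // Hiding lets callers who hold a Dog get a Dog back without casting;
    // override alone could not change the return type here.
    public new Dog Clone() { return (Dog)CloneCore(); }
    protected override Animal CloneCore() { return new Dog(); }
}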

Marc Gravell
  • Good read, thanks. His GST / Grocery example was very informative. – Matt Jun 30 '09 at 21:44
  • I'm sorry, but I don't think this article answers the original question. Methods could have been virtual by default and still allow method hiding by using the `new` keyword, just as one does now, with the combination of `virtual` on the base method and `new` on the hiding one. Good article though – Amanda Tarafa Mas Aug 13 '13 at 14:22
  • @amanda yes, they *could* - but the very point of that article is that it would be an actively bad choice to do so. – Marc Gravell Aug 13 '13 at 15:40
  • @MarcGravell The point of the article, as I see it, is to show method hiding as a valid design choice, and I totally agree. But that is not remotely the same as advocating against default virtual methods, at least not as long as one has the chance to hide a virtual method with `new` or to mark a base one as `final`. I like better the answer [here in the duplicate](http://stackoverflow.com/a/3069169/1122643), although I must say I would have gone for default virtual methods and then allowing `new` and `final` functionality. This last part, though, is obviously a matter of opinion. – Amanda Tarafa Mas Aug 14 '13 at 08:31
  • link-only answer and the link is now broken – Dave Cousineau Apr 09 '20 at 05:22
  • @DaveCousineau note that back in '09 our guidance on links was different, but you're not wrong; however: I've found an archive of it - updating – Marc Gravell Apr 09 '20 at 08:39

In many cases, it is crucial to a class functioning properly that a given method has a specific behavior. If the method is overridden in a derived class, there is no guarantee that it will correctly implement the expected behavior. You should only mark a method virtual if your class is specifically designed for inheritance and will support that method having a different implementation. Designing for inheritance is not easy; there are many cases where incorrectly overriding a method will break the class's internal behavior.
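
For example (a contrived sketch, the names are mine): a class often relies on one of its own methods to preserve an invariant, and that reliance only holds if the method cannot be replaced.

using System;

public class Account
{
    private decimal balance;

    // Deliberately non-virtual: if this were virtual, a derived class could
    // override it to always return true, and Withdraw below would happily
    // drive the balance negative.
    protected bool IsValidWithdrawal(decimal amount)
    {
        return amount > 0 && amount <= balance;
    }

    public void Deposit(decimal amount) { balance += amount; }

    public void Withdraw(decimal amount)
    {
        if (!IsValidWithdrawal(amount))
            throw new InvalidOperationException("Invalid withdrawal");
        balance -= amount;
    }
}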

Thomas Levesque

Simple: the entire point of a class is to encapsulate some kind of abstraction. For example, we want an object that behaves as a text string.

Now, if everything had been virtual, I would be able to do this:

// Hypothetical: String is actually sealed and Trim is not virtual.
class MessedUpString : String
{
   public override string Trim() { throw new Exception(); }
}

and then pass this to some function that expects a string. And the moment they try to trim that string, it explodes.

The string no longer behaves as a string. How is that ever a good thing?

If everything is made virtual, you're going to have a hard time enforcing class invariants. You allow the class abstraction to be broken.

By default, a class should encapsulate the rules and behaviors that it is expected to follow. Everything you make virtual is in principle an extensibility hook; the function can be changed to do anything whatsoever. That only makes sense in a few cases, when we have behavior that is genuinely meant to be user-defined.

The reason classes are useful is that they allow us to ignore the implementation details. We can simply say "this is a string object, I know it is going to behave as a string. I know it will never violate any of these guarantees". If that guarantee can not be maintained, the class is useless. You might as well just make all data members public and move the member methods outside the class.

Do you know the Liskov Substitution Principle? Anywhere an object of base class B is expected, you should be able to pass an object of derived class D. That is one of the most fundamental rules of object-oriented programming. We need to know that derived classes will still work when we upcast them to the base class and pass them to a function that expects the base class. That means we have to make some behavior fixed and unchangeable.

jalf
  • Maybe the `MessedUpString` doesn't support Trimming? But `String` is sealed, so I believe it would be an exception in that imaginary world anyway. – vgru Jun 30 '09 at 21:49
  • If we had followed the advice of "everything should be virtual", then it wouldn't be sealed, so it could be done. And the point here is that if it is derived from String, and String supports trimming, then the derived class *must also support trimming*. Otherwise it violates the LSP and is a pain to work with. You must have run into the same problem in other situations. Say .NET's streams, which may or may not support seeking, so a function has no way to indicate that it expects "a stream that supports seeking", and is instead forced to throw unexpected exceptions at runtime. – jalf Jun 30 '09 at 21:56
  • @jalf: If a class is open to outside inheritance, it will just about be possible to create derived classes which are just plain broken. Any non-broken implementation of `String` must support `Trim`, but it may be that the only way a particular implementation of `String` can support a non-broken `Trim` method is to have that method manipulate some members not found in `String`. – supercat Feb 15 '13 at 23:31
  • So based on this line of thinking, how do you honor the Liskov Substitution Principle AND implement a proper adapter pattern? – Sinaesthetic Oct 18 '13 at 23:18
  • @Sinaesthetic I don't see the problem. Can you elaborate? – jalf Oct 19 '13 at 09:53

One key benefit of a non-virtual method is that it can be bound at compile time. That is, the compiler can be sure exactly which method will be called wherever the method is used in code.

The actual method to be called cannot be known at compile time if the method is declared virtual, since the reference may actually point to a subtype that has overridden it. Hence there is a small overhead at run time while the actual method to call is resolved.
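
A small illustration of what that means in source terms (the names are made up; also note the comments below: the C# compiler emits callvirt even for non-virtual instance calls, so any direct call or inlining actually happens at the JIT level):

public class Dispatcher
{
    // Non-virtual: the target is fixed by the static type of the reference,
    // so the JIT can call it directly or inline it (after a null check).
    public int NonVirtualAdd(int x) { return x + 1; }

    // Virtual: the target depends on the runtime type of the object,
    // so the call normally goes through the method table.
    public virtual int VirtualAdd(int x) { return x + 1; }
}

public static class DispatchDemo
{
    public static int Run(Dispatcher d)
    {
        int a = d.NonVirtualAdd(1); // target known statically
        int b = d.VirtualAdd(1);    // target resolved from d's actual type
        return a + b;
    }
}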

AnthonyWJones
  • But with all the arguments that "premature optimization is the root of all evil", should we care if it takes an extra instruction or two? (How much faster is compile-time binding than runtime?) – Matt Jun 30 '09 at 21:36
  • I realized after I said that that "an instruction or two" across the entire runtime might add up. But I am curious - how much faster is it? – Matt Jun 30 '09 at 21:38
  • Actually, that isn't true. The C# compiler uses callvirt even for non-virtual instance methods. – Marc Gravell Jun 30 '09 at 21:38
  • If I remember correctly, the compiler emits a callvirt instruction even if the method is not virtual, so there is actually no performance benefit... – Thomas Levesque Jun 30 '09 at 21:39
  • Just because the compiler emits callvirt (for null checking purposes) doesn't mean the CLR can't optimise non-virtual method calls. I'm not saying it can either - just that the IL isn't the important bit. – Jon Skeet Jun 30 '09 at 21:49
  • @ThomasLevesque: The C# compiler outputs a CLI `callvirt` instruction, but CLI instructions are not machine code instructions. When a method is run for the first time, the CLI "just in time" compiler translates CLI instructions to instructions the host processor can execute directly. The CLI compiler will examine the target of a "callvirt" instruction and, if it is non-virtual, it will perform a dummy dereference of the object to ensure it's non-null (triggering an exception if null) and then perform a "normal" call. This is faster than a "traditional" null check, but slower than... – supercat Jan 16 '12 at 15:53
  • ...skipping the check would be. Personally, I dislike the fact that there's no way to instruct that a routine should be called directly, without "callvirt", since there are times an immutable class should behave as a value type with a usable default value. In COM, for example, as well as many languages predating .net, the initial values in an array of strings would be usable as empty strings. In .net languages, null values of type `String` are usable as empty strings in some contexts but not all (e.g. `concat("Hey", null)` is okay, but `arr[0].length` will fail if arr[0] is null). – supercat Jan 16 '12 at 15:57
  • If the sealed methods on String were invoked without callvirt, uninitialized string variables could behave as empty strings. – supercat Jan 16 '12 at 15:59

In a framework, a non-virtual member can be called with a known range of possible results. If the method were virtual, it could produce a result the framework was never tested against. Keeping methods non-virtual gives the framework's own actions predictable results.
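
A sketch of what this looks like in practice (the names are mine; it is essentially the template method pattern): the framework keeps its entry point non-virtual so its own bookkeeping always runs, and exposes a virtual hook for the part that is meant to vary.

using System;

public abstract class FrameworkTask
{
    // Non-virtual: the framework can rely on this sequence always happening.
    public void Execute()
    {
        Console.WriteLine("Begin (validation, logging, ...)");
        try
        {
            RunCore(); // the only part a derived class is expected to change
        }
        finally
        {
            Console.WriteLine("End (cleanup)");
        }
    }

    // The intended extensibility point.
    protected abstract void RunCore();
}

public class SendEmailTask : FrameworkTask
{
    protected override void RunCore()
    {
        Console.WriteLine("Sending email...");
    }
}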

Jeff Martin