
I know the difference between override and new (or believe I do, anyway), and there are several questions describing the difference between the two, but my question is: is there a particular reason why C# defaults to the new behavior (with a warning) instead of defaulting to override?

public class Base
{
    public virtual string GetString() => "Hello from Base";
}

public class Child : Base
{
    public string GetString() => "Hello from Child";
}

...

var childAsBase = (Base)new Child();
Console.WriteLine(childAsBase.GetString());

...

c:\>dotnet run
child.cs(5,23): warning CS0114: 'Child.GetString()' hides inherited member 'Base.GetString()'.
To make the current member override that implementation, add the override keyword. 
Otherwise add the new keyword. [C:\IPutAllMyProjectsInMyRootFolder.csproj]
Hello from Base

One reason I can think of is to get the same behavior whether the inherited method is marked virtual or not, but at the same time, declaring a method virtual is saying "override me", so defaulting to override seems reasonable to me.

Another reason that crossed my mind is the cost of using a virtual function table, but that seems like a scary reason in the sense that what I as a coder want the code to do should be more important than saving CPU cycles. But perhaps back when the language was invented, that was not the case?

Fredrik Ljung
  • As written, your question appears to ask for a subjective opinion on the language design. Objectively, this is the language behavior. Is there a different question you meant to ask? – maxwellb Oct 17 '17 at 15:29
  • Actually, the language designer has spoken about this: https://softwareengineering.stackexchange.com/questions/245393/why-was-c-made-with-new-and-virtualoverride-keywords-unlike-java which boils down to "This is an important design consideration, and I want you to make a conscious decision". – Matthew Watson Oct 17 '17 at 15:30
  • Is it subjective? Someone back in the day chose one over the other. If that was a coin toss, then the answer is no, but if there was a reason, I would love to know what it was. Perhaps it's a question better sent as an email to Anders Hejlsberg, but I hoped someone here knows the answer. :) – Fredrik Ljung Oct 17 '17 at 15:32
  • @dymanoid thanks, I missed that. :) – Fredrik Ljung Oct 17 '17 at 15:34
  • @MatthewWatson Thanks! Very interesting read, and the link would serve as an answer to the question. Although perhaps that means it should be closed as not belonging on SO? – Fredrik Ljung Oct 17 '17 at 15:37
  • Yeah, I'm not sure that would be a valid answer on this site. – Matthew Watson Oct 17 '17 at 15:42
  • The question is not subjective; it does not ask for an *opinion* about whether the design is good or bad, but rather, it asks *what the justifications of the decision were*. Those can be objectively known. The fact that only a few people might definitively know those facts doesn't make the question subjective. The problem with "why" questions is not that they're subjective, it's that they're *vague*, but this one is reasonably crisp. – Eric Lippert Oct 17 '17 at 18:05
  • A better reason to close the question is that it is a duplicate of https://stackoverflow.com/questions/3117838/why-do-we-need-the-new-keyword-and-why-is-the-default-behavior-to-hide-and-not-o/3118480#3118480 – Eric Lippert Oct 20 '17 at 21:22

1 Answer


When a C# language design decision that involves type hierarchies seems unusual to you, a good technique is to ask yourself the question "what would happen if someone changed my base class without telling me?" C# was carefully designed to mitigate the costs of brittle base class failures, and this is one.

Let's first consider the case where a shadowing method has the override keyword.

This indicates to the compiler that the derived class author and the base class author are cooperating. The base class author made an overridable method, which is a super dangerous thing to do. An overridable method means that you cannot write a test case which tests all possible behaviours of that method! Overridable-ness of a method must be designed in, and so you are required to say that a method is virtual (or abstract).

If we see an override modifier then we know that both the base class and derived class authors are taking responsibility for the correctness and safety of this dangerous extension point, and have successfully communicated with each other to agree upon the contract.
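
For contrast with the question's output, here is a minimal sketch of what that cooperation looks like, reusing the question's Base and Child with override added; the call through a Base reference now dispatches to the derived implementation:

public class Base
{
    // The base author deliberately opens an extension point.
    public virtual string GetString() => "Hello from Base";
}

public class Child : Base
{
    // The derived author explicitly accepts the contract of that extension point.
    public override string GetString() => "Hello from Child";
}

// var childAsBase = (Base)new Child();
// Console.WriteLine(childAsBase.GetString());  // prints "Hello from Child"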

Let's next consider the case where a shadowing method has the new keyword. Again, we know that the derived class author has examined the base class, and has determined that the shadowed method, whether virtual or not, does not meet the needs of the derived class consumers, and has deliberately made the dangerous decision to have two methods that have the same signature.
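
A minimal sketch of that deliberate choice, again adapting the question's example; with new, the warning goes away and each caller picks a behaviour by the static type of the receiver:

public class Base
{
    public virtual string GetString() => "Hello from Base";
}

public class Child : Base
{
    // The derived author acknowledges the hiding explicitly.
    public new string GetString() => "Hello from Child";
}

// Console.WriteLine(new Child().GetString());           // "Hello from Child"
// Console.WriteLine(((Base)new Child()).GetString());   // "Hello from Base"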

That then leaves us with the situation where the shadowing method has neither override nor new. We have no evidence that the author of the derived class knows about the method in the base class. In fact we have evidence to the contrary; if they knew about a virtual base class method, they would have overridden it to match the contract of the virtual method, and if they knew about a non-virtual base class method then they would have deliberately made the dangerous decision to shadow it.

How could this situation arise? Only two ways come to mind.

First, the derived class author has insufficiently studied their base class and is ignorant of the existence of the method they've just shadowed, which is a horrible position to be in. The derived class inherits the behaviours of the base class and can be used in scenarios where the invariants of the base class are required to be maintained! We must warn ignorant developers that they are doing something extremely dangerous.

Second, the derived class is recompiled after a change to the base class. Now the derived class author is not ignorant of the base class as it was originally written, and as they designed their derived class, and as they tested their derived class. But they are ignorant of the fact that the base class has changed.

Again, we must warn ignorant developers that something has happened that they need to make an important decision about: to override if possible, or to acknowledge the hiding, or to rename or delete the derived class method.

This then justifies why a warning must be given when a shadowing method is marked neither new nor override. But that wasn't your question. Your question was "why default to new?"

Well, suppose you are the compiler developer. Here are your choices when the compiler is faced with a shadowing method that lacks new and override:

  • Do nothing; give no warning or error, and choose a behaviour. If the code breaks due to a brittle base class failure, too bad. You should have looked at your base class more carefully. Plainly we can do better than this.

  • Make it an error. Now a base class author can break your build by changing a member of a base class. This is not a terrible idea, but we must now weigh the cost of desired build breaks -- because they found a bug -- against the costs of unwanted build breaks -- where the default behaviour is desired -- against the cost of ignoring the warning accidentally and introducing a bug.

This is a tricky call and there are arguments on all sides. Introducing a warning is a reasonable compromise position; you can always turn on "warnings are errors", and I recommend that you do.

  • Make it a warning, and make it override if the base method is overridable, and shadowing if the base method is not overridable. Not only is this inconsistent, but we've just introduced another kind of brittle base class failure. Do you see it? What if the base class author changes their method from non-virtual to virtual, or vice-versa? That would cause accidentally-shadowing methods to change from overriding to shadowing, or vice-versa.

But let's leave that aside for the moment. What are the other consequences of automatically overriding if possible? Remember, the premise of the scenario is that the overriding is accidental and the derived class author is ignorant of the implementation details, the invariants, and the public surface area of the base class.

Automatically changing behaviour of all callers of the base class method seems insanely dangerous compared with the danger of changing the behaviours of only those callers that call the shadowing method via a receiver of the derived type.

  • Make it a warning, and default to shadowing, not overriding. This choice is safer in general: it avoids a second kind of brittle base class failure, it avoids build breaks, callers of the method with base class receivers get the behaviour that their test cases expect, and callers of the method with derived class receivers get the behaviour they expect. (A sketch of the versioning scenario this protects against follows below.)
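
To make the trade-off concrete, here is a hypothetical versioning sketch (the Widget and FancyWidget names are invented for illustration) of the scenario the warning is designed around:

// Library version 1: the base class has no Describe method.
public class Widget { }

// The derived class is written and tested against version 1.
public class FancyWidget : Widget
{
    public string Describe() => "a fancy widget";
}

// Library version 2 later adds a virtual method with the same signature,
// without telling the derived class author:
//
//     public class Widget
//     {
//         public virtual string Describe() => "a widget";
//     }
//
// Recompiling FancyWidget against version 2 produces warning CS0114.
// Auto-overriding would silently change what every Widget-typed caller sees;
// defaulting to shadowing preserves the behaviour both sets of callers were
// tested against, and the warning forces a deliberate choice between
// override, new, renaming, or deleting the method.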

All design choices are the results of carefully weighing many mutually incompatible design goals. The designers of C# were particularly concerned with large teams working on versioned software components where base classes could change in unexpected ways, and teams might not communicate those changes well to each other.

Another reason that crossed my mind is the cost of using a virtual function table, but that seems like a scary reason in the sense that what I as a coder want the code to do should be more important than saving CPU cycles. But perhaps back when the language was invented, that was not the case?

Virtual methods introduce costs; the obvious cost is the extra table jump at runtime and the code needed to get to it. There are also less obvious costs like: the jitter can't inline virtual calls to non-sealed methods, and so on.

But as you note, the reason to make non-virtual the default is not primarily for performance. The primary reason is that virtualization is incredibly dangerous and needs to be carefully designed in. The invariants that must be maintained by derived classes that override methods need to be documented and communicated. Proper design of type hierarchies is expensive, and making it opt-in lowers costs and increases safety. Frankly, I wish sealed was the default as well.
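
As a footnote to that last remark, here is a minimal sketch of closing an extension point back down once it has been opened; sealed keeps Base's virtual dispatch for Child but forbids any further overriding:

public class Base
{
    public virtual string GetString() => "Hello from Base";
}

public class Child : Base
{
    // Child still overrides, but no further-derived class may.
    public sealed override string GetString() => "Hello from Child";
}

public class GrandChild : Child
{
    // This would not compile: cannot override an inherited member because it is sealed.
    // public override string GetString() => "Hello from GrandChild";
}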

Eric Lippert
  • Just so that the reason behind the statement is clear to all readers, experts *and beginners*; can you add a brief summary of ***why*** making methods *overridable* is, quote: *"A super dangerous thing to do!"™* --- You already *kinda* did with the examples, but IMHO they show what's bad without doing a very good job at explaining *why* it's bad, which again IMHO is far more important. – CosmicGiant Oct 17 '17 at 18:09
  • @AlmightyR: I think I was pretty clear. A virtual method is *not testable* because all virtual calls via the base class receiver can do *anything*. And a virtual method *requires that invariants of the base class be maintained by the developers of unknown quality who built the derived class*, so they are fragile. Untestable and fragile sounds pretty dangerous to me. – Eric Lippert Oct 17 '17 at 18:16
  • I think you were clear too; for developers that know *why* *"not testable"* and *"requires that invariants of the base class be maintained by the developers of unknown quality"* == ***B.A.D.*** (**B**asically **A**lways **D**isastrous). But I'm not so sure newbies will fully understand what you mean tho. I think you can do better. But it's your answer; you answer as you please. – CosmicGiant Oct 17 '17 at 18:29
  • Thank you for taking the time to write that down. Very interesting, and I'm sure I'm smarter for reading it, but I feel more ignorant. Yet again I find out how little I know. :) – Fredrik Ljung Oct 18 '17 at 08:17
  • @FredrikLjung: You're welcome. Good developers identify their errors and think about how they could have been avoided; maybe via better education, or better communication processes, and so on. But programming language designers must think about how errors can be avoided *by the design of the language itself*. Line of business developers aren't in the habit of thinking about that because historically they were seldom in a position to affect the design of their own tools. If this interests you, join the discussion on github and influence the direction of C# and VB! – Eric Lippert Oct 18 '17 at 12:59
  • As a simple example: Suppose a derived class adds a method, `protected void setAngle(double radians)`. Years later, a new version of the base class adds a method, `protected virtual void setAngle(double degrees)`. So, we want everything to work and we'd also like some sort of warning that we're in an awful position (though fixing it might be impractical; the derived class might have third parties who are relying on its current behavior!). We don't want to break behavior and we don't want to break compilation. Either type of problem risks users choosing not to upgrade. – Brian Oct 18 '17 at 14:09
  • @Brian: That's a great example; I'm totally stealing that. – Eric Lippert Oct 18 '17 at 14:22