107

Unlike Java, why does C# treat methods as non-virtual by default? Is performance the most likely reason, rather than other possible concerns?

I recall reading a passage from Anders Hejlsberg describing several advantages of the existing design. But what about the side effects? Is it really a good trade-off to have non-virtual methods by default?
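To make the contrast with Java concrete, here is a minimal sketch (class names invented for illustration). In Java the derived method would override and dispatch dynamically; in C#, without `virtual`, the call binds to the static type and `new` merely hides the base method:

```csharp
using System;

class Animal
{
    // No 'virtual' keyword: calls through an Animal reference bind here.
    public string Speak() => "generic sound";
}

class Dog : Animal
{
    // 'new' hides the base method instead of overriding it;
    // in Java this would be an override and dispatch dynamically.
    public new string Speak() => "woof";
}

class Program
{
    static void Main()
    {
        Animal a = new Dog();
        Console.WriteLine(a.Speak());        // generic sound (static binding)
        Console.WriteLine(((Dog)a).Speak()); // woof (hiding method visible here)
    }
}
```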

kevinarpe
Burcu Dogan
  • 1
    The answers that mention performance reasons overlook the fact that the C# compiler mostly compiles method calls to *callvirt* and not *call*. Which is why in C# it's not possible to have a method that behaves differently if the `this` reference is null. See [here](http://www.pvle.be/2008/11/extension-methods-and-null-objects/) for more info. – Andy Jul 26 '11 at 22:56
  • True! *call* IL instruction is mostly for calls made to static methods. – RBT Feb 04 '17 at 01:08
  • 1
    C#'s architect Anders Hejlsberg's thoughts [here](http://www.artima.com/intv/nonvirtualP.html) and [here](http://stackoverflow.com/questions/973240/why-are-methods-virtual-by-default-in-java-but-non-virtual-by-default-in-c). – RBT Feb 04 '17 at 01:14

10 Answers

101

Classes should be designed for inheritance to be able to take advantage of it. Having methods virtual by default means that every function in the class can be plugged out and replaced by another, which is not really a good thing. Many people even believe that classes should have been sealed by default.

Virtual methods can also have a slight performance cost. This is not likely to be the primary reason for the default, however.

Mehrdad Afshari
  • 7
    Personally, I doubt that part about performance. Even for virtual functions, the compiler is very able to figure out where to replace `callvirt` by a simple `call` in the IL code (or even further downstream when JITting). Java HotSpot does the same. The rest of the answer is spot-on. – Konrad Rudolph May 03 '09 at 08:37
  • 5
    I agree performance is not at the forefront, but it *can* have performance implications too. It's probably very small. Certainly, it's not *the reason* for this choice, however. Just wanted to mention that. – Mehrdad Afshari May 03 '09 at 08:50
  • 74
    Sealed classes make me want to punch babies. – mxmissile Jun 30 '09 at 21:51
  • 6
    @mxmissile: me too, but good API design is hard anyway. I think sealed-by-default makes perfect sense for code you own. More often than not, the other programmer on your team _did not_ consider inheritance when they implemented that class you need, and sealed-by-default would convey that message quite well. – Roman Starkov Apr 29 '10 at 19:20
  • 1
    I think there is a large performance consideration here: virtual methods cannot be inlined unless you do a lot of fancy stuff to determine the actual runtime type profiles of the code. Hotspot specializes in just this sort of thing which is why it starts out by interpreting the code before using the information it gathers to compile it. A basic tenet of good API design is to program to interfaces rather than implementation classes which will always require virtual calls through the interface anyway. – Ramon Sep 12 '10 at 20:17
  • 4
    This has given me a TON of pain where I want to rewrite a simple property to do some extra stuff when its set (naturally, I COULD just write a new method - but the original base class doesn't have that method so it doesn't work so well since the code utilizes polymorphism in lots of lists). Trusting the programmer to mark the right stuff as virtual has implications for libraries that hurt the end user. – alternative Mar 12 '11 at 01:02
  • 52
    Many people are psychopaths too, doesn't mean we should listen to them. Virtual should be the default in C#, code should be extensible without having to change implementations or completely re-write it. People who overprotect their APIs often end up with dead or half-used APIs which is far worse than the scenario of someone misusing it. – Chris Nicola Jun 15 '11 at 20:12
  • 1
    Different languages are designed with different goals in mind. This design decision fits quite well with the design goals of C#. – Mehrdad Afshari Dec 24 '11 at 00:45
  • 3
    I would like to see this question/answer reviewed by the Java designers (Josh Bloch & Co.) What would they say? – kevinarpe Nov 05 '12 at 08:24
  • Maybe it's better for consistency to have either both classes sealed and methods non-virtual, or both classes non-sealed and methods virtual by default. Personally I rate all overriding a code smell, the very capability itself. – nawfal Jan 13 '13 at 20:51
  • 2
    Considering how many frameworks and methodologies require virtual properties and methods at this point (many ORM's, mocking frameworks, testing suites) that are considered "best practices" or modern, vetted patterns, it seems C# has definitely gone down the wrong path with this. – Jeremy Holovacs Apr 10 '13 at 12:52
  • 3
    `Sealed` and `virtual` are not the only options. `Sealed` means, "I thought about the design, and you're not allowed to extend this." `Virtual` means, "I thought about the design, and you're allowed to extend this." Not having `sealed` nor `virtual` means, "I didn't think about the design, so extend this at your own risk." You can override anything in C#, but some thing require more hurdles than others. Extending a class not meant for polymorphism is a hack, in any language, C# just makes you admit it. – xero Aug 26 '13 at 20:00
  • 1
    @xero: `Virtual` also means "I didn't think about whether anyone will ever want to mock, stub, intercept, log, or decorate this method call or any of its inheritants", which I think is a good thing. – Jay Sullivan Aug 12 '14 at 03:32
  • 2
    @notfed That's why you define an interface. And if the API developer didn't, you can easily create your own with a wrapper for the real thing. Honestly its not that hard. I also wonder why so many people complain about sealed classes; makes me think people are too quick to think the best answer is inheritance when it rarely is (favor composition over inheritance, right?). – Andy Nov 06 '14 at 23:40
  • You said: "Having methods virtual by default means that every function in the class can be plugged out and replaced by another, which is not really a good thing." Why? – user1944408 Mar 03 '15 at 11:08
  • @user1944408 Because it is difficult to do that correctly. When you are writing a class and need to assume that every piece of it is replaceable, there are few invariants you can rely on. – Mehrdad Afshari Mar 03 '15 at 19:34
  • If a piece of it is not replaceable then you should make a private method. If you need a public method then you can make it static or final. Static has a benefit that a user can see from the client code that that method is related to exact class and that it is not overridden by subclass. – user1944408 Mar 05 '15 at 17:48
  • 1
    @user1944408 "If you need a public method then you can make it static or final" The public/private argument is red herring. The question is regardless of public/private, which one should be the default: sealed/final or not. – Mehrdad Afshari Mar 05 '15 at 19:44
92

I'm surprised that there seems to be such a consensus here that non-virtual-by-default is the right way to do things. I'm going to come down on the other - I think pragmatic - side of the fence.

Most of the justifications read to me like the old "If we give you the power you might hurt yourself" argument. From programmers?!

It seems to me like the coder who didn't know enough (or have enough time) to design their library for inheritance and/or extensibility is the coder who's produced exactly the library I'm likely to have to fix or tweak - exactly the library where the ability to override would come in most useful.

The number of times I've had to write ugly, desperate work-around code (or to abandon usage and roll my own alternative solution) because I can't override far, far outweighs the number of times I've ever been bitten (e.g. in Java) by overriding where the designer might not have considered I might.

Non-virtual-by-default makes my life harder.

UPDATE: It's been pointed out [quite correctly] that I didn't actually answer the question. So - and with apologies for being rather late....

I kinda wanted to be able to write something pithy like "C# implements methods as non-virtual by default because a bad decision was made which valued programs more highly than programmers". (I think that could be somewhat justified based on some of the other answers to this question - like performance (premature optimisation, anyone?), or guaranteeing the behaviour of classes.)

However, I realise I'd just be stating my opinion and not that definitive answer that Stack Overflow desires. Surely, I thought, at the highest level the definitive (but unhelpful) answer is:

They're non-virtual by default because the language-designers had a decision to make and that's what they chose.

Now I guess the exact reason that they made that decision we'll never.... oh, wait! The transcript of a conversation!

So it would seem that the answers and comments here about the dangers of overriding APIs and the need to explicitly design for inheritance are on the right track but are all missing an important temporal aspect: Anders' main concern was about maintaining a class's or API's implicit contract across versions. And I think he's actually more concerned about allowing the .Net / C# platform to change under code rather than concerned about user-code changing on top of the platform. (And his "pragmatic" viewpoint is the exact opposite of mine because he's looking from the other side.)

(But couldn't they just have picked virtual-by-default and then peppered "final" through the codebase? Perhaps that's not quite the same.. and Anders is clearly smarter than me so I'm going to let it lie.)

mwardm
  • 4
    Could not agree more. Very frustrating when consuming a Third Party api and wanting to override some behaviour and not being able to. – Andy Jul 26 '11 at 22:35
  • 4
    I enthusiastically agree. If you're going to publish an API (whether internally in a company or to the external world), you really can and should make sure your code is designed for inheritance. You should be making the API good and polished if you're publishing it to be used by many people. Compared to the effort needed for good overall design (good content, clear use cases, testing, documentation), designing for inheritance really isn't too bad. And if you're not publishing, virtual-by-default is less time-consuming, and you can always fix the small portion of cases where it's problematic. – Gravity Sep 15 '11 at 06:08
  • 2
    Now if only there were an editor feature in Visual Studio to automatically tag all methods/properties as `virtual`... Visual Studio add-in, anyone? – kevinarpe Nov 05 '12 at 08:01
  • 1
    While I wholeheartedly agree with you, this isn't exactly an answer. – Chris Morgan Dec 03 '12 at 06:54
  • 1
    @Chris Morgan: Oops - true! I guess that I (and everyone who liked my answer) must have arrived here with some frustration to vent and never really noticed. Your comment has been nagging at the perfectionist in me for the last few days though, so I'm going to add one (that took me way longer to put together than I ever wanted). – mwardm Dec 08 '12 at 01:20
  • 3
    Considering modern development practices like dependency injection, mocking frameworks, and ORMs, it seems clear that our C# designers missed the mark a bit. It's very frustrating to have to test a dependency when you cannot override a property by default. – Jeremy Holovacs Apr 10 '13 at 13:00
  • +1 for the link you provided. The rest of your answer reads more like an essay. – My-Name-Is Jan 09 '14 at 07:20
  • A good example of where *desperate workarounds* have been done is the case of C#'s `Moq` versus Java's `Mockito` – Jay Sullivan Aug 12 '14 at 03:34
  • 1
    Anders may be smarter than all of us, but he's still fallible as this question shows. – Ian Newson Nov 30 '14 at 11:12
  • While I disagree with your viewpoint, I appreciate your thoughtful answer and the time you took to research the topic and link for posterity. Thank you! – Aluan Haddad Oct 06 '16 at 16:24
  • I am a C# fan. I don't like Java for certain reasons maybe because I have worked mostly in C#. However, I must say that with current TDD trend, if every method of a class is virtual then it helps to mock a method easily even if Code is not written keeping TDD in mind. – Parag Meshram Feb 24 '17 at 06:35
  • 100% agreed. Every time I go to extend my own class to override a method (for testing or whatever), I'm thinking... I can override things in C#.... I literally write OVERRIDE on the subclass' method, only to have the C# compiler say:"Are you sure? I need to know you're sure about this... if you could just write 'virtual' on the base class, that'd be great." And if you didn't write the class, you have to decompile/alter/rebuild it to make methods virtual. Stupid. It's inconsistent to allow classes to be overridden by default, but not methods, especially when calls are 'callvirt' by default. – Triynko Dec 20 '17 at 16:28
19

Because it's too easy to forget that a method may be overridden and not design for that. C# makes you think before you make it virtual. I think this is a great design decision. Some people (such as Jon Skeet) have even said that classes should be sealed by default.
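For illustration, a small sketch of the keywords this opt-in design involves (class names invented): `virtual` opens a method to overriding, `sealed override` shuts it again partway down a hierarchy, and a `sealed` class forbids derivation entirely.

```csharp
using System;

class Shape
{
    public virtual string Draw() => "shape";   // explicitly opted in to overriding
}

class Circle : Shape
{
    // Participates in dispatch but stops further overriding:
    // a class deriving from Circle can no longer override Draw.
    public sealed override string Draw() => "circle";
}

sealed class UnitCircle : Circle { }           // sealed class: no derivation at all

class Program
{
    static void Main()
    {
        Shape s = new UnitCircle();
        Console.WriteLine(s.Draw());           // circle — virtual dispatch still works
    }
}
```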

Zifre
12

To summarize what others said, there are a few reasons:

1- In C#, there are many things in the syntax and semantics that come straight from C++. The fact that methods were non-virtual by default in C++ influenced C#.

2- Having every method virtual by default is a performance concern because every method call must use the object's Virtual Table. Moreover, this strongly limits the Just-In-Time compiler's ability to inline methods and perform other kinds of optimization.

3- Most importantly, if methods are not virtual by default, you can guarantee the behavior of your classes. When they are virtual by default, such as in Java, you can't even guarantee that a simple getter method will do as intended because it could be overridden to do anything in a derived class (of course you can, and should, make the method and/or the class final).

One might wonder, as Zifre mentioned, why the C# language did not go a step further and make classes sealed by default. That's part of the whole debate about the problems of implementation inheritance, which is a very interesting topic.
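Point 3 can be sketched as follows (names invented for illustration): because the getter is non-virtual, the base class's own logic can rely on it even when a derived class hides it.

```csharp
using System;

class Account
{
    private readonly int balance = 100;

    // Non-virtual: the class can guarantee what this returns.
    public int Balance() => balance;

    // Internal logic can safely rely on that guarantee.
    public bool CanWithdraw(int amount) => amount <= Balance();
}

class HostileAccount : Account
{
    // Hiding, not overriding: only calls made through a HostileAccount
    // reference see this; Account's own calls are unaffected.
    public new int Balance() => int.MaxValue;
}

class Program
{
    static void Main()
    {
        var h = new HostileAccount();
        Console.WriteLine(h.CanWithdraw(1_000_000));  // False — the invariant holds
    }
}
```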

Trillian
  • 1
    I would have preferred classes sealed by default too. Then it would be consistent. – Andy Nov 06 '14 at 23:45
9

C# is influenced by C++ (among other languages). C++ does not enable dynamic dispatch (virtual functions) by default. One (good?) argument for this is the question: "How often do you implement classes that are members of a class hierarchy?" Another reason to avoid enabling dynamic dispatch by default is memory footprint: a class without a virtual pointer (vptr) to a virtual table is of course smaller than the corresponding class with late binding enabled.

The performance question is not so easy to answer with a simple "yes" or "no". The reason is just-in-time (JIT) compilation, a run-time optimization in C#.

Another, similar question about "speed of virtual calls.."

Schildmeijer
  • I am a bit doubtful that virtual methods have performance implications for C#, because of the JIT compiler. That's one of the area where JIT can be better than offline compilation, because they can inline function calls which are "unknown" before runtime – David Cournapeau May 02 '09 at 14:42
  • 1
    Actually, I think it's more influenced by Java than C++ which does that by default. – Mehrdad Afshari Jul 22 '09 at 12:42
6

The simple reason is design and maintenance cost in addition to performance costs. A virtual method has additional cost as compared with a non-virtual method because the designer of the class must plan for what happens when the method is overridden by another class. This has a big impact if you expect a particular method to update internal state or have a particular behavior. You now have to plan for what happens when a derived class changes that behavior. It's much harder to write reliable code in that situation.

With a non-virtual method you have total control. Anything that goes wrong is the fault of the original author. The code is much easier to reason about.
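A sketch of the kind of breakage the answer describes, with invented names: once a state-changing method is virtual, a derived class can silently violate the base class's bookkeeping.

```csharp
using System;
using System.Collections.Generic;

class Inventory
{
    private readonly List<string> items = new List<string>();
    public int Count { get; private set; }

    // Virtual: the author must now plan for overrides of this method.
    public virtual void Add(string item)
    {
        items.Add(item);
        Count++;                       // intended invariant: Count == items.Count
    }
}

class LoggingInventory : Inventory
{
    public override void Add(string item)
    {
        Console.WriteLine($"adding {item}");
        // Forgot to call base.Add(item) — Count silently stops tracking items.
    }
}

class Program
{
    static void Main()
    {
        Inventory inv = new LoggingInventory();
        inv.Add("widget");
        Console.WriteLine(inv.Count);  // 0, not 1 — the base's invariant is broken
    }
}
```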

JaredPar
  • This is a really old post but for a newbie who is writing his first project I constantly fret over unintended/unknown consequences of the code I write. It's very comforting knowing that non-virtual methods are totally MY fault. – trevorc May 10 '12 at 14:33
2

If all C# methods were virtual then the vtbl would be much bigger.

C# objects only have virtual methods if the class has virtual methods defined. It is true that all objects have type information that includes a vtbl equivalent, but if no virtual methods are defined then only the base Object methods will be present.

@Tom Hawtin: It is probably more accurate to say that C++, C# and Java are all from the C family of languages :)

Thomas Bratt
  • 1
    Why would it be a problem for the vtable to be much bigger? There's only 1 vtable per class (and not per instance), so its size doesn't make a whole lot of difference. – Gravity Sep 15 '11 at 05:47
1

Coming from a Perl background, I think C# sealed the doom of every developer who might have wanted to extend and modify the behaviour of a base class through a non-virtual method, without forcing all users of the new class to be aware of potentially behind-the-scenes details.

Consider the List class' Add method. What if a developer wanted to update one of several potential databases whenever a particular List is 'Added' to? If 'Add' had been virtual by default the developer could develop a 'BackedList' class that overrode the 'Add' method without forcing all client code to know it was a 'BackedList' instead of a regular 'List'. For all practical purposes the 'BackedList' can be viewed as just another 'List' from client code.

This makes sense from the perspective of a large main class which might provide access to one or more list components which themselves are backed by one or more schemas in a database. Given that C# methods are not virtual by default, the list provided by the main class cannot be a simple IEnumerable or ICollection or even a List instance but must instead be advertised to the client as a 'BackedList' in order to ensure that the new version of the 'Add' operation is called to update the correct schema.
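A sketch of the problem with the concrete `List` example (the `BackedList` name comes from the answer; the audit-log field is invented): because `List<T>.Add` is not virtual, hiding it with `new` only helps clients that know the derived type.

```csharp
using System;
using System.Collections.Generic;

class BackedList : List<string>
{
    public readonly List<string> AuditLog = new List<string>();

    // List<string>.Add is not virtual, so 'new' merely hides it.
    public new void Add(string item)
    {
        AuditLog.Add(item);
        base.Add(item);
    }
}

class Program
{
    static void Main()
    {
        var backed = new BackedList();

        List<string> asList = backed;
        asList.Add("via base reference");     // bypasses BackedList.Add entirely

        backed.Add("via derived reference");  // hits the hiding method

        Console.WriteLine(backed.AuditLog.Count);  // 1 — the first Add was never seen
    }
}
```

This is also why the BCL provides `System.Collections.ObjectModel.Collection<T>`, whose virtual `InsertItem` is a deliberate, designed-for-inheritance extension point.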

Travis
  • True, virtual methods by default makes job easier, but does it make sense? There are many things you could employ to make life easier. Why not just public fields in classes? Altering its behaviour is too easy. In my opinion everything in language should be strictly encapsulated, rigid and resilient to change by default. Change it only if required. Just being lil' philosophical abt design decisions. Continued.. – nawfal Jan 13 '13 at 20:56
  • ...To talk on current topic, Inheritance model should be used only if `B` is `A`. If `B` requires something different from `A` then its not `A`. I believe overriding as a capability in the language itself is a design flaw. If you need a different `Add` method, then your collection class is not a `List`. Trying to tell it is is faking. The right approach here is composition (and not faking). True the whole framework is built on overriding capabilities, but I just dont like it. – nawfal Jan 13 '13 at 21:01
  • I think I understand your point, but on the concrete example provided: 'BackedList' could just implement the interface 'IList' and the client only knows about the interface. right? am i missing something? however I do understand the broader point you are trying to make. – Vetras Apr 16 '20 at 14:59
0

Performance.

Imagine a set of classes that override a virtual base method:

class Base {
   public virtual int func(int x) { return 0; }
}

class ClassA: Base {
   public override int func(int x) { return x + 100; }
}

class ClassB: Base {
   public override int func(int x) { return x + 200; }
}

Now imagine you want to call the func method:

   Base foo;
   //...sometime later...
   int x = foo.func(42);

Look at what the CPU has to actually do:

    mov   ecx, bfunc$ -- load the address of the "ClassB.func" method from the VMT
    push  42          -- push the 42 argument
    call  [ecx]       -- call ClassB.func

No problem? No, problem!

The assembly isn't that hard to follow:

  1. mov ecx, bfunc$: This needs to reach into memory and hit the object's Virtual Method Table (VMT) to get the address of the overriding ClassB.func method. The CPU will begin the fetch of that data from memory, and then it will continue on:
  2. push 42: Push the argument 42 onto the stack for the call to the function. No problem, that can run right away, and then we continue to:
  3. call [ecx] Call the address of the ClassB.func function. ← !!!

That's a problem. The address of the ClassB.func function has not been fetched from the VMT yet. This means the CPU doesn't know where to go next. Ideally it would follow the jump and continue speculatively executing instructions while it waits for the address of ClassB.func to come back from memory. But it can't; so we wait.

If we are lucky, the data is already in the L2 cache. Getting a value out of the L2 cache into a place where it can be used takes 12-15 cycles. And the CPU can't know where to go next until that wait is over.


Our program is stuck doing nothing for 12-15 cycles.

The CPU core has 7 execution engines. The main job of the CPU is keeping those 7 pipelines full of stuff to do. That means:

  • re-ordering your machine code on the fly (out-of-order execution)
  • Starting the fetch from memory as soon as possible, letting us move on to other things
  • executing 100, 200, 300 instructions ahead. It will be executing 17 iterations ahead in your loop, across multiple function calls and returns
  • it has a branch predictor to try to guess which way a comparison will go, so that it can continue executing ahead while we wait. If it guesses wrong, then it does have to undo all that work. But the branch predictor is not stupid - it's right 94% of the time.

Your CPU has all this power, and capability, and it's just STALLED FOR 15 CYCLES!?

This is awful. This is terrible. And you suffer this penalty every time you call a virtual method - whether you actually overrode it or not.

Our program is 12-15 cycles slower on every method call because the language designer made virtual methods opt-out rather than opt-in.

This is why Microsoft decided to not make all methods virtual by default: they learned from Java's mistakes.

Someone ported Android to C#, and it was faster

In 2012, the Xamarin people ported all of Android's Dalvik (i.e. Java) to C#. From them:

Performance

When C# came around, Microsoft modified the language in a couple of significant ways that made it easier to optimize. Value types were introduced to allow small objects to have low overheads and virtual methods were made opt-in, instead of opt-out which made for simpler VMs.

(emphasis mine)
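A rough micro-benchmark sketch of the claim (names invented; the numbers will vary wildly by JIT version and hardware, and a modern JIT may devirtualize the sealed call entirely, so treat this as an experiment rather than a proof):

```csharp
using System;
using System.Diagnostics;

class VirtualBase
{
    public virtual int Next(int x) => x + 1;
}

sealed class SealedImpl : VirtualBase
{
    public override int Next(int x) => x + 1;
}

class Program
{
    static int NextDirect(int x) => x + 1;   // non-virtual (static) call, easily inlined

    static void Main()
    {
        const int N = 50_000_000;
        VirtualBase v = new SealedImpl();

        var sw = Stopwatch.StartNew();
        int a = 0;
        for (int i = 0; i < N; i++) a = v.Next(a);      // callvirt through the VMT
        Console.WriteLine($"virtual: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        int b = 0;
        for (int i = 0; i < N; i++) b = NextDirect(b);  // direct call
        Console.WriteLine($"direct:  {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(a == b);                      // True — same result either way
    }
}
```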

Ian Boyd
  • They do have a very efficient map. *"You want to call function `x`? Here is its address."* The problem **is** a map. The fact that you have to have a map at all costs you 2 cpu cycles - because i have to `JMP` to that address i just read. The other problem with a map is that it has to exist in memory, which means you have to read the address of that function from memory. Reading from memory, if you're **really** lucky can take 32 cpu cycles. So why waste 32 cpu cycles, or 2 cpu cycles, when you can waste 0 cpu cycles. – Ian Boyd May 20 '22 at 13:17
0

It is certainly not a performance issue. Sun's Java interpreter uses the same code to dispatch (the invokevirtual bytecode), and HotSpot generates exactly the same code whether a method is final or not. I believe all C# objects (but not structs) have virtual methods, so you are always going to need the vtbl/runtime class identification anyway. C# is a dialect of the "Java-like languages". To suggest it comes from C++ is not entirely honest.

There is an idea that you should "design for inheritance or else prohibit it". Which sounds like a great idea right up to the moment you have a severe business case to put in a quick fix. Perhaps inheriting from code that you don't control.

Tom Hawtin - tackline
  • Hotspot is forced to go to huge lengths to do that optimisation, precisely because all methods are virtual by default, which has a huge performance impact. CoreCLR is able to achieve similar performance whilst being far simpler – Yair Halberstadt Nov 27 '18 at 07:55
  • @YairHalberstadt Hotspot needs to be able to back out compiled code for a variety of other reasons. It's years since I've looked at the source, but the difference between `final` and effectively `final` methods is trivial. It's also worth noting that it can do bimorphic inlining, that is inline methods with two different implementations. – Tom Hawtin - tackline Nov 27 '18 at 17:16