5

I have read a lot about "program to interfaces" and "inversion of control" over the last few days, mostly in the context of the Java language. My question is whether this is also common practice in C++ development. What are the advantages? What are the disadvantages?

Is it worth applying to small projects (say, 15-20 classes)?

tyrondis
  • 3,364
  • 7
  • 32
  • 56
  • What do you mean by "interfaces" here? Do you mean an interface in the sense of an object in the language (C++ has abstract base classes, but these aren't used as much as Java interfaces) or an interface in the sense of a defined way to access functionality like an API? If the latter, the usual practice in C and C++ is to put the interface in the header files and most or all of the implementations in the source files. – David Thornley Oct 27 '10 at 21:42
  • Speaking of C++ I was talking about an abstract base class with pure virtual methods. – tyrondis Oct 27 '10 at 21:50

10 Answers

9

Yes, it is fairly common, but not in the form you might expect.

In Java, the interface is formalized and explicit, and programming to an interface means implementing that specific interface.

In C++ the same thing is sometimes done too (using abstract base classes rather than interfaces), but another common way to do it in C++ is with templates, where the interface is implicit.

For example, the standard library algorithms all work with the iterator "interface", except that no such interface is ever defined in code. It is a convention, and nothing more.

A valid iterator is required to expose certain functionality, and so, any type which exposes this functionality is an iterator. But it doesn't have to implement some kind of hypothetical IIterator interface like you would in Java.

The same is common in user code. You often write your code to accept a template parameter which might be anything that works. You implicitly define an interface through your use of the type: anything you require from it becomes part of this implicit interface that a type must satisfy in order to be usable.

The interface is never formalized in code, but you are still using it and programming against it.
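For example, a minimal sketch of such an implicit interface (the function and variable names here are made up for illustration): anything passed as `Iter` merely has to support `!=`, prefix `++` and unary `*`, and that usage is the whole "interface".

```cpp
#include <iostream>
#include <list>
#include <vector>

// The implicit interface: Iter must support !=, prefix ++ and unary *;
// T must support +. Nothing is declared anywhere -- the usage is the contract.
template <typename Iter, typename T>
T accumulate_range(Iter first, Iter last, T init) {
    for (; first != last; ++first) {
        init = init + *first;
    }
    return init;
}

int main() {
    std::vector<int> v{1, 2, 3};
    std::list<double> l{0.5, 1.5};

    std::cout << accumulate_range(v.begin(), v.end(), 0) << '\n';   // 6
    std::cout << accumulate_range(l.begin(), l.end(), 0.0) << '\n'; // 2
    int* raw = v.data();
    std::cout << accumulate_range(raw, raw + 3, 0) << '\n';         // raw pointers satisfy it too
}
```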

jalf
  • 243,077
  • 51
  • 345
  • 550
  • This is absolutely the most on-the-target answer. One question though: assuming a C++ project is sensitive to performance, should static polymorphism have higher priority than dynamic polymorphism when it comes to "programming to an interface"? – h9uest May 22 '15 at 15:36
  • @h9uest mmmmaybe! Even if a project is sensitive to performance, it's still only perhaps 10% of the actual code in that project that's at all sensitive to performance. For most of the code, it just doesn't matter. Go for the approach that yields the simplest, most understandable code. Then at least it'll be easy to modify later, if and when you find out that performance is an actual problem. – jalf May 22 '15 at 17:19
  • I've been using the traditional approach (ABCs with pure virtual functions; pointers for dynamic polymorphism) for interfaces in C++, and it soon gets messy when the codebase expands, and the overhead of v-table lookups is introduced. I can't quite put my finger on it, but I feel this isn't the "proper" C++ approach to interfaces. One problem with static polymorphism using templates: I no longer have a nice and clean header file which lists all the functions on an interface; so when I provide an implementation, I need to go through the big template header to find the function signature. Any neat approach? – h9uest Jun 02 '15 at 10:48
  • 1
    @h9uest some people split the implementation out into ".inc" files or similar (which are then included from the header). And if the set of types the template is going to be instantiated with is known (and small) then you could define the member functions in the .cc file and explicitly instantiate the template there. But other than that, nope, no great solutions unfortunately. – jalf Jun 02 '15 at 11:55
7

The principles you speak of are generally applicable to any OO language. The basic tenet here is "loose coupling". A class that depends upon another class (contains an instance of it and calls methods on it as part of its own work) really only depends on a set of functionality the dependency provides. If the class defines a reference to a concrete class that it depends on, and you then want to replace the class with another, you not only have to develop the new class, but change the dependent class to depend on the new type. This is generally bad, because if your class is depended on by many other classes, you have to change code in multiple places, requiring you to test all the use cases involving those objects to ensure you haven't broken previously-working functionality.

Interfaces were designed to eliminate this, allowing multiple classes that are unrelated by ancestry to be used interchangeably based on a common, enforced set of methods that you know a class will implement. If, instead of depending on a class, you depended on an interface, any class implementing the interface would fulfill the dependency. That allows you to write a new class to replace an old one, without the class that uses it knowing the difference. All you have to modify is the code that creates the concrete implementation of the class filling the dependency.

This presents a quandary; sure, your class Depender can say it needs an IDoSomething instead of a DoerClass, but if Depender knows how to create a DoerClass to use as the IDoSomething, you haven't gained anything; if you want to replace DoerClass with BetterDoer, you must still change Depender's code. The solution is to hand the responsibility for supplying the dependency to a third party, a Creator. Which class is chosen for this depends on the context. If a class naturally holds both a Depender and a DoerClass, it is the obvious place to put them together. This is often the case when you have one class with two helper dependencies, and one dependency needs the other as well. Other times you may create a Factory, which exists to provide the caller with an instance of a specific object, preferably with all dependencies hooked up.
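In C++ this might look roughly as follows; a sketch using the names from the paragraph above, with a pure abstract base class playing the role of the interface and the dependency supplied through the constructor:

```cpp
#include <iostream>
#include <memory>

// The "interface": a pure abstract base class.
class IDoSomething {
public:
    virtual ~IDoSomething() = default;
    virtual void doIt() = 0;
};

class DoerClass : public IDoSomething {
public:
    void doIt() override { std::cout << "DoerClass doing it\n"; }
};

class BetterDoer : public IDoSomething {
public:
    void doIt() override { std::cout << "BetterDoer doing it\n"; }
};

// Depender only names the interface; a Creator hands it a concrete instance.
class Depender {
public:
    explicit Depender(std::unique_ptr<IDoSomething> doer) : doer_(std::move(doer)) {}
    void work() { doer_->doIt(); }
private:
    std::unique_ptr<IDoSomething> doer_;
};

int main() {
    // Only this creation code changes when DoerClass is replaced with BetterDoer.
    Depender d(std::make_unique<BetterDoer>());
    d.work();
}
```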

If you have several interdependent classes, or dependencies many levels deep, you may consider an IoC framework. IoC containers are to Factories as Repositories are to DAOs; they know how to get you a fully-hydrated instance of ANY class requiring one or more dependencies, like a Repository can produce any fully-hydrated domain object from data in the DB. It does this by being told what concrete class should be used to fill a dependency in a certain situation, and when asked to provide a class, it will instantiate that class, providing instances of the required dependencies (and dependencies of dependencies). This can allow patterns where Class A depends on B, which depends on C, but A cannot know about C. The IoC framework knows about all three, and will instantiate a B, give it a new C, then give the B to a new A.
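C++ has no single dominant IoC framework, but the wiring described above can be sketched by hand in a "composition root"; the class names A, B and C are taken from the paragraph, everything else is illustrative:

```cpp
#include <memory>

class C {};  // lowest-level dependency

class B {
public:
    explicit B(std::unique_ptr<C> c) : c_(std::move(c)) {}
private:
    std::unique_ptr<C> c_;
};

class A {
public:
    explicit A(std::unique_ptr<B> b) : b_(std::move(b)) {}  // A never names C
private:
    std::unique_ptr<B> b_;
};

int main() {
    // The composition root plays the role of the IoC container:
    // it is the only place that knows about all three classes.
    A a(std::make_unique<B>(std::make_unique<C>()));
}
```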

KeithS
  • 70,210
  • 21
  • 112
  • 164
  • 3
    It's only half the story though, in that it doesn't explain what might be the most common way in which people "program to an interface" in C++: through templates and static polymorphism. – jalf Oct 27 '10 at 23:45
  • So is it recommended that for every class I introduce I write an interface/abstract base class, even if I do not see any need for another implementation (yet)? – tyrondis Oct 28 '10 at 13:14
  • @Benjamin: It is recommended by SOLID practices. GRASP would have you do the same IF you foresaw a need, and to refactor if an unforeseen need came about. There are tools available that make refactoring in these situations easier. – KeithS Oct 28 '10 at 15:14
4

Absolutely! Encapsulation is a major part of OOP philosophy. By keeping the implementation separate from the interface of a class, your code becomes much more versatile. For example, if I had a 'Vector' class and I wanted to change the internal representation from an x and y pair to a length and direction (let's say for efficiency), then changing only the few member functions that handle the implementation is FAR easier than scouring through 100 source files for every class that depends on the class's implementation.
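A minimal sketch of that idea (the member names here are just illustrative):

```cpp
#include <cmath>

// Callers only ever see x(), y() and length() -- never the stored data.
class Vector {
public:
    Vector(double x, double y) : x_(x), y_(y) {}
    double x() const { return x_; }
    double y() const { return y_; }
    double length() const { return std::sqrt(x_ * x_ + y_ * y_); }
private:
    // Switching to a length/direction representation means rewriting only
    // the members above; the 100 files calling x(), y() and length() are untouched.
    double x_, y_;
};
```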

And yes, small projects can benefit as well. This concept is also useful when you have multiple classes that do the same thing (say rendering) but in different ways (perhaps for different platforms). By giving them all the same interface (or in C++, deriving them all from the same abstract base class), any client can switch between them with a simple substitution.

Alexander Rafferty
  • 6,134
  • 4
  • 33
  • 55
  • 2
    Your answer isn't incorrect, but I don't think it captures the full intent of the question. For example how would you mock a Vector class for testing? – Mark Ransom Oct 27 '10 at 21:27
1

I think there may be some confusion about terminology, since "interface" isn't a term defined by the C++ language. In the context of your question you obviously mean an abstract class specification which can be implemented by one or more concrete classes.

I wouldn't say it's common, but it's not uncommon either - maybe somewhere in between? Microsoft's COM is built on the concept.

See this question for more information about how it's done: How do you declare an interface in C++?
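In short, the usual pattern is a class consisting only of pure virtual functions plus a virtual destructor; a rough sketch (ISerializable and Config are hypothetical names):

```cpp
#include <iostream>

// The "interface": nothing but pure virtual functions and a virtual destructor.
class ISerializable {
public:
    virtual ~ISerializable() = default;
    virtual void serialize(std::ostream& out) const = 0;
};

// Any class overriding the pure virtuals can be used wherever the interface is expected.
class Config : public ISerializable {
public:
    void serialize(std::ostream& out) const override { out << "key=value\n"; }
};

int main() {
    Config c;
    const ISerializable& s = c;  // callers hold the interface, not the concrete type
    s.serialize(std::cout);
}
```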

Community
  • 1
  • 1
Mark Ransom
  • 299,747
  • 42
  • 398
  • 622
1

There's more to interfaces in C++ than in Java. The simple answer is yes, it is a common practice. Whether you should follow it in your small project depends on the case. Judge for each class, not for the project as a whole. As a general rule, if it's not broken don't fix it.

That said, in C++ you have two kinds of interfaces: that which supports runtime polymorphism and that which supports compile-time polymorphism. The runtime polymorphism is very similar to what you know from Java. The compile-time polymorphism comes from the use of templates.

The advantage of runtime polymorphism is that it usually results in a smaller binary, and it makes it easier for the compiler to produce meaningful error messages at compile time. On the downside it also results in a slightly slower binary, because calls require one more dereference.

The advantage of compile-time polymorphism is that your source is often smaller, and the calls are optimized to be as fast as they can get. On the other hand, because the compiler needs to do more work, compilation tends to be slower; often significantly so, because templates are usually defined in header files and are thus recompiled again and again for each compilation unit that depends on them.
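A small side-by-side sketch of the two kinds (Shape and Square are invented names): the virtual call goes through the vtable at run time, while the template version is resolved, and can be inlined, at compile time.

```cpp
#include <iostream>

// Runtime polymorphism: dispatch through the vtable, one extra indirection per call.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

double total_area(const Shape& s) { return s.area(); }

// Compile-time polymorphism: a separate instantiation per type, no indirection.
template <typename S>
double total_area_t(const S& s) { return s.area(); }

struct Square : Shape {
    double side = 2.0;
    double area() const override { return side * side; }
};

int main() {
    Square sq;
    std::cout << total_area(sq) << ' ' << total_area_t(sq) << '\n';  // 4 4
}
```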

wilhelmtell
  • 57,473
  • 20
  • 96
  • 131
1

I'd say that these concepts extend well beyond OO and are equally applicable to most, if not all, of the paradigms you might use in C++, including generic programming.

Edward Strange
  • 40,307
  • 7
  • 73
  • 125
0

Well, "program to interfaces" is used by anyone in any language that writes libraries.

A library user would not want the library interface changing all the time. Imagine if you had to rewrite your programs because the C library authors decided to redefine printf. Or what if the Java library decided to redefine the interface of toString?

So yes, "program to interfaces" is used in C++.

Zan Lynx
  • 53,022
  • 10
  • 79
  • 131
0

Interfaces became popular with Java, and more recently with C#, but object-oriented programming already existed before interfaces did. C++, the most complete language ever created (in my opinion), did not grow up on the basis of interfaces. The primary need for creating interfaces came from solving the problems of multiple inheritance (which C++ allows).

In C++ it is not a widespread practice to define interfaces as such (but they can be simulated through pure abstract classes). There is also a common practice of putting declarations in one file (.h) and the implementation in another (.cpp); this, in conjunction with pure abstract classes, probably served as inspiration for the interfaces in modern OO languages.

ArBR
  • 4,032
  • 2
  • 23
  • 29
  • I do not agree with your opinion that the primary need for interfaces was multiple inheritance! In my opinion interfaces were created to separate contract from implementation. As an interface is a pure contract, multiple inheritance of interfaces is allowed. Multiple inheritance from non-pure contracts is considered a bad practice in C++ as well. – sanosdole Oct 25 '11 at 12:38
  • @sanosdole - what I mean is that interfaces solve the problem of multiple inheritance in C++; as you said, multiple inheritance is considered bad practice in C++. Inheritance from pure abstract classes in C++ is the equivalent of interface implementation in modern OO languages. – ArBR Oct 30 '11 at 02:02
0

Have a look at the design of the Standard Template Library. For example, MSDN Channel 9 has some nice video lectures which explain the design (and Microsoft's implementation) of the STL.

http://channel9.msdn.com/Shows/Going+Deep/C9-Lectures-Introduction-to-STL-with-Stephan-T-Lavavej

Nils
  • 13,319
  • 19
  • 86
  • 108
0

For "program to interface" as usual, it depends. If you use runtime polymorphism the disadvantage is execution speed and producing more classes. If you use compile time polymorphism for your interfaces the disadvantages are compile speed and complexity (concerning compile errors, debugging etc.).

Both share the big advantage of better maintainability. This is achieved through encapsulation, which yields better testability and a stronger separation of concerns. Adopting this style of thinking reduces your problems to a smaller scale, enclosed by the interfaces of the other components.

This style of thinking is considered good practice in any programming language, even if you do not explicitly code the interfaces (see also the SOLID principles).

Inversion of Control is another beast in C++, as there are no broadly accepted standard frameworks. You should still do it, but you have to manage it on your own. Best advice here: stay away from statics, program to interfaces, and you should be fine for smaller projects.

sanosdole
  • 2,469
  • 16
  • 18