
I've recently started learning about makefiles, and the video I'm watching includes a compilation of two source files and a header file. A class was defined in the header file, which was included in both source files. A method of the class was defined in one source file and called in the second (main.cpp). Why would I ever need to have 2 source files (.c/.cpp)?
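For concreteness, here's a minimal sketch of the layout described above (the file, class, and function names are made up for illustration, not taken from the video):

```cpp
// greeter.h -- the header: declares the class, included by both source files
#ifndef GREETER_H
#define GREETER_H
#include <string>

class Greeter {
public:
    std::string greet(const std::string& name) const; // declared here, defined in greeter.cpp
};

#endif

// greeter.cpp -- first source file: defines the method
#include "greeter.h"

std::string Greeter::greet(const std::string& name) const {
    return "Hello, " + name + "!";
}

// main.cpp -- second source file: calls the method
#include <iostream>
#include "greeter.h"

int main() {
    Greeter g;
    std::cout << g.greet("world") << '\n';
}
```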

Serket
  • do you understand why you should separate code into header and source? Once you do, you will also know why you don't want to put everything in the header. – 463035818_is_not_an_ai Jul 03 '20 at 22:09
  • @idclev463035818 can you explain why I should separate into header and source? From my understanding, it's for organisational purposes. – Serket Jul 03 '20 at 22:12
  • A header is copied into the source file during preprocessing. The result is one mammoth file that's fed into the compiler. If everything is in headers, then every time you make a change, that one file has to be rebuilt. It includes the project's worth of headers, so that one change causes ALL of the files to be recompiled. This gets very time consuming. – user4581301 Jul 03 '20 at 22:12
  • https://stackoverflow.com/questions/1305947/why-does-c-need-a-separate-header-file/1306069 – 463035818_is_not_an_ai Jul 03 '20 at 22:13
  • @user4581301 then what happens if I have multiple source files, but two of them have main functions. Also, could you please post all of this as an answer? Thanks – Serket Jul 03 '20 at 22:14
  • Headers contain an interface. The interface does not change much. What changes most of the time is details in the implementation files. With headers and implementation files, you only have to recompile the implementation files that changed and occasionally a whole bunch of stuff when the interface changes. – user4581301 Jul 03 '20 at 22:14
  • Read [How does the compilation/linking process work?](https://stackoverflow.com/questions/6264249/how-does-the-compilation-linking-process-work). When you're done, the rest of this should make sense. If you have an identifier that exists more than once, the linker will not know which one it should use. If you have two files with `main`, both files will compile, but the linker can't pick which one to use (see the sketch after these comments). – user4581301 Jul 03 '20 at 22:18
  • Putting actual method and function implementations into header files creates the potential for multiple instances of the same method or function to exist when object files are linked together. It can also dramatically increase compilation times for large code bases, as the same functions or methods have to be compiled many times. There are probably also all kinds of subtle issues raised by having multiple instances of a function or method available, should such a scheme be used. – Andrew Henle Jul 03 '20 at 22:22
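As a sketch of the duplicate-`main` situation mentioned in the comments (file names are hypothetical): each translation unit compiles on its own, but linking them into one program fails.

```cpp
// first.cpp -- compiles fine on its own (e.g. g++ -c first.cpp)
int main() {
    return 0;
}

// second.cpp -- also compiles fine on its own (e.g. g++ -c second.cpp)
int main() {   // same identifier as in first.cpp
    return 1;
}

// Linking the two object files into a single executable
// (e.g. g++ first.o second.o) fails: the linker reports a
// multiple definition of `main`, because it cannot know which
// one the program is supposed to start from.
```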

2 Answers


Every time you make any change to a header file, every file that includes it must be recompiled, but if you change a source file, only that source file needs to be recompiled.

When working on a project of non-trivial size this can dramatically reduce compilation times. Since most of your work is typically done on the implementation side, and header files change far less often than source files, most recompilation stays localized.

You'll also want to split up your source into separate files for reasons of navigability. Working through a 2000+ line source file is a lot more hassle than 10 files around 200 lines each. This is even more important when version control kicks in and you're working on a team, as then merge conflicts are reduced.

Imagine if Chrome was just a singular .cpp file. Making even the most trivial of changes to it would require recompiling the whole thing, which even on a well-equipped machine is going to take 6-12 hours. Compiling a single source file and re-linking is, by comparison, on the order of minutes.

In practice you'll often have one class per source file with a corresponding header file. Functions are grouped together logically into sets, each in their own pair of header/source files.
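As a minimal sketch of that pattern (names are illustrative, not from any particular project): the header holds the interface, and the matching source file holds the implementation.

```cpp
// counter.h -- interface: what the class offers
#ifndef COUNTER_H
#define COUNTER_H

class Counter {
public:
    void increment();
    int value() const;
private:
    int count_ = 0;
};

#endif

// counter.cpp -- implementation: how it does it
#include "counter.h"

void Counter::increment() { ++count_; }
int Counter::value() const { return count_; }

// One class per .h/.cpp pair keeps each piece small and easy to find;
// related free functions get grouped into their own header/source pairs
// in the same way.
```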

tadman

The human mind can't hold very much information at a time, so we chop things up into smaller, logical and coherent pieces.

OK. So one main.cpp that includes the dozens or hundreds or thousands of files in the program, all implemented in header files of a reasonable size, each covering one concept or aggregating more header files should that one concept be too broad to be easily described1 in a single header. Problem solved, right? Yup. But that's only one problem.

What about resource consumption?

It's helpful to read [How does the compilation/linking process work?](https://stackoverflow.com/questions/6264249/how-does-the-compilation-linking-process-work) before continuing.

During preprocessing every `#include` directive is replaced with the contents of the included file. The result is one mammoth file that's fed into the compiler. That takes up a lot of memory. Further, if one file includes everything as headers, then every time you make a change, no matter how small, this one file will need to be rebuilt. It includes the project's worth of headers, so that one change causes ALL of the files to be recompiled. This gets very time consuming.
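A tiny sketch of that substitution (file names are hypothetical):

```cpp
// numbers.h
#ifndef NUMBERS_H
#define NUMBERS_H
int twice(int x);               // declaration only
#endif

// numbers.cpp
#include "numbers.h"
int twice(int x) { return 2 * x; }

// main.cpp, as you write it
#include "numbers.h"
int main() { return twice(21); }

// Roughly what the compiler sees for main.cpp after preprocessing
// (you can inspect it yourself with `g++ -E main.cpp`):
//
//   int twice(int x);
//   int main() { return twice(21); }
//
// With a real project's worth of headers pulled in, that preprocessed
// translation unit becomes enormous -- and it is rebuilt in full whenever
// anything it includes changes.
```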

Memory keeps getting cheaper, so constraining the first resource isn't as important as it was in the 1970s when all of this was being invented2. It still rears its ugly head from time to time. That's why I cross compile on a big, fat PC rather than building code directly on a Raspberry Pi.

Time doesn't get cheaper. Never has, never will.

But if you follow the best practices and headers contain an interface (what it does) rather than an implementation (how it does it), you'll find that the headers don't change much. What changes most of the time is the how-to details in the implementation files. A small change is confined to the one implementation file that provides the changed behaviour, and likely that is the only file that needs to be recompiled. This is a huge improvement: from every file every time down to one or two files each time.
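For instance (a hypothetical pair of files): changing how a function does its work touches only the .cpp, so only that one translation unit is recompiled and the remaining object files are simply re-linked.

```cpp
// format.h -- the interface; stays the same across the change below
#ifndef FORMAT_H
#define FORMAT_H
#include <string>
std::string shout(const std::string& text);
#endif

// format.cpp -- the implementation; the only file that changes
#include "format.h"

std::string shout(const std::string& text) {
    // old how-to: return text + "!";
    // new how-to: add more emphasis
    return text + "!!!";
}

// main.cpp and every other file that merely #includes "format.h"
// is untouched, so their object files are reused as-is at link time.
```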

On the rare occasions where the interface changes, well, you suck it up.

1 Because that's what code is: a description of behaviour. It's not a list of instructions for the computer to execute--that's the compiler's output--it's a description of the behaviour of the program. The compiler's job is to turn your description into those instructions.

2 This also explains why building in more recently created languages is much less complicated3. They don't have baggage left over from having to make things work in the glorious days of 1/2 K of RAM and CPUs clocked in the kilohertz. They also learned from a lot of mistakes.

3 For the end user, anyway. The back end of a modern build system is some crazy code, man.

user4581301