The general (theoretical) reason is separation of concerns: to compile a call to a function (whether a class member function or not), the compiler only needs visibility of the function's interface - its declaration, which gives the name, return type and parameter types. The caller does not need visibility of the function's definition; the definition is the linker's concern (or the compiler's, if the call ends up being inlined). Conversely, if a definition that is not inline is placed in a header included by more than one source file, the usual outcome is a program that fails to link because of multiple-definition errors.
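As a minimal sketch (the Widget class and file names are only illustrative), a declaration in a header is enough for callers to compile, while the definition lives in exactly one source file:

```
// widget.h - the interface; all a caller needs to see
#ifndef WIDGET_H
#define WIDGET_H

class Widget {
public:
    int value() const;    // declaration only: name, parameters, return type
private:
    int raw_value_ = 42;
};

#endif
```

```
// widget.cpp - the definition; compiled once, found by the linker
#include "widget.h"

int Widget::value() const
{
    return raw_value_;
}
```

```
// main.cpp - compiles with only widget.h in view
#include "widget.h"
#include <iostream>

int main()
{
    Widget w;
    std::cout << w.value() << '\n';   // the call compiles from the declaration alone
}
```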
Two common practical reasons are to maximise the benefits of separate compilation and to allow a library to be distributed without its source code.
Separate compilation (over-simplistically, placing subsets of source code in distinct source files) is a feature of C++ that has numerous benefits in larger projects. It means having a set of separately compiled source files rather than throwing all the source for a program into a single source file. One key benefit is that it enables incremental builds: once all source files in a project have been compiled, the only ones that need to be recompiled are those that have changed, since recompiling an unchanged source file produces a functionally equivalent object file. For large projects that are edited and rebuilt often, this allows incremental builds (recompile only the changed source files, then relink) instead of universal builds (recompile and relink everything). Practically, even in moderately large projects (let's say a few hundred source files) the incremental build time can be measured in minutes while the universal rebuild time can be measured in hours or days. The difference between the two is unproductive time (thumb-twiddling while waiting for a build to complete) for programmers. Programmer time is THE largest cost in significant projects, so unproductive programmer time is expensive.
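Using the three files sketched above, the incremental-build effect looks like this (g++ is shown purely for illustration; any toolchain behaves similarly):

```
// Initial build:
//   g++ -c widget.cpp           -> widget.o
//   g++ -c main.cpp             -> main.o
//   g++ widget.o main.o -o app
//
// Edit only the body of Widget::value() in widget.cpp, then rebuild:
//   g++ -c widget.cpp           -> widget.o (only this file is recompiled)
//   g++ widget.o main.o -o app  -> relink; main.o is reused unchanged
```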
In practice, in moderate or larger projects, function interfaces (signatures) are quite stable (they change relatively infrequently) while function definitions are unstable (they change relatively frequently). Having a function definition in a header file means that the header is more likely to be edited as the program evolves. And when a header file is edited, EVERY source file that includes it must be recompiled in order to rebuild (build management tools usually work that way, because tracking header inclusion is the least complicated way to detect dependencies between files in a project). That tends to result in longer build times: while recompiling every source file that includes a changed header is not as costly as a universal build, it can easily turn an incremental build of a few minutes into one of a few hours. Again, more unproductive time for programmers.
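For contrast, a sketch of the header-only alternative: because the body now lives in widget.h, even a change that does not affect the interface touches the header, so every source file that includes widget.h is recompiled on the next build.

```
// widget.h - same class, but with the definition placed in the header
#ifndef WIDGET_H
#define WIDGET_H

class Widget {
public:
    int value() const { return raw_value_; }   // any edit to this body changes the header
private:
    int raw_value_ = 42;
};

#endif
```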
That's not to say that functions should never be inlined. It does mean that the functions to inline should be chosen carefully (e.g. based on performance benefits identified through profiling, and avoiding functions that will be updated regularly) rather than, as this question suggests, inlining by default.
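What that selective approach might look like in practice (the split shown here is illustrative, not a rule):

```
// widget.h - inlining chosen per function, not by default
#ifndef WIDGET_H
#define WIDGET_H

#include <string>

class Widget {
public:
    // Tiny, stable, and shown to be hot by profiling: a reasonable inlining candidate.
    int value() const { return raw_value_; }

    // Larger and still evolving: declared here, defined in widget.cpp.
    std::string describe() const;

private:
    int raw_value_ = 42;
};

#endif
```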
The second common practical reason for not inlining functions is library distribution. For commercial reasons (e.g. protecting intellectual property) and others, it is often preferable to distribute a compiled library plus its header files (the minimum needed for someone to use the library in another project) without distributing the complete source code. Placing function definitions in header files increases the amount of source code that must be distributed.
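In concrete terms (names illustrative), the split determines what ships and what stays in-house:

```
// Shipped to users of the library:
//   widget.h      - declarations only; little of the implementation is revealed
//   libwidget.a   - the compiled definitions
//
// Kept private:
//   widget.cpp    - the implementation source
//
// Every function body moved from widget.cpp into widget.h becomes source code
// that ships with the library.
```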