Usually, when designing a proper software architecture in C++ that also needs great performance, people fall into the dangerous game of premature optimization. But rather than optimizing at the architecture level (which is a perfectly good and even encouraged form of up-front optimization), they make compromises at the code level: avoiding virtual methods and interfaces altogether, low-level hacks, and so on.
Some people avoid this with a practice usually called application inlining or a unity build: generating one or two really big .cpp files that #include all the headers and .cpp files of the whole project, and then compiling them as a single translation unit. This approach is very effective at inlining virtual calls (devirtualization), since the compiler has everything it needs to make the required optimizations.
Question: what drawbacks does this approach have compared to more "elegant & modern" methods like link-time optimization (LTO)?