
We have a lot of prebuilt libraries (via CMake mostly), built using Visual Studio 2017 v141. When we try to use these with a project using Visual Studio 2019 v142 we see errors like:

Error C1047 The object or library file ‘boost_chrono-vc141-mt-gd-x32-1_68.lib’ was created by a different version of the compiler than other objects...

On the other hand, we also use pre-compiled .libs from 3rd-party vendors which are over a decade old and these have worked just fine when linked against our codebase.

What determines whether a library needs to be rebuilt, and why can some ancient libraries still be used when others that are only one version behind cannot?

Mr. Boy
    That depends on the compiler options (/GL or /LTCG). See [here](https://learn.microsoft.com/en-us/cpp/error-messages/compiler-errors-1/fatal-error-c1047?view=msvc-160) and [here](https://learn.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-160). – 1201ProgramAlarm Jul 21 '21 at 15:47
    If these decade old libraries were `c` only or did not expose any of the standard library in the public API it could explain this. Related: [https://learn.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-160](https://learn.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-160) – drescherjm Jul 21 '21 at 16:02
    the rules are also different for static and shared libraries – Alan Birtles Jul 21 '21 at 16:44
    Beware, when your program has parts built by several different MS compilers, it will also have several *different and incompatible* MS runtimes linked to it. It is really dangerous. If you stick with C (not C++) data at interface boundaries, you *might* be OK, but boost is inherently not like that. Best practice? Always rebuild everything you can. – n. m. could be an AI Jul 30 '21 at 19:54
  • Normally the C build process involves four stages - preprocessing, compilation, assembly, linking. A toolset is the set of tools that performs those steps, plus possibly some pre- or post-steps, and their implementations can change when you change the version of a toolset or toolchain. – SaleemKhair Aug 07 '21 at 13:25

3 Answers


ABI incompatibilities can cause issues like this. Even though the C++ standard specifies classes such as std::vector and std::mutex, and requires them to expose specific public/protected members, how these classes are laid out in memory is left to the implementation.

In practice, that means nothing prevents the GNU standard library from ordering its data fields differently from the LLVM standard library, or from having completely different private members.

As such, if you call a function from a library built against LLVM's libc++ and pass it a GNU libstdc++ vector, you get undefined behavior. Even within the same standard library, different versions may have changed something, and that can be a problem too.

To avoid these issues, popular C++ libraries only use C data structures in their ABIs, since (at least for now) every compiler produces the same memory layout for a char*, an int or a struct.

These ABI issues can appear in two places:

  • When you use dynamic libraries (.so and .dll files), your compiler probably won't complain, and you'll get undefined behavior at runtime when you call a library function with incompatible C++ objects.
  • When you use static libraries (.a and .lib files), I'm not really sure: the linker might report an error if it can see there's going to be a problem, or it might successfully produce some Frankenstein monster of a binary that behaves like the point above.
Tzig

I will try to answer some integral parts, but be aware this answer could be incomplete. With more information from peers we may be able to construct a full answer!

The simplest kind of linking is linking against a C library. Since C has no concept of classes or function overloading, compiler creators can create entry points to functions using their plain names. This seems to be pretty much quasi-standardized; I myself haven't encountered a pure C library that wasn't at least linkable into my projects. You can select this behaviour in C++ code by prepending a function declaration with extern "C" (this also makes it easy to link against the library from C# code). Here is a detailed explanation of extern "C". But as far as I am aware this behaviour is not standardized either; it is just so simple - it seems - that there is only one sane solution.

Going into C++, we start to encounter repeated function, variable and struct names. Let's just talk about overloaded functions here. For those, compiler creators have to come up with some kind of mapping between void a(); void a(int x); void a(char x); ... and their respective library representations. Since this process also is not standardized (see this thread) and is far more complex than the 1-to-1 mapping of C, the ABIs of different compilers or even different compiler versions can differ in any way.

Now, given two compilers (or linkers - I couldn't find a resource which specifies which one exactly is responsible for the mangling, but since this process is not standardized it could also be outsourced to Cthulhu) with different name mangling schemes, we get the following function entry points (simplified):

compiler1
_a_
_a_int_
_a_char_

compiler2
_a_NULL_
_a_++INT++_
_a_++CHAR++_

Different linkers will not understand the output of your particular process; linker1 will try to search for _a_int_ in a library containing only _a_++INT++_. Since linkers can't use fuzzy string comparison (that could lead to an apocalypse, imho), it won't find your function in the library. Also, don't be fooled by the simplicity of this example: for every feature like namespaces, classes, methods etc. there has to be a scheme that maps the name to an entry point or memory structure.

Given your example, you are lucky: you use libraries from the same publisher, who coded some logic to detect old libraries. Usually you will get something along the lines of <something> could not be resolved, or some other convoluted, irritating and/or unhelpful error message.


Edit, addressing memory layout incompatibilities in addition to Tzig's answer: different name mangling schemes seem to be partially intentional, to protect users against linking against incompatible libraries. This answer goes into detail about it. The relevant passage from the GCC docs:

G++ does not do name mangling in the same way as other C++ compilers. This means that object files compiled with one compiler cannot be used with another.

This effect is intentional [...].

Baumflaum

Error C1047

This is caused by /GL (whole program optimization) or /LTCG (link-time code generation)

These switches embed extra information in the .obj files so that global optimizations can be performed at link time. When that information is present, the linker checks which compiler generated each original .lib, and if they differ it emits this error. These switches are meant for code built entirely by a single compiler version, and are not intended for cross-version usage.

The other builds, which work, don't use these switches and so are compatible.

Visual Studio has started to use a new #pragma detect_mismatch

This causes an old build to identify it is incompatible with a new build, by detecting the version change.

Very old builds didn't have / support the pragma, so had no checking.

When you build against a lib, its dependencies are loaded and satisfied by the linker, but this is not a guarantee of working. The one-definition rule signs the developer up to a contract that, within a compiled binary, all implementations of a function with the same name are the same. If they came from different compilers, that may not be true, and the linker can then choose any of them, causing latent bugs where mixtures of old and new code are linked into the binary.

If the definition or implementation of std::string has changed, it may link, but have code which is flawed.

This new compiler check causes an early failure, which I thoroughly approve of.

mksteve