
Consider the following example code:

#include <iostream>
#include <inttypes.h>

using namespace std;

int f(uint32_t i)
{
  return 1;
}
int f(uint64_t i)
{
  return 2;
}

int main ()
{
  cout << sizeof(long unsigned) << '\n';
  cout << sizeof(size_t) << '\n';
  cout << sizeof(uint32_t) << '\n';
  cout << sizeof(uint64_t) << '\n';
  //long unsigned x = 3;
  size_t x = 3;
  cout << f(x) << '\n';
  return 0;
}

This fails on Mac OS X with:

$ g++ --version
i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
$ make test
g++     test.cc   -o test
test.cc: In function 'int main()':
test.cc:23: error: call of overloaded 'f(size_t&)' is ambiguous
test.cc:6: note: candidates are: int f(uint32_t)
test.cc:10: note:                 int f(uint64_t)
make: *** [test] Error 1

Why? 'size_t' should be unsigned and either 32 or 64 bits wide, so where is the ambiguity?

Trying the same with 'unsigned long x' instead of 'size_t x' results in an analogous ambiguity error message.

On Linux and Solaris systems, tested with different GCC versions and on different architectures, no ambiguity is reported (and the right overload is selected on each architecture).

Is this a Mac OS X bug or a feature?

maxschlepzig
  • Not sure, but `size_t` may be a signed type – BЈовић Jul 22 '12 at 20:50
  • @BЈовић No, the standard requires `size_t` to be unsigned. §18.2/6 says: "The type size_t is an implementation-defined unsigned integer type that is large enough to contain the size in bytes of any object." – Philipp Jul 22 '12 at 20:53
  • Though gcc erroneously had it as a signed type for a while, IIRC. – cp.engr Oct 01 '15 at 21:27
  • Another example: `size_t r; /* ... */ boost::endian::big_to_native_inplace(r); `. On Linux/Solaris this works, on Mac OSX this breaks due to an ambiguity compile error - because boost::endian only provides overloads for the fixed width integer types. See also: https://github.com/boostorg/endian/pull/14 – maxschlepzig May 27 '17 at 14:54

2 Answers


Under Mac OS X, those types are defined as:

typedef unsigned int         uint32_t;
typedef unsigned long long   uint64_t;

whereas size_t is defined as __SIZE_TYPE__:

#if defined(__GNUC__) && defined(__SIZE_TYPE__)
typedef __SIZE_TYPE__       __darwin_size_t;    /* sizeof() */
#else
typedef unsigned long       __darwin_size_t;    /* sizeof() */
#endif
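
On a 64-bit Mac OS X target, size_t is therefore unsigned long: a distinct type from both unsigned int and unsigned long long. Neither f overload is an exact match, both calls require an integral conversion, and the two conversions rank equally, so overload resolution is ambiguous. Here is a minimal sketch (not part of the original program, assumes a C++11 compiler) that makes the type relationships visible:

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <type_traits>

int main()
{
  // Hypothetical diagnostic: on Mac OS X both comparisons print 0, because
  // size_t (unsigned long) matches neither fixed-width typedef exactly;
  // on typical 64-bit Linux the uint64_t comparison prints 1.
  std::printf("size_t == uint32_t: %d\n",
              (int)std::is_same<std::size_t, std::uint32_t>::value);
  std::printf("size_t == uint64_t: %d\n",
              (int)std::is_same<std::size_t, std::uint64_t>::value);
  return 0;
}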

So if you change your code to:

#include <iostream>
#include <inttypes.h>

using namespace std;

int f(uint32_t i)
{
  return 1;
}
int f(uint64_t i)
{
  return 2;
}

int f (unsigned long i)
{
  return 3;
}

int main ()
{
  cout << sizeof(unsigned long) << '\n';
  cout << sizeof(size_t) << '\n';
  cout << sizeof(uint32_t) << '\n';
  cout << sizeof(uint64_t) << '\n';
  //long unsigned x = 3;
  size_t x = 3;
  cout << f(x) << '\n';
  return 0;
}

and run it, you will get:

$ g++ -o test test.cpp
$ ./test
8
8
4
8
3
trojanfoe
  • So the gist of this is: `long` and `long long` are distinct types, even if they have the same signedness and width. (The situation is the same as with other builtin types, e.g. `char` and `signed char` are distinct types.) – Philipp Jul 22 '12 at 20:56 (see the sketch after this comment thread)
  • @Philipp Yes, that would appear so. – trojanfoe Jul 22 '12 at 20:59
  • Well, then I get a redefinition error on Linux (e.g. on a 64-bit Fedora 17 system between the 2nd and 3rd overload) ... – maxschlepzig Jul 22 '12 at 21:12
  • @maxschlepzig Why not create a `f(size_t)` function instead to avoid the issue? – trojanfoe Jul 22 '12 at 21:19
  • @trojanfoe, because I want to do low-level 32/64 bit related optimizations and use size_t to select the version for the native machine word size. Using the uint32_t/uint64_t overloads allows for such a specialization - except for Mac OS X, of course. – maxschlepzig Jul 22 '12 at 22:20
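
A minimal sketch of the point from the comment thread above (any C++11 compiler): long and long long are always distinct types, even on an LP64 target where unsigned long and unsigned long long are both 8 bytes wide.

#include <type_traits>

// Distinct built-in types regardless of width: overload resolution and
// template matching treat them as different types even when sizeof agrees.
static_assert(!std::is_same<unsigned long, unsigned long long>::value,
              "unsigned long and unsigned long long are distinct types");

int main() { return 0; }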

If you really want to, you could implement your desired semantics like this:

#include <type_traits>  // for std::is_integral / std::is_signed

#define IS_UINT(bits, t) (sizeof(t)==(bits/8) && \
                          std::is_integral<t>::value && \
                          !std::is_signed<t>::value)

template<class T> auto f(T) -> typename std::enable_if<IS_UINT(32,T), int>::type
{
  return 1;
}

template<class T> auto f(T) -> typename std::enable_if<IS_UINT(64,T), int>::type
{
  return 2;
}

Not saying this is a good idea; just saying you could do it.

There may be a good standard-C++ way to ask the compiler "are these two types the same, you know what I mean, don't act dumb with me", but if there is, I don't know it.
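
For completeness, a hedged usage sketch: assuming the two f templates above are in scope (plus <cstddef>, <cstdint> and <iostream>), the call is now dispatched on size and signedness rather than on the exact type name, so size_t selects the overload matching the native word width:

int main()
{
  std::size_t x = 3;
  std::cout << f(std::uint32_t(1)) << '\n';  // prints 1
  std::cout << f(std::uint64_t(1)) << '\n';  // prints 2
  std::cout << f(x) << '\n';                 // prints 2 on a 64-bit target, 1 on a 32-bit one
  return 0;
}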


2020 UPDATE: You could have done it more idiomatically without macros. C++14 gave us the shorthand enable_if_t and C++17 gave us is_integral_v:

template<int Bits, class T>
constexpr bool is_uint_v = 
    sizeof(T)==(Bits/8) && std::is_integral_v<T> && !std::is_signed_v<T>;

template<class T> auto f(T) -> std::enable_if_t<is_uint_v<32, T>, int>
    { return 1; }

template<class T> auto f(T) -> std::enable_if_t<is_uint_v<64, T>, int>
    { return 2; }

And then in C++20 we have the even-shorter-shorthand requires:

template<int Bits, class T>
constexpr bool is_uint_v =
    sizeof(T)==(Bits/8) && std::is_integral_v<T> && !std::is_signed_v<T>;

template<class T> int f(T) requires is_uint_v<32, T> { return 1; }
template<class T> int f(T) requires is_uint_v<64, T> { return 2; }

and even-shorter-shorter-shorthand "abbreviated function templates" (although this is getting frankly obfuscated and I would not recommend it in real life):

template<class T, int Bits>
concept uint =
    sizeof(T)==(Bits/8) && std::is_integral_v<T> && !std::is_signed_v<T>;

int f(uint<32> auto) { return 1; }  // still a template
int f(uint<64> auto) { return 2; }  // still a template
Quuxplusone