
Imagine you have this function:

void foo(long l) { /* do something with l */}

Now you call it like so at the call site:

foo(65); // here 65 is of type int

Why, technically, when the declaration of your function specifies that you are expecting a long and you pass just a number without the L suffix, is it treated as an int?

Now, I know it is because the C++ Standard says so. However, what is the technical reason that this 65 isn't simply promoted to long, saving us the silly error of forgetting the L suffix to make it a long explicitly?

I have found this in the C++ Standard:

4.7 Integral conversions [conv.integral]

5 The conversions allowed as integral promotions are excluded from the set of integral conversions.

I can understand why a narrowing conversion isn't done implicitly, but here the destination type is obviously wider than the source type.

EDIT

This question is based on a question I saw earlier, which had funny behavior when you didn't specify the L suffix: Example. But perhaps it's a C thing more than a C++ thing?

Tony The Lion
  • http://stackoverflow.com/a/5563131/195488 –  Mar 18 '13 at 16:12
  • 5
    I don't follow the question. Surely, the value *is* promoted to `long` when passed to the function, unless there's an overload which gives a better or ambiguous match for `int`? What error are you getting? – Mike Seymour Mar 18 '13 at 16:16
  • If this is in reference to an earlier question, things get tricky with variadic functions, but I don't see a problem with the code you have here. – Mat Mar 18 '13 at 16:17
  • @MikeSeymour: I don't think he has an error per se, just wondering why the `int` does not fill the `long`. –  Mar 18 '13 at 16:17
  • @Mat my question stems from that earlier question, and indeed it seems to have something to do with `va_args` and friends, maybe this question is redundant. – Tony The Lion Mar 18 '13 at 16:19
  • Who's to say what the literal value is interpreted as? If you're talking about actual declared variables, then OK, your quote may have a point, but here you're talking about a literal that will be compiled. Something tells me that's very compiler-dependent as to what happens, or at the least, it's another part of the standard. – Kevin Anderson Mar 18 '13 at 16:20
  • Also, in most modern compilers, isn't `int` and `long` actually the same type, as in both are 32-bit integers? It's `char` for 8-bit, `short` for 16-bit, and either `int` or `long` for 32-bit, with `long long` for 64-bit. See http://en.cppreference.com/w/cpp/language/types and it says how under Win32 and Linux/Unix, what I said is generally correct. – Kevin Anderson Mar 18 '13 at 16:23
  • @Kevin most non-Windows 64-bit platforms are LP64 i.e. `long` is 64 bits. – ecatmur Mar 18 '13 at 16:25
  • 1
    @Kevin Except when `char` isn't 8-bit, and both `int` and `long` aren't 32-bit and.. all those things exist. Doesn't have much to do with the compiler, but the architecture/platform the compiler is used on. – Voo Mar 18 '13 at 16:26
  • @ecatmur fair enough, I'll have to check my Linux install and see about the behavior of `long` as most of my work is on Win64, thus assuming `long long` was needed. I don't "completely" depend on my types being the same length everywhere, but it's good to know. – Kevin Anderson Mar 18 '13 at 16:30

5 Answers


In C++, objects and values have a type that is independent of how you use them. When you use them, if a different type is needed, the value is converted appropriately.

The problem in the linked question is that varargs is not type-safe. It assumes that you pass in the correct types and that you decode them for what they are. While compiling the caller, the compiler does not know how the callee is going to decode each of the arguments, so it cannot possibly convert them for you. Effectively, varargs is as type-safe as converting to a void* and back to a different type: if you get it right, you get out what you put in; if you get it wrong, you get garbage.

Also note that in this particular case, with inlining, the compiler has enough information, but this is just a small instance of a general family of errors. Consider the printf family of functions: depending on the contents of the first argument, each of the remaining arguments is processed as a different type. Trying to fix this case at the language level would lead to inconsistencies, where in some cases the compiler does the right thing and in others the wrong one, and it would not be clear to the user when to expect which. It could even do the right thing today and the wrong one tomorrow, if during refactoring the function definition is moved out of reach of inlining, or if the function's logic changes so that an argument is processed as one type or another based on some earlier parameter.

David Rodríguez - dribeas

The function in this instance does receive a long, not an int. The compiler automatically converts any argument to the required parameter type if it's possible without losing any information (as here). That's one of the main reasons function prototypes are important.

It's essentially the same as with an expression like (1L + 1) - because the integer 1 is not the right type, it's implicitly converted to a long to perform the calculation, and the result is a long.

If you pass 65L in this function call, no type conversion is necessary, but there's no practical difference - 65L is used either way.

Although not C++, this is the relevant part of the C99 standard, which also explains the var args note:

If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the corresponding parameters, taking the type of each parameter to be the unqualified version of its declared type. The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.

teppic

Why, technically, when the declaration of your function specifies that you are expecting a long and you pass just a number without the L suffix, is it treated as an int?

Because the type of a literal is specified only by the form of the literal, not the context in which it is used. For an integer, that is int unless the value is too large for that type, or a suffix is used to specify another type.

Now, I know it is because the C++ Standard says so, however, what is the technical reason that this 65 isn't just promoted to being of type long and so save us the silly error of forgetting L suffix to make it a long explicitly?

The value should be promoted to long whether or not you specify that type explicitly, since the function is declared to take an argument of type long. If that's not happening, perhaps you could give an example of code that fails, and describe how it fails?

UPDATE: the example you give passes the literal to a function taking untyped ellipsis (...) arguments, not a typed long argument. In that case, the function caller has no idea what type is expected, and only the default argument promotions are applied. Specifically, a value of type int remains an int when passed through ellipsis arguments.

Mike Seymour

The C standard states:

"The type of an integer constant is the first of the corresponding list in which its value can be represented."

In C89, this list is:

int, long int, unsigned long int

C99 extends that list to include:

long long int, unsigned long long int

As such, when your code is compiled, the literal 65 fits in an int, and so its type is int. The int is then converted to long when the function is called.

If, for instance, sizeof(int) == 2, and your literal is something like 64000, the type of the value will be a long (assuming sizeof(long) > sizeof(int)).

The suffixes are used to override the default behavior and force the specified literal value to be of a certain type. This can be particularly useful when the implicit conversion would otherwise happen repeatedly (e.g. as part of an expression in a tight loop).

TRISAbits

We have to have a standard meaning for types because for lower-level applications, the type REALLY matters, especially for integral types. Low-level operators (such as bit-shift, add, etc.) rely on the types of their operands to determine where overflow happens. (`65 << 2` computed as an int is 260 (0x104), but truncated to a single char it is 4 (0x04)!) Sometimes you want this behavior, sometimes you don't. As a programmer, you just need to be able to always know what the compiler is going to do. Thus the design decision was made to have the programmer explicitly declare the integral types of their constants, with the undecorated form meaning the most commonly used type, int.

The compiler does automatically convert your constant expressions at compile time, so the effective value passed to the function is a long, but up until that conversion it is considered an int, for this reason.

IdeaHat