
In the following code:

#include<cstdio>
#define max(a,b) (a>b?a:b)
using namespace std;

int main()
{
    int f=10000000;

    long long i=(long long)max(1000000,f*f);
    printf("%lld",i);
    return 0;
}

I get the output of

276447232

But if I write

long long i=max((long long)1000000,(long long)f*f);

I get the output of

100000000000000

What is the difference between the two lines? Why doesn't the type conversion occur in the first case?

Richa Tibrewal

3 Answers


long long i=(long long)max(1000000,f*f);

This line says: "take the max of these two int values, the latter of which overflows, then cast the result to long long." By the time the cast is applied, it is too late: signed integer overflow is undefined behavior.

long long i=max((long long)1000000,(long long)f*f);

This line says: "take the max of these two long long values and return it." The reason (long long)f*f works is that the cast has higher precedence than multiplication, so the first f is converted to long long before the multiplication; the second f is then converted to long long as well, so the multiplication takes place in long long.
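
As an aside, here is a minimal sketch (assuming the standard library is available) of how the function template std::max from <algorithm> avoids the macro pitfall entirely, since its arguments are evaluated and converted before the call rather than textually substituted:

#include <algorithm>
#include <cstdio>

int main()
{
    int f = 10000000;
    // std::max is a real function, not a macro: the cast happens
    // before the call, so the multiplication is done in long long.
    long long i = std::max(1000000LL, (long long)f * f);
    printf("%lld\n", i); // prints 100000000000000
    return 0;
}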
Cory Kramer

If you expand the macro in this line:

long long i=(long long)max(1000000,f*f);

You get:

long long i=(long long)(1000000>f*f?1000000:f*f);

The expression in the parentheses has type int, so f*f is computed in int arithmetic and overflows. Formally that is undefined behavior; on a typical machine with a 32-bit two's complement int, the product wraps around modulo 2^32 to 276447232, which the conditional operator then yields (since it is larger than 1000000), so the cast to long long happens only after the damage is done.
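
Here is a short sketch of where 276447232 comes from, assuming a 32-bit two's complement int; the arithmetic below uses unsigned 64-bit types so the wraparound itself is well defined:

#include <cstdio>

int main()
{
    // 10^7 * 10^7 = 10^14; reduced modulo 2^32 this is 276447232,
    // which matches the output observed in the question.
    unsigned long long full = 10000000ULL * 10000000ULL;
    unsigned long long wrapped = full % 4294967296ULL; // 2^32
    printf("%llu %llu\n", full, wrapped); // 100000000000000 276447232
    return 0;
}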

Ron Kuper

Due to C++ operator precedence rules,

(long long)f*f

the cast is applied before the multiplication. That is, the expression is equivalent to:

((long long) f) * f

Because one operand of the multiplication is a long long, the other operand (the bare f) is implicitly converted to long long as well, and the multiplication is performed in long long, producing a long long result and avoiding overflow.

Had the multiplication instead bound more tightly than the cast, the expression would have been interpreted as:

(long long) (f * f)

The multiplication, having two int operands, would be performed in int arithmetic, which, given the value of f, overflows; only afterwards would the result be cast to long long.
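
A minimal sketch of the well-defined grouping; the overflowing alternative is left as a comment, since actually executing it would be undefined behavior:

#include <cstdio>

int main()
{
    int f = 10000000;
    // The cast binds tighter than '*': the left operand becomes
    // long long, the right operand is converted to match, and the
    // multiplication is carried out in long long.
    long long i = (long long)f * f;
    printf("%lld\n", i); // prints 100000000000000
    // (long long)(f * f) would multiply in int first and overflow,
    // which is undefined behavior, so it is not executed here.
    return 0;
}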

To be clear about your question,

Why doesn't the type conversion occur in the first case?

It does; but by then the int multiplication has already overflowed, so the conversion happens too late.

Matthew Moss