#include <stdio.h>

int main()
{
    unsigned int count = 1;
    signed int val = -20;

    signed int div = (val / count);
    signed int div1 = (val / (signed int)count);

    printf("div %d div1 %d \n", div, div1);
    return 0;
}

Output:

div -20 div1 -20

But if count = 5, the output is:

div 858993455 div1 -4 

In the count = 5 case, the signed int has been implicitly converted to unsigned int. Why not for count = 1?

undur_gongor
Apsa

3 Answers

signed int div=(val/count);

If one of the operands is int and the other is unsigned int, the int operand is converted to unsigned int. So here val is converted to unsigned int, then divided by count, then the result is converted back to int.

That is, this expression is evaluated as

int div = (int)((unsigned int)val / count);

So when count == 1, the result remains the same, but when count == 5 the result of (unsigned int)val / count is 858993455, which is less than INT_MAX, so when converted back to int it keeps its (big positive) value.

Note that strictly speaking, even if count == 1 the result doesn't have to be -20, because the result of conversion from (unsigned int)-20 to int is implementation-defined.

Anton Savin

There is no such thing as "implicit typecast", typecast refers to explicitly changing the type of an operand. The proper term is implicit (type) conversion.

The fundamentals of C state that the compiler is free to order around or optimize your program as it pleases, as long as it doesn't change the outcome of the program.

So even if the compiler spots that a division by 1 doesn't make sense and can get optimized away, it must still take the potential side-effects caused by the implicit type conversion in account: it cannot optimize those away, because the programmer might have been intentionally relying on them.

In your specific case, signed int div=(val/count) would force val to get implicitly converted to unsigned type. But it doesn't really matter, as you store the result back into a signed type and anything divided by 1 remains unchanged anyhow. The compiler can therefore optimize the whole thing away, as the result would have been the same no matter whether unsigned or signed arithmetic was used.

If you divide by 5 though, the results turn very different: -20/5 = -4, but 0xFFFFFFEC/5 = 0x3333332F (858993455). So then the compiler is not allowed to optimize away the implicit conversion, as it affects the result.

Therefore the programmer has to know the implicit type conversion rules to tell what will actually happen between the lines of their own source code.

Lundin

This is due to the usual arithmetic conversions. You can find the rule there.

So actually, the first result is (unsigned int)-20 / 5. Using the two's complement rule, (unsigned int)-20 is 4294967276 on a 32-bit system, and 4294967276 / 5 = 858993455.

You can also reference Implicit conversions.

edwardramsey
    You should include the relevant part of the link in your answer, because if the link somehow dies, your answer would become useless. – Arun A S Mar 24 '15 at 12:05
    There is nothing called "the typecast rule". Typecasting refers to explicitly changing the type of an operand, rather than the compiler implicitly doing so. The "typecast rule" you link to is formally named _the usual arithmetic conversions_, or informally _balancing_. – Lundin Mar 24 '15 at 12:11
  • @Lundin: That. Adjusted the question too. – undur_gongor Mar 24 '15 at 12:18
  • Btw the first result is rather `(unsigned int)-20/5`. The conversion from two's complement to plain unsigned form takes place before the division is executed. – Lundin Mar 24 '15 at 12:38