
I am having trouble finding the output of this code. Please help me work out the output of the following code segment.

#include <stdio.h>

int main(void) {
    char c = 4;
    c = c * 200;
    printf("%d\n", c);
    return 0;
}

I want to know why the output is 32. Would you please tell me? I want the exact calculations.

Nisse Engström

1 Answer


Warning: long-winded answer ahead. Edited to reference the C standard and to be clearer and more concise with respect to the question being asked.

The correct answer for why you have 32 has been given a few times. Explaining the math using modular arithmetic is completely correct but might make it a little harder to grasp intuitively if you are new to programming. So, in addition to the existing correct answers, here's a visualization.

Your char is an 8-bit type, so it is made up of a sequence of 8 zeros and ones.

Looking at the raw bits in binary, when the type is unsigned (let's leave signed types aside for a moment, as they would just confuse the point), your variable c can take on values in the following range:

00000000 -> 0
11111111 -> 255
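
If you want to check these limits on your own machine, here is a minimal sketch using <limits.h> (the standard only requires CHAR_BIT to be at least 8, but it is exactly 8 on virtually every platform you will meet):

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("bits in a char:    %d\n", CHAR_BIT);            /* typically 8 */
    printf("unsigned char max: %d\n", UCHAR_MAX);           /* typically 255 */
    printf("signed char range: %d..%d\n", SCHAR_MIN, SCHAR_MAX); /* typically -128..127 */
    return 0;
}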

Now, c*200 = 4*200 = 800, which is of course larger than 255. In binary, 800 looks like:

00000011 00100000

To represent this value you need at least 10 bits (note the two 1s in the upper byte). As an aside, the leading zeros don't need to be stored explicitly, since they have no effect on the number; however, the next larger data type is 16 bits, and it is easier to show consistently sized groups of bits anyway.
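
One detail worth making explicit: the multiplication itself does not happen in char at all. Both operands are converted to int first (the usual arithmetic conversions), so the intermediate result 800 is computed exactly, and the narrowing only happens when it is assigned back to c. A minimal sketch to illustrate:

#include <stdio.h>

int main(void) {
    char c = 4;
    int wide = c * 200;  /* c is promoted to int; the product is an exact 800 */
    c = c * 200;         /* the conversion back to char happens here, on assignment */
    printf("%d %d\n", wide, c);  /* prints "800 32" on a typical system */
    return 0;
}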

Since the char type is limited to 8 bits and cannot represent the result, there needs to be a conversion. ISO/IEC 9899:1999 section 6.3.1.3 says:

6.3.1.3 Signed and unsigned integers

1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.

2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.

3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

So, if your new type is unsigned, then following rule #2 we repeatedly subtract one more than the maximum value of the new type (255 + 1 = 256) from 800 until we land in the range of the new type: 800 - 256 = 544, 544 - 256 = 288, 288 - 256 = 32. This behaviour also happens to effectively truncate the result: as you can see below, the higher bits which could not be represented have been discarded.

00100000 -> 32
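
To make rule #2 concrete, here is a small sketch (assuming an unsigned 8-bit destination, so the relevant maximum is 255) that performs the repeated subtraction literally and prints each step:

#include <stdio.h>

int main(void) {
    int value = 4 * 200;  /* 800 */
    /* Rule #2, taken literally: subtract UCHAR_MAX + 1, i.e. 256,
       until the value lands in the range 0..255. */
    while (value > 255) {
        value -= 256;
        printf("%d\n", value);  /* prints 544, then 288, then 32 */
    }
    return 0;
}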

The existing answers explain using the modulo operation, where 800 % 256 = 32. This is simply math that gives the remainder of a division operation. When we divide 800 by 256 we get 3 (because 256 fits into 800 at most three times) plus a remainder of 32. This is essentially the same as applying rule #2 here.
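
This equivalence is easy to verify; the cast in the second line below is well-defined by the standard precisely because the destination type is unsigned:

#include <stdio.h>

int main(void) {
    printf("%d\n", 800 % 256);           /* prints 32 */
    printf("%d\n", (unsigned char)800);  /* also 32: the cast applies rule #2 */
    return 0;
}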

Hopefully this clarifies why you get a result of 32. However, as has been correctly pointed out, if the destination type is signed we're looking at rule #3, which says the result is implementation-defined. Since the standard also leaves it implementation-defined whether the plain char type you are using is signed or unsigned, your particular case is implementation-defined either way. In practice, though, you will typically see the same behaviour: the higher bits are lost, and you will still generally get 32.

Extending this a bit, if you were to have a signed 8-bit destination type, and you were to run your code with c=c*250 instead, you would have:

00000011 11101000 -> 1000

and you will probably find that after the conversion to the smaller signed type the result is similarly truncated as:

11101000

which in a signed type is interpreted as -24 on most systems, which use two's complement. Indeed this is what happens when I run it with gcc, but again this is not guaranteed by the language itself.
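
Here is that signed variant as a sketch; the -24 in the comment is what gcc produces on two's complement hardware, not something the language guarantees:

#include <stdio.h>

int main(void) {
    signed char c = 4;
    c = c * 250;        /* 1000 does not fit in a signed 8-bit type;
                           the narrowing is implementation-defined */
    printf("%d\n", c);  /* prints -24 with gcc on two's complement hardware */
    return 0;
}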

Michael
  • I should also point out that you probably want to stay away from plain char when working with numbers. There are standard C data types that are not only explicit about their size but also keep it the same across platforms that might have a different natural word size: uint8_t, uint16_t, and uint32_t (and their signed equivalents) are useful where available; see the sketch after these comments. A few days ago I was debugging a unit test that included an MD5 implementation (from a provider that shall remain nameless), and the test failed on our build server because it used a type that behaves differently on 64-bit Ubuntu. – Michael Dec 03 '17 at 20:06
  • Signed integer overflow **does not have any defined behaviour**. Anything can happen. The code can print for example `800`. You don't know your compiler. Stop pretending before you get burned. – Antti Haapala -- Слава Україні Dec 03 '17 at 20:29
  • Ouch, I think I just did :). You are completely correct of course, this behaviour is not guaranteed and should not be relied on. I only hoped to expand the explanation on how information is stored since that seems to be the root of the OP's problem. The omission is hopefully corrected now. – Michael Dec 03 '17 at 20:47
  • Though, I was out of coffee: indeed there is no signed integer overflow here, because the math happens in `int`s, which will always fit 800 :D – Antti Haapala -- Слава Україні Dec 03 '17 at 20:54
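
Picking up the fixed-width suggestion from the first comment above, here is a minimal sketch using <stdint.h> (C99; the exact-width types are optional in the standard, but present on essentially all modern platforms):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t u = 4;
    u = (uint8_t)(u * 200);  /* exactly 800 mod 256 = 32, on every conforming platform */
    printf("%d\n", u);       /* prints 32; u promotes to int for printf */

    int8_t s = 4;
    s = (int8_t)(s * 250);   /* implementation-defined narrowing; typically -24 */
    printf("%d\n", s);
    return 0;
}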