-8
    #include <stdio.h>

    int main(void)
    {
        char num = 127;
        num = num + 1;
        printf("%d", num);
        return 0;
    }

Output is: -128

Edit : I was a newbie at the time I posted this question. Lol :).

Da3kL10rd
  • 1
    How big do you think a `char` is, and what are its minimum and maximum values? – Steve Summit Sep 14 '19 at 13:31
  • For clarification Output is negative 128. – Da3kL10rd Sep 14 '19 at 13:31
  • @SteveSummit 1 byte – Da3kL10rd Sep 14 '19 at 13:32
  • On a 32-bit machine, you can typically see the same effect with `int num = 2147483647; num = num + 1;`. – Steve Summit Sep 14 '19 at 13:33
  • `char` is a *signed* byte. You might want to look at, for example, [Range of signed char](https://stackoverflow.com/questions/3898688/range-of-signed-char). Also study what is meant by [two's complement](http://en.wikipedia.org/wiki/Two's_complement). I have no idea what you mean by "should I add output shot". – lurker Sep 14 '19 at 13:33
  • `num = num + 1` causes an overflow. In your case it causes num to go from its highest possible value (127) to its lowest (-128). While this behavior is not guaranteed for a *signed* variable, this is a common result. – AugustinLopez Sep 14 '19 at 13:37
  • @SteveSummit Yeah it's giving a negative value – Da3kL10rd Sep 14 '19 at 13:38
  • @lurker I'm just asking whether I should add an image of my output. – Da3kL10rd Sep 14 '19 at 13:39
  • @lurker It's implementation-defined whether it's signed or not. – HolyBlackCat Sep 14 '19 at 13:40
  • @AugustinLopez Ohh .. this helped – Da3kL10rd Sep 14 '19 at 13:40
  • Sounds like you might like to learn how computers represent negative integers. Yours uses a scheme called [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement). It works really well, but it has this quirk: it wraps around from the biggest positive number to the most-negative number. – Steve Summit Sep 14 '19 at 13:41
  • @HolyBlackCat granted, in a few cases, but in this particular implementation, it's evidently signed. – lurker Sep 14 '19 at 13:42
  • @Da3kL10rd No need to post an image. We believe you. (Actually, plain-text posts are far preferred here over images.) – Steve Summit Sep 14 '19 at 13:43
  • @SteveSummit Okay – Da3kL10rd Sep 14 '19 at 13:45
  • 1
    @AugustinLopez: `num = num + 1` does not cause an overflow. `num` is automatically promoted to `int`, and then the addition is performed in `int`, which yields 128 without overflow. Then the assignment performs a conversion to `char`. This is not an overflow but, per C 2018 6.3.1.3, produces an implementation-defined result or signal. This differs from overflow because the C standard does not specify the behavior upon overflow at all, but, in this code, it specifies that the implementation must define the behavior. – Eric Postpischil Sep 14 '19 at 13:46
  • @EricPostpischil Sorry. I just read some rules for implementation-defined behaviour and unspecified behaviour. Helped me a lot. – Da3kL10rd Sep 14 '19 at 14:12
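
The comments above describe how `num + 1` is first computed as an `int` (the integer promotions), and only the assignment back into `char` produces the implementation-defined result. A minimal sketch of that sequence, assuming the common 8-bit signed `char` seen in the question:

    #include <stdio.h>

    int main(void)
    {
        char num = 127;
        int  as_int  = num + 1;  /* arithmetic is done in int: 128, no overflow */
        char as_char = num + 1;  /* converting 128 back to char is implementation-defined */
        printf("as int:  %d\n", as_int);   /* 128 */
        printf("as char: %d\n", as_char);  /* -128 on this implementation */
        return 0;
    }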

1 Answer

1

`char` is a type that, on most systems, is 1 byte of 8 bits. Your implementation seems to treat `char` as a signed type; on other implementations it could be unsigned. The maximum value of a signed type is 2^(n-1)-1, where n is the number of bits, so the maximum value of an 8-bit `char` is 2^(8-1)-1 = 2^7-1 = 128-1 = 127. The minimum value is -2^(n-1), which here is -128. When the result goes past the maximum value, it wraps around to the minimum value on this implementation. Hence 127+1 gives -128 when the result is stored back into a `char`.
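
If you want to check these limits on your own implementation, `<limits.h>` provides them; the commented values below assume the common case of an 8-bit, signed `char`:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("CHAR_BIT = %d\n", CHAR_BIT); /* bits per byte, typically 8 */
        printf("CHAR_MIN = %d\n", CHAR_MIN); /* -128 if char is signed, 0 if unsigned */
        printf("CHAR_MAX = %d\n", CHAR_MAX); /* 127 if char is signed, 255 if unsigned */
        return 0;
    }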

You should never use plain `char` for arithmetic; use `signed char` or `unsigned char` instead. If you replace your `char` with `unsigned char`, the program will print 128 as expected. Just note that wraparound can still happen: unsigned types have a range from 0 to 2^n-1, so an 8-bit `unsigned char` wraps if you add 1 to 255, giving you 0.
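
For example, a sketch of the `unsigned char` behaviour (again assuming 8 bits):

    #include <stdio.h>

    int main(void)
    {
        unsigned char num = 127;
        num = num + 1;
        printf("%d\n", num); /* 128 */

        unsigned char max = 255;
        max = max + 1;       /* wraps around modulo 256 */
        printf("%d\n", max); /* 0 */
        return 0;
    }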

DarkAtom
  • (a) `char` appears to be a signed type in OP’s C implementation. The C standard allows it to be signed or unsigned. (b) The C standard does not specify that, for signed integers, arithmetic wraps. In arithmetic, overflow results in behavior not defined at all by the C standard. In conversions, it results in behavior the implementation must define. (c) A `char` is always one byte, by definition of the C standard. It is the number of bits in a byte that is flexible, not whether a `char` is one byte. – Eric Postpischil Sep 14 '19 at 13:48
  • @EricPostpischil This is the reason why you *never ever* use `char` for storing numbers. Its purpose is storing characters (ASCII values). I said it is a signed type because that's what I saw in the question. For example, in my implementation it is unsigned. – DarkAtom Sep 14 '19 at 13:50
  • 1
    A `char` is *always* 1 byte. What *can* change from system to system is the size of the byte. On modern architectures it is almost always 8 bits. It used to not be so standard. – giusti Sep 14 '19 at 16:25
  • @giusti No! The signedness of `char` can change from an implementation to another. This is allowed by the C standard. – DarkAtom Sep 14 '19 at 18:39
  • I think you misread me. What I'm saying is that `sizeof (char)` is always 1. You are right about signedness, of course. That wasn't my point. – giusti Sep 14 '19 at 18:40
  • @giusti The standard doesn't require `sizeof` to return the number of bytes a type occupies. The standard even allows `sizeof(int)` to be 1. – DarkAtom Sep 14 '19 at 18:59