
Program 1:

#include <stdio.h>
int main()
{
    if (sizeof(int) > -1)
        printf("Yes");
    else
        printf("No");
    return 0;
}

Output: No

Program 2:

#include <stdio.h>
int main()
{
    if (2 > -1)
        printf("Yes");
    else
        printf("No");
    return 0;
}

Output: Yes

Questions:

  1. What is the difference between program 1 and program 2?
  2. Why is sizeof(int) considered unsigned?
  3. Why is 2 in program 2 considered signed?
Spikatrix
  • `2` is a `signed int` literal. `2u` would be `unsigned`. – user123 Aug 08 '15 at 06:12
  • Does this answer your question? [Why sizeof(int) is not greater than -1?](https://stackoverflow.com/questions/24466857/why-sizeofint-is-not-greater-than-1) – phuclv Sep 21 '21 at 07:13
  • [why is -1>strlen(t) true in C?](https://stackoverflow.com/q/30295512/995714) – phuclv Sep 21 '21 at 07:14

3 Answers


This is a common issue with the usual arithmetic conversions between signed and unsigned integers. The sizeof operator returns a value of type size_t, which is some implementation-defined unsigned integer type, defined in <stddef.h> (see also this answer).

The integer constant -1 is of type int. When size_t is implemented as "at least" unsigned int (which is very likely in your case), both operands of the binary operator > are converted to the unsigned type. An unsigned value cannot be negative, hence -1 is converted into a very large number.

Grzegorz Szpetkowski

The type of the value returned by the sizeof operator is size_t, which is specified to be an unsigned type (often equivalent to unsigned long).

Simple plain integer literals, like 2 or -1, are always of type int, and int is signed.

If you want an unsigned integer literal, you have to add the U suffix, like 2U.

Some programmer dude
  • "*and int is signed*" -- Isn't it implementation-defined whether an `int` is `signed` or `unsigned`? – Spikatrix Aug 08 '15 at 06:33
  • 1
    @CoolGuy Yoiu're thinking about `char`, all other integer types are signed. – Some programmer dude Aug 08 '15 at 06:36
  • 2
    @Cool Guy @JoachimPileborg: Or a `int` bit-field. – cremno Aug 08 '15 at 07:21
  • Unfamiliar with the term "integer literals". Integer _decimal_ constants are always `int, long` or `long long`. Integer _hexadecimal_ constants are `int, unsigned, long, unsigned long, long long` or `unsigned long long`. Code can have an `unsigned` constant without a `u` suffix like `0xFFFFFFFF` in a 32-bit world. – chux - Reinstate Monica Aug 08 '15 at 21:33
  • @chux A literal is any constant value embedded directly in the code, so you can have character literals like `'A'`, string literals like `"Foo"`, floating point literals like `12.34` and of course integer literals like `5678`. Some of these literals can be represented in the source code in different formats, for integer literals one can use decimal notation (the default), hexadecimal, octal, and lately binary notation as well. – Some programmer dude Aug 09 '15 at 02:43
  • Thanks. The use here of _literal_ vs. _constant_ was not clear to me. What you call "character literal", the C spec calls "character constant". Your "floating point literal", the spec calls "floating-point constant". Your "integer literals" are the C spec's "integer constants", The only "literals" specified are "string literals" and "compound (string) literals". These _constants_ and _string-literal_ are 2 forms of a _primary-expression_ amongst other uses. – chux - Reinstate Monica Aug 09 '15 at 03:46

This is because the sizeof operator returns a value of type size_t. This is specified to be an unsigned type, often implemented as unsigned int.

The number 2 by itself is an int, not unsigned int.

PC Luddite