335

In C, the integer (for a 32-bit machine) is 32 bits, and it ranges from -32,768 to +32,767. In Java, the integer (long) is also 32 bits, but ranges from -2,147,483,648 to +2,147,483,647.

I do not understand how the range is different in Java, even though the number of bits is the same. Can someone explain this?

stackuser

11 Answers

440

In C, the language itself does not determine the representation of certain datatypes. It can vary from machine to machine; on embedded systems an int can be 16 bits wide, though usually it is 32 bits.

The only requirement is that short int <= int <= long int by size. Also, there is a recommendation that int should represent the native capacity of the processor.

All types are signed. The unsigned modifier allows you to use the highest bit as part of the value (otherwise it is reserved for the sign bit).

Here's a short table of the possible values for the possible data types:

          width                     minimum                         maximum
signed    8 bit                        -128                            +127
signed   16 bit                     -32 768                         +32 767
signed   32 bit              -2 147 483 648                  +2 147 483 647
signed   64 bit  -9 223 372 036 854 775 808      +9 223 372 036 854 775 807
unsigned  8 bit                           0                            +255
unsigned 16 bit                           0                         +65 535
unsigned 32 bit                           0                  +4 294 967 295
unsigned 64 bit                           0     +18 446 744 073 709 551 615
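
If you want to see which of these widths your own C implementation actually uses, here is a minimal sketch (assuming a hosted C99 compiler, so `%zu` is available) that prints the values straight from `<limits.h>`:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a byte (almost always 8). */
    printf("short: %zu bits, %d to %d\n",
           sizeof(short) * CHAR_BIT, SHRT_MIN, SHRT_MAX);
    printf("int:   %zu bits, %d to %d\n",
           sizeof(int) * CHAR_BIT, INT_MIN, INT_MAX);
    printf("long:  %zu bits, %ld to %ld\n",
           sizeof(long) * CHAR_BIT, LONG_MIN, LONG_MAX);
    printf("unsigned int max: %u\n", UINT_MAX);
    return 0;
}
```

On a typical desktop platform this reports a 32-bit int, but on a small embedded target it may well report 16 bits.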

In Java, the Java Language Specification determines the representation of the data types.

The order is: byte 8 bits, short 16 bits, int 32 bits, long 64 bits. All of these types are signed; there are no unsigned versions. However, bit manipulations treat the numbers as if they were unsigned (that is, all bits are handled correctly).

The character data type char is 16 bits wide, unsigned, and holds characters using UTF-16 encoding (however, it is possible to assign a char an arbitrary unsigned 16-bit integer that represents an invalid character code point).

          width                     minimum                         maximum

SIGNED
byte:     8 bit                        -128                            +127
short:   16 bit                     -32 768                         +32 767
int:     32 bit              -2 147 483 648                  +2 147 483 647
long:    64 bit  -9 223 372 036 854 775 808      +9 223 372 036 854 775 807

UNSIGNED
char     16 bit                           0                         +65 535
gaborsch
  • The C standard also specifies minimum values for INT_MAX, LONG_MAX, etc. – Oliver Charlesworth Feb 21 '13 at 14:51
  • Java 8 now has unsigned Integer as well: http://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html – Jakub Kotowski Jun 26 '14 at 21:56
  • Thanks, @jkbkot, good to know that. Although it seems that the representation is still signed, but certain unsigned operations are implemented as a function. It's hard to add two unsigned `int`s... – gaborsch Jun 27 '14 at 07:35
  • @GaborSch In Java, `int foo = Integer.MAX_VALUE + 1; System.out.println(Integer.toUnsignedLong(foo));` prints `2147483648` and [_char_ is an unsigned type](https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html) – howlger Oct 07 '17 at 09:38
  • @howlger `Integer.MAX_VALUE + 1` is `0x80000000` in hex, because of the overflow (and equals `Integer.MIN_VALUE`). If you convert it to unsigned (long), the sign bit will be treated like a value bit, so it will be `2147483648`. Thank you for the `char` note. `char` is unsigned, you're right, but char is not really used for calculations, which is why I left it out of the list. – gaborsch Oct 08 '17 at 17:59
  • The top bit of a signed number is not used for a 'sign bit'. The top bit represents (for an 8-bit value) -128 or 0. If it were a sign bit, the range of values around 0 would be symmetrical. Your comment is only true for a one's complement representation, not the two's complement which is in use in all modern machines. – Dale Stanbrough Jun 30 '19 at 01:46
  • @DaleStanbrough You are right in the sense that modern processors use 2's complement. In the tables, it is clearly visible that e.g. -128 has an 8-bit representation (in 1's complement that is not possible). Still, my remark holds, because if the highest bit is set, it represents a negative number, otherwise a non-negative number, thus this is the _sign bit_. You can call it a _negative sign bit_ if you want, but it's the same for 1's and 2's complements. – gaborsch Jul 01 '19 at 08:45
  • Note also that the Java language specification *specifies* that two's-complement representation is used. (The C / C++ specs don't yet, I think, but http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2218.htm ) – Stephen C Feb 14 '20 at 07:34
83

> In C, the integer(for 32 bit machine) is 32 bit and it ranges from -32768 to +32767.

Wrong. A 32-bit signed integer in 2's complement representation has the range -2^31 to 2^31 - 1, which is equal to -2,147,483,648 to 2,147,483,647.
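
To see where those two bounds come from, here is a small sketch that computes them in 64-bit arithmetic (so that 2^31 itself does not overflow a 32-bit int):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 2^31 computed in 64 bits, because it does not fit in a signed 32-bit int */
    int64_t two_to_31 = (int64_t)1 << 31;

    printf("2^31      = %lld\n", (long long)two_to_31);       /*  2147483648 */
    printf("min int32 = %lld\n", (long long)-two_to_31);      /* -2147483648 */
    printf("max int32 = %lld\n", (long long)(two_to_31 - 1)); /*  2147483647 */
    return 0;
}
```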

unwind
Kos
24

A 32-bit integer ranges from -2,147,483,648 to 2,147,483,647. However, the fact that you are on a 32-bit machine does not mean your C compiler uses 32-bit integers.
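
If a piece of code really does assume 32-bit ints, one way to let the compiler, rather than the machine, confirm that assumption is a compile-time check; a sketch assuming a C11 compiler (for `_Static_assert`):

```c
#include <limits.h>

/* Compilation fails on any implementation where int is not exactly 32 bits
   (ignoring the possibility of padding bits, which are vanishingly rare). */
_Static_assert(sizeof(int) * CHAR_BIT == 32, "this code assumes a 32-bit int");

int main(void) { return 0; }
```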

Ivaylo Strandjev
  • At least my copy of Mr. Kernighan and Mr. Ritchie's "The C Programming Language" says in A4.2 that `int` is of the "natural width of the machine", which I'd interpret as 32 bits when compiling for 32-bit machines. – junix Feb 21 '13 at 14:52
  • This depends on the compiler, not the machine, I believe. I had a 16-bit compiler installed on my 64-bit machine, for instance. – Ivaylo Strandjev Feb 21 '13 at 14:55
  • Of course your 16-bit compiler for 16-bit x86 code did only use 16 bits. But that was not my point. Even a 32-bit x86 processor running in 16-bit mode has a native capacity of only 16 bits. My point is that the target platform of the compiler matters. E.g. if you have a compiler for your 80286, you will still generate 16-bit code and hence have 16-bit integers. – junix Feb 21 '13 at 15:06
  • @junix I believe that is exactly what I point out in my answer. It is not the OS that specifies how many bits your integers have. The target platform is a property of the compiler, not of the OS it is working on or the processor you have. – Ivaylo Strandjev Feb 21 '13 at 15:08
  • As I wrote in my first comment, "it's 32 bits when compiling for 32-bit machines". The OP writes in his posting "the integer (for 32 bit machine)", so from what I understand he is not referring to his OS or his machine; he is referring to his target platform. – junix Feb 21 '13 at 15:13
  • @junix and you **really think** that in his 32-bit C compiler the integer ranges from `-32768 to +32767`? Doesn't it seem to you that the OP mixes OS and compiler bitness? – Ivaylo Strandjev Feb 21 '13 at 15:15
  • No, it seems to me that the OP read somewhere about the **minimum ranges** for `int`, which is indeed `-32768 to +32767`, but did not notice that this is only a minimum and the real range is machine dependent. And exactly that link to machine dependency is what I'm missing in your explanation. Hence my first comment. – junix Feb 21 '13 at 15:18
  • @junix C specifies the minimum range for `int` as `-32767`, not `-32768`. – ouah Feb 21 '13 at 16:41
  • @ouah First, I copied the range from Ivaylo without thinking about it. Second, the range of a two's complement 16-bit number is, according to my literature, starting at -32768. – junix Feb 21 '13 at 17:08
16

The C language definition specifies minimum ranges for various data types. For int, this minimum range is -32767 to 32767, meaning an int must be at least 16 bits wide. An implementation is free to provide a wider int type with a correspondingly wider range. For example, on the SLES 10 development server I work on, the range is -2147483648 to 2147483647.

There are still some systems out there that use 16-bit int types (All The World Is Not A ~~VAX~~ x86), but there are plenty that use 32-bit int types, and maybe a few that use 64-bit.

The C language was designed to run on different architectures. Java was designed to run in a virtual machine that hides those architectural differences.
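
In practice this means portable C that needs values beyond +/-32767 should not rely on plain `int`; a sketch of the usual options (the variable names are just for illustration):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* long is guaranteed to cover at least -2147483647 .. 2147483647 */
    long population = 2000000000L;

    /* int_least32_t is the smallest type with at least 32 bits */
    int_least32_t counter = 2000000000;

    printf("%ld %ld\n", population, (long)counter);
    return 0;
}
```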

John Bode
  • For 16-bit int, it is -3276**8** to 32767. For 32-bit int, it is -214748364**8** to 2147483647. Range is specified from -2^(n bits-1) to +2^(n bits-1) - 1. – mythicalcoder May 24 '15 at 15:05
  • @Maven: 5.2.4.2.1 - `INT_MIN` is specified as `-32767`. Don't assume two's complement. – John Bode May 24 '15 at 18:39
10

The strict equivalent of the Java int is long int in C.

Edit: If int32_t is defined, then it is the equivalent in terms of precision. long int guarantees the precision of the Java int, because it is guaranteed to be at least 32 bits in size.
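
A sketch of how that choice might look in practice, assuming `<stdint.h>` is available (the `java_int` typedef is purely illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* A C type wide enough for any Java int (-2147483648 .. 2147483647). */
#ifdef INT32_MAX
typedef int32_t java_int;   /* exactly 32 bits, where the implementation provides it */
#else
typedef long java_int;      /* long is guaranteed to be at least 32 bits */
#endif

int main(void)
{
    java_int x = 2147483647;   /* Java's Integer.MAX_VALUE */
    printf("%ld\n", (long)x);
    return 0;
}
```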

UmNyobe
8

The poster has their Java types mixed up. In Java, the C int they describe is a short: short (16 bit) = -32768 to 32767; int (32 bit) = -2,147,483,648 to 2,147,483,647.

http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html

Brill Pappin
7

That's because in C, an integer on a 32-bit machine doesn't mean that 32 bits are used to store it; it may be 16 bits as well. It depends on the machine (it is implementation-defined).

BlueLettuce16
  • Well, it's worth noting that the typical implementation behavior is using the "machine width" for `int`. But `limits.h` helps find out what the exact truth is. – junix Feb 21 '13 at 14:48
  • But in reality, I don't think a C compiler for a 32-bit machine has ever been made without int as 32 bits. The standard may allow the compiler implementation of int to be of a moronic nature, but for some reason, nobody wants to make a moronic C compiler. The trend is to make useful C compilers. – Lundin Feb 21 '13 at 15:57
5

Actually, the size in bits of int, short and long depends on the compiler implementation.

E.g. on my 64-bit Ubuntu a long is 64 bits, while on a 32-bit Ubuntu version it is 32 bits.

Alex
5

It is actually really simple to understand; you can even compute it with the Google calculator. You have 32 bits for an int, and computers are binary, so each bit can take 2 values. 2^32 gives you 4,294,967,296, the number of distinct values 32 bits can represent. Half of them are negative and half are non-negative, so each half gets 2,147,483,648 values. The non-negative half also has to include 0, so the biggest positive int is one less, 2,147,483,647, while the negative half runs all the way down to -2,147,483,648.

And that's it. It depends on the machine, not on the language.
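
For the skeptical reader, the same counting argument as a tiny C sketch, done in 64-bit unsigned arithmetic so that none of the intermediate values overflow:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t patterns     = (uint64_t)1 << 32; /* 4294967296 distinct 32-bit patterns */
    uint64_t negatives    = patterns / 2;      /* 2147483648 values: -2^31 .. -1      */
    uint64_t non_negative = patterns / 2;      /* 2147483648 values:  0 .. 2^31 - 1   */

    printf("total patterns  : %llu\n", (unsigned long long)patterns);
    printf("most negative   : -%llu\n", (unsigned long long)negatives);         /* -2147483648 */
    printf("largest positive: %llu\n", (unsigned long long)(non_negative - 1)); /* 2147483647, zero takes one slot */
    return 0;
}
```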

Emos Turi
  • This is not what he asked for... the question is "why C int is different from Java int?" – ElectronWill Jan 03 '20 at 10:37
  • And in Java, the size of `int` **does not** depend on the machine. `int` == 32-bit signed, two's-complement is defined by the Java language specification, and engraved on sheets of anodized [unobtainium](https://en.wikipedia.org/wiki/Unobtainium). (OK, maybe not the last bit.) – Stephen C Feb 14 '20 at 07:30
2

In C (with Microsoft's compiler), the range for __int32 is -2147483648 to 2147483647. See the compiler documentation for the full list of ranges.

unsigned short  0 to 65535
signed short    -32768 to 32767
unsigned long   0 to 4294967295
signed long     -2147483648 to 2147483647

There is no guarantee that an 'int' will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulation, you should use the 'Standard Integer Types' from <stdint.h>.
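
A short sketch of what using those fixed-width types looks like for bit manipulation, assuming a C99 `<stdint.h>`/`<inttypes.h>`:

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t flags = 0;

    flags |= UINT32_C(1) << 31;  /* setting the top bit is well defined for an unsigned 32-bit type */
    flags |= UINT32_C(1) << 0;   /* set the bottom bit */

    printf("flags = 0x%08" PRIX32 "\n", flags);   /* prints 0x80000001 */
    return 0;
}
```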

In Java

The int data type is a 32-bit signed two's complement integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647 (inclusive).

Achintya Jha
0

In standard C, you can use INT_MAX as the maximum 'int' value; this constant must be defined in "limits.h". Similar constants are defined for other types (http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.5.html). As stated, these constants are implementation-dependent, but they have minimum values according to the minimum widths for each type, as specified in the standard.
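
As an example of how these constants get used, here is a minimal sketch (the `add_checked` helper is just illustrative) that consults INT_MAX and INT_MIN to avoid signed overflow, which is undefined behaviour in C:

```c
#include <limits.h>
#include <stdio.h>

/* Returns 1 and stores a+b in *result if it fits in an int, 0 otherwise. */
static int add_checked(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return 0;                 /* would overflow */
    *result = a + b;
    return 1;
}

int main(void)
{
    int sum;
    if (add_checked(INT_MAX, 1, &sum))
        printf("sum = %d\n", sum);
    else
        printf("overflow avoided\n");
    return 0;
}
```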

Carlos UA
  • This doesn't really get around to addressing the OP's question. Also, core parts of an answer really shouldn't be buried on another site. – Brad Koch May 27 '14 at 14:01