298

What is the difference between int, System.Int16, System.Int32 and System.Int64 other than their sizes?

Wai Ha Lee
Joby Kurian

12 Answers

442

Each integer type can store a different range of values:

   Type      Range

   Int16 -- (-32,768 to +32,767)

   Int32 -- (-2,147,483,648 to +2,147,483,647)

   Int64 -- (-9,223,372,036,854,775,808 to +9,223,372,036,854,775,807)

As stated by James Sutherland in his answer:

int and Int32 are indeed synonymous; int will be a little more familiar looking, Int32 makes the 32-bitness more explicit to those reading your code. I would be inclined to use int where I just need 'an integer', Int32 where the size is important (cryptographic code, structures) so future maintainers will know it's safe to enlarge an int if appropriate, but should take care changing Int32 variables in the same way.

The resulting code will be identical: the difference is purely one of readability or code appearance.
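For anyone who wants to see these figures live, each struct exposes its range as MinValue/MaxValue constants, and the alias claim can be checked with typeof. A small sketch (nothing here beyond the System namespace):

```csharp
using System;

class RangeDemo
{
    static void Main()
    {
        // int is the C# alias for System.Int32, so they are the same runtime type
        Console.WriteLine(typeof(int) == typeof(Int32));            // True

        // Each struct carries its range as compile-time constants
        Console.WriteLine($"Int16: {Int16.MinValue} to {Int16.MaxValue}");
        Console.WriteLine($"Int32: {Int32.MinValue} to {Int32.MaxValue}");
        Console.WriteLine($"Int64: {Int64.MinValue} to {Int64.MaxValue}");
    }
}
```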

    If you know the value isn't going to be exceed 65,535(signed) or −32,768 to 32,767(unsigned) wouldn't it be better to define the integer as `int16` to save memory resource, opposed to simply using `int`? – ᴍᴀᴛᴛ ʙᴀᴋᴇʀ Apr 02 '14 at 11:30
  • 10
    int and int32 can be synonymous, but they need not be. Nowadays, most systems sold are 64-bit in which case an int will be 64 bits. – Martijn Otto Jan 13 '15 at 13:09
  • 4
    For Matthew T. Baker and any others like myself who came here trying to decide which to use from a performance standpoint, you should check out this post that suggests Integer is more efficient than Int16 in many cases: http://stackoverflow.com/questions/129023/net-integer-vs-int16 – Tony L. Mar 10 '15 at 02:34
  • 25
    @MartijnOtto The question is tagged C#. In C#, `int` is *always* `Int32`, regardless of the system. Perhaps you're thinking of C++? –  Jul 06 '15 at 07:31
  • 1
    I had `int` variable declaration and got a run time exception `Value was either too large or too small for an Int16. at System.Int16.Parse(String s, NumberStyles style, NumberFormatInfo info)` Replaced declaration with `Int32` and the problem was solved. – ajeh Dec 13 '16 at 19:16
  • 10
    @MattBaker: In general, on modern computers, an int16 takes as much space as an int32 (and actually an int64) because in order for most operations to be efficient, we pad around the data to make accesses aligned to 32 or 64 bit boundaries (in 32 or 64 bit modes respectively). This is because unaligned accesses are ridiculously inefficient on some architectures, and not possible on others. – Joel Mar 27 '17 at 21:21
  • 1
    @Matt Baker you have signed and unsigned mixed up in your comment. – michaelmsm89 Sep 15 '17 at 10:52
  • int is an alias for Int32, long is an alias for Int64 – Billu Dec 29 '22 at 11:18
135

The only real difference here is the size. All of the int types here are signed integer values which have varying sizes:

  • Int16: 2 bytes
  • Int32 and int: 4 bytes
  • Int64: 8 bytes

There is one small difference between Int64 and the rest. On a 32 bit platform assignments to an Int64 storage location are not guaranteed to be atomic. It is guaranteed for all of the other types.
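Both points can be sketched in a few lines: sizeof confirms the byte counts, and Interlocked.Read/Interlocked.Exchange are the standard way to get guaranteed-atomic 64-bit access even on a 32-bit platform:

```csharp
using System;
using System.Threading;

class SizeDemo
{
    // Plain 64-bit writes to this field are not guaranteed atomic on a 32-bit CLR
    static long sharedCounter;

    static void Main()
    {
        Console.WriteLine(sizeof(short)); // 2 bytes -- Int16
        Console.WriteLine(sizeof(int));   // 4 bytes -- Int32
        Console.WriteLine(sizeof(long));  // 8 bytes -- Int64

        // Interlocked guarantees the 64-bit write and read happen atomically
        Interlocked.Exchange(ref sharedCounter, 42L);
        Console.WriteLine(Interlocked.Read(ref sharedCounter)); // 42
    }
}
```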

JaredPar
80

int

It is a primitive data type defined in C#.

It is mapped to the FCL type Int32.

It is a value type and represents the System.Int32 struct.

It is signed and takes 32 bits.

Its minimum value is -2,147,483,648 and its maximum is +2,147,483,647.

Int16

It is an FCL type.

In C#, short is mapped to Int16.

It is a value type and represents the System.Int16 struct.

It is signed and takes 16 bits.

Its minimum value is -32,768 and its maximum is +32,767.

Int32

It is an FCL type.

In C#, int is mapped to Int32.

It is a value type and represents the System.Int32 struct.

It is signed and takes 32 bits.

Its minimum value is -2,147,483,648 and its maximum is +2,147,483,647.

Int64

It is an FCL type.

In C#, long is mapped to Int64.

It is a value type and represents the System.Int64 struct.

It is signed and takes 64 bits.

Its minimum value is -9,223,372,036,854,775,808 and its maximum is +9,223,372,036,854,775,807.
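A quick sketch tying the keyword-to-struct mapping above together, plus what happens when a value leaves its type's range in a checked context:

```csharp
using System;

class MappingDemo
{
    static void Main()
    {
        Console.WriteLine(typeof(short) == typeof(Int16)); // True
        Console.WriteLine(typeof(int)   == typeof(Int32)); // True
        Console.WriteLine(typeof(long)  == typeof(Int64)); // True

        // Exceeding a type's maximum throws when overflow checking is on
        try
        {
            short s = Int16.MaxValue;
            checked { s += 1; }               // 32767 + 1 does not fit in 16 bits
        }
        catch (OverflowException)
        {
            Console.WriteLine("Int16 overflow caught");
        }
    }
}
```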

Sieg
Haris N I
  • Just to add that the `Int64` data type can be represented using an `L` or `l` suffix, while `Int16` and `Int32` have no suffix in C#. – RBT May 03 '17 at 11:11
17

According to Jeffrey Richter (one of the contributors to .NET framework development) in his book 'CLR via C#':

int is a primitive type allowed by the C# compiler, whereas Int32 is the Framework Class Library type (available across languages that abide by CLS). In fact, int translates to Int32 during compilation.

Also,

In C#, long maps to System.Int64, but in a different programming language, long could map to Int16 or Int32. In fact, C++/CLI does treat long as Int32.

In fact, most (.NET) languages won't even treat long as a keyword and won't compile code that uses it.

I have seen this author, and much of the standard literature on .NET, prefer the FCL types (i.e., Int32) to the language-specific primitive types (i.e., int), mainly because of such interoperability concerns.

Praveen
14

They tell you what size of value can be stored in an integer variable. To remember the sizes you can think in terms of :-) 2 beers (2 bytes), 4 beers (4 bytes) or 8 beers (8 bytes).

  • Int16 :- 2 beers/bytes = 16 bits = 2^16 = 65,536 values = -32,768 to 32,767

  • Int32 :- 4 beers/bytes = 32 bits = 2^32 = 4,294,967,296 values = -2,147,483,648 to 2,147,483,647

  • Int64 :- 8 beers/bytes = 64 bits = 2^64 = 18,446,744,073,709,551,616 values = -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

In short, you cannot store a value greater than 32,767 in an Int16, greater than 2,147,483,647 in an Int32, or greater than 9,223,372,036,854,775,807 in an Int64.

To understand the calculation above, you can check out this video: int16 vs int32 vs int64
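The arithmetic above can also be written with bit shifts: an n-bit signed type covers -2^(n-1) to 2^(n-1) - 1. A small sketch:

```csharp
using System;

class BeerMath
{
    static void Main()
    {
        // n-bit signed range: -2^(n-1) to 2^(n-1) - 1
        Console.WriteLine($"Int16: {-(1L << 15)} to {(1L << 15) - 1}");
        Console.WriteLine($"Int32: {-(1L << 31)} to {(1L << 31) - 1}");

        // 2^63 does not fit in a long, so read Int64's range off its constants
        Console.WriteLine($"Int64: {long.MinValue} to {long.MaxValue}");
    }
}
```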

Abhijit
Shivprasad Koirala
8

A very important note on the Int16, Int32 and Int64 types:

if you run this query... Array.IndexOf(new Int16[]{1,2,3}, 1)

you are supposed to get zero (0), because you are asking whether 1 is within the array of 1, 2 and 3. If you get -1 as the answer, it means 1 is not within the array of 1, 2 and 3.

Well, check out what I found: all of the following should give you 0 and not -1 (I've tested this in framework versions 2.0, 3.0, 3.5 and 4.0)

C#:

Array.IndexOf(new Int16[]{1,2,3}, 1) = -1 (not correct)
Array.IndexOf(new Int32[]{1,2,3}, 1) = 0 (correct)
Array.IndexOf(new Int64[]{1,2,3}, 1) = 0 (correct)

VB.NET:

Array.IndexOf(new Int16(){1,2,3}, 1) = -1 (not correct)
Array.IndexOf(new Int32(){1,2,3}, 1) = 0 (correct)
Array.IndexOf(new Int64(){1,2,3}, 1) = -1 (not correct)

So my point is, for Array.IndexOf comparisons, only trust Int32!
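As the comments point out, the failing case goes through the non-generic Object-based overload, where a boxed int never equals a boxed short; casting the search value to the element type restores the expected result. A sketch:

```csharp
using System;

class IndexOfDemo
{
    static void Main()
    {
        // The literal 1 is an int, so generic type inference fails and the
        // Object overload is used: boxed int 1 never equals boxed short 1
        Console.WriteLine(Array.IndexOf(new Int16[] { 1, 2, 3 }, 1));        // -1

        // Cast the needle to the element type and the typed overload finds it
        Console.WriteLine(Array.IndexOf(new Int16[] { 1, 2, 3 }, (short)1)); // 0
    }
}
```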

tno2007
    To clarify why the first example works that way: the first literal 1, the 2, and the 3 are implicitly cast to `short` to fit them in the array, while the second literal 1 is left as an ordinary `int`. `(int)1` is not considered equal to `(short)1`, `(short)2`, `(short)3`, thus the result is -1. –  Mar 26 '14 at 14:43
  • 4
    There'd be a similar adjustment available to the C# versions, but FYI a simple type specifier fixes this issue: `Array.IndexOf(new Int16(){1,2,3}, 1S)` `Array.IndexOf(new Int32(){1,2,3}, 1I)` `Array.IndexOf(new Int64(){1,2,3}, 1L)` all work as expected. – Mark Hurd Jun 12 '14 at 02:06
  • 1
    And the ones that don't work have used the `Object[],Object` overload. C# is implicitly raising the `int` to a `long` when needed (and also raises a `short` to an `int` or `long`), but will not implicitly cast down, using the `object` overload instead. With `Option Strict On` or `Off` VB will only use the typed overload when provided the uniform types, otherwise it uses the `object` overload. – Mark Hurd Jun 12 '14 at 02:31
  • Your answer is misleading. The code is comparing values of different types. The conclusion `for Array.IndexOf comparisons, only trust Int32!` is wrong. If you cast the final `1` argument to the corresponding Array type, it works as expected. – Don Cheadle May 04 '17 at 19:45
  • This is a very interesting and unexpected behaviour (since int to long works but short to int doesn't) hence the upvote! – sth_Weird Jun 08 '17 at 08:12
  • @DonCheadle Yeah, maybe. But e.g. in Java the value within the array would automatically be interpreted as `int` (or `long` if that's given), so this is certainly a little unexpected. – Maarten Bodewes May 19 '19 at 14:30
8

EDIT: This isn't quite true for C#, a tag I missed when I answered this question - if there is a more C# specific answer, please vote for that instead!


They all represent integer numbers of varying sizes.

However, there's one tiny difference.

Int16, Int32 and Int64 all have a fixed size.

The size of an int depends on the architecture you are compiling for - the C spec only defines an int as larger than or equal to a short. In practice it's the word width of the processor you're targeting, which is probably 32-bit, but you should know that it might not be.

deanWombourne
  • 1
    This should be the accepted answer as this is the only one which is actually correct – mjs Nov 21 '14 at 10:31
  • 3
    No, this isn't true for C#. A C# int is always 32 bits in size. For C, yes you had to deal with this complication and you often see macros in C code to deal with variable int sizes. See http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-334.pdf page 18. – Ananke Oct 05 '15 at 15:20
  • @Ananke Ahh, missed the C# tag. Wish I could revert some votes on the answers..... – mjs Sep 28 '16 at 08:31
7

Nothing. The sole difference between the types is their size (and, hence, the range of values they can represent).

7
  1. int and Int32 are one and the same (a 32-bit integer)
  2. Int16 is the short int type (2 bytes or 16 bits)
  3. Int64 is the long datatype (8 bytes or 64 bits)
Sunil Kumar B M
  • 2
    int is not guaranteed to be 32 bits. – mjs Nov 21 '14 at 10:29
  • 10
    @mjs, this is simply not true. In C#, `int` is an alias for `Int32` and thus is always guaranteed to be 32 bits. – David Arno Oct 26 '15 at 11:59
  • 1
    Actually what mjs says is correct, INT means integer based on the system x86 or x64, so if your system is x64, int will be Int64, therefore is not guaranteed to be 32.. If you put int32 in a x64 will always be int32. – Yogurtu Apr 06 '18 at 17:09
  • 6
    No, what David Arno says is correct. The C# language specifically defines 'int' to mean a 32-bit integer (Int32). Other languages (C/C++, etc.) may not specify that, but this question is tagged 'C#'. – Theo Brinkman Sep 14 '18 at 17:40
  • @TheoBrinkman Correct, here's Microsoft's page on C#s integral numeric types: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/integral-numeric-types – Max Barraclough May 11 '20 at 08:52
5

They both are indeed synonymous; however, I found a few small differences between them:

1) You cannot use Int32 when specifying the underlying type of an enum:

enum Test : Int32   // gives you a compilation error
{
    XXX = 1
}

enum Test : int     // works fine
{
    XXX = 1
}

2) Int32 comes from the System namespace. If you remove `using System;` you will get a compilation error (unless you fully qualify it as System.Int32), but not in the case of int.
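Point 2 can be sketched as follows -- the fully qualified name still compiles without the using directive, while the keyword never needs one (hypothetical file, with `using System;` deliberately omitted):

```csharp
// Deliberately no "using System;" at the top of this file

class NoUsingDemo
{
    static void Main()
    {
        int a = 1;            // fine: int is a language keyword, not a namespace member
        System.Int32 b = 2;   // fine with the fully qualified name
        // Int32 c = 3;       // compilation error without "using System;"
        System.Console.WriteLine(a + b); // 3
    }
}
```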

this.girish
0

The answers above are about right: int, Int16, Int32... differ based on their data-holding capacity. But here is why compilers have to deal with these - it is to solve the potential Year 2038 problem. Check out the link to learn more about it: https://en.wikipedia.org/wiki/Year_2038_problem

iOS_nerd
    Thank you for your answer. But the reason a compiler has to deal with different int-sized is not about the year 2038 problem. A compiler just compiles as given. And the solution to the Year 2038 is to use a larger int size indeed, but neither got the 64-bit integer invented because that that, nor does it solve the problem on 32-bit hardware. – McK Apr 25 '21 at 13:50
  • Yes, the solution to the problem is using a larger int size. The compiler will never allow storing a large date which doesn't fit in the specified Int16 or Int32 type. We developers will also be clear that an Int32 can only store -2,147,483,648 to 2,147,483,647. If we need to store a future timestamp we have to use an unsigned Int32. – iOS_nerd Aug 14 '21 at 13:15
-11

Int=Int32 --> Original long type

Int16 --> Original int

Int64 --> New data type become available after 64 bit systems

"int" is only available for backward compatibility. We should be really using new int types to make our programs more precise.

---------------

One more thing I noticed along the way is that there is no class named Int similar to Int16, Int32 and Int64. All the helpful functions, like TryParse for integers, come from Int32.TryParse.
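For what it's worth, because the compiler treats int as an alias for Int32, the TryParse helper is reachable through the keyword as well. A sketch:

```csharp
using System;

class TryParseDemo
{
    static void Main()
    {
        // int.TryParse and Int32.TryParse resolve to the very same method
        int viaKeyword;
        Int32 viaStruct;
        Console.WriteLine(int.TryParse("123", out viaKeyword));   // True
        Console.WriteLine(Int32.TryParse("123", out viaStruct));  // True
        Console.WriteLine(viaKeyword == viaStruct);               // True
    }
}
```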

рüффп
Mahesh