58

In Core Data you can store Int16, Int32, and Int64, but these are different from Int. What is the reason for their existence, and how do you use them?

János
  • 3
    `Int` is like C's `int`: it is supposed to reflect the native word size (32 or 64 bit). However, `Int32` is (obviously) always 32-bit wide. – The Paramagnetic Croissant Dec 12 '14 at 09:08
  • 5
    @TheParamagneticCroissant: C's `int` is 32 bit on **all** current OS X and iOS architectures (even on 64 bit devices) and actually on all current Unix architectures that I know. Swift's `Int` is like Objective-C's `NSInteger` which is 32 or 64 bit. – Martin R Dec 12 '14 at 09:21
  • 2
    @MartinR Sorry if I was unclear, but I didn't mean to assert anywhere that C's `int` **is** 64-bit on 64-bit platforms. All I'm saying is that it was designed to be used as a word-sized integer; the fact that most modern implementations don't respect this intention is sad but irrelevant. Beware of the word "should" in my comment. I also used "like" (as opposed to "as") since I know their actual behavior is only similar, not identical. It's only the general idea of a platform-dependent integer size that is shared by the two data types, not the actual implementation details. – The Paramagnetic Croissant Dec 12 '14 at 10:37
  • My app's base deployment target is iOS 11, which implies Int == Int64. So practically I use (and it makes sense to use) just one type, i.e. Int. I don't see any reason or benefit to use Int16 or Int32 here. – BangOperator Dec 30 '19 at 08:08

4 Answers

91

According to the Swift Documentation

Int

In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:

On a 32-bit platform, Int is the same size as Int32.

On a 64-bit platform, Int is the same size as Int64.

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
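
A quick way to check this on your own machine is to inspect the sizes at runtime. This is only a minimal sketch; the values in the comments assume a 64-bit device or simulator:

```swift
// Inspect how wide Int is on the current platform.
print(MemoryLayout<Int>.size)    // 8 on a 64-bit platform, 4 on a 32-bit platform
print(MemoryLayout<Int32>.size)  // always 4
print(MemoryLayout<Int64>.size)  // always 8

// On a 64-bit platform Int covers the same range as Int64:
print(Int.min, Int.max)          // -9223372036854775808 9223372036854775807
print(Int32.min, Int32.max)      // -2147483648 2147483647
print(Int64.min, Int64.max)      // -9223372036854775808 9223372036854775807
```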

Cheolhyun
Jørgen R
  • 8
    Thanks for this reference. I was led astray by misleading [Int documentation](https://developer.apple.com/library/prerelease/ios/documentation/Swift/Reference/Swift_Int_Structure/index.html#//apple_ref/swift/struct/s:Si), which says: `A 64-bit signed integer value type.` without mention of architecture. – Jon Brooks Oct 23 '15 at 03:55
  • 6
    Using `Int` gives problems when you are working with big numbers and the device is an iPhone 4/4s, because the 64-bit processor was introduced with the iPhone 5. I have learned this the hard way ;) – Rohan Sanap May 06 '16 at 11:20
  • FWIW, at least as of today (July 2017), the Int documentation correctly reflects the platform difference. – Palpatim Jul 25 '17 at 22:30
  • 1
    @RohanSanap I have to correct you, in case somebody relies on this information here: **64-bit was introduced with iPhone 5S** in 2013. The first iPad with 64-bit support was the iPad Air, also from 2013. – heyfrank Mar 14 '19 at 10:00
  • @fl034 oops. Thank you! Leaving my comment undeleted for readers' reference. – Rohan Sanap Mar 14 '19 at 10:05
  • Make sure to coordinate with the backend team and fix it to Int32 if there's a limit on the data type used in the server/cloud DB. – Ammar Mujeeb Aug 29 '22 at 11:15
7

Swift's Int matches the platform's native word size: it is the same size as Int32 on a 32-bit platform and as Int64 on a 64-bit platform.

As a programmer, you should not declare a fixed-size integer type (e.g. Int32, Int64) unless you really need it. For example, when you are working on a 32-bit platform with numbers that cannot be represented in 4 bytes, you can declare Int64 instead of Int (a sketch of converting at the Core Data boundary follows the ranges below).

  • Int32: 4 bytes: from −2147483648 to +2147483647
  • Int64: 8 bytes: from −9223372036854775808 to +9223372036854775807
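
Since the question mentions Core Data, whose scalar attributes are fixed-width, here is a minimal sketch of converting at the model boundary; the `Book` entity, its `pageCount` attribute, and the helper functions are hypothetical, not taken from the question:

```swift
import CoreData

// Hypothetical NSManagedObject subclass: the stored property is a
// fixed-width Int32, matching an "Integer 32" attribute in the model.
final class Book: NSManagedObject {
    @NSManaged var pageCount: Int32
}

// Elsewhere in the app you can keep using plain Int and convert only at
// the Core Data boundary.
func setPageCount(_ count: Int, on book: Book) {
    // Int32(exactly:) returns nil instead of trapping if the value doesn't fit.
    guard let value = Int32(exactly: count) else {
        preconditionFailure("pageCount \(count) does not fit into Int32")
    }
    book.pageCount = value
}

func pageCount(of book: Book) -> Int {
    Int(book.pageCount)  // widening to Int always succeeds
}
```
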
Cody Gray - on strike
yoAlex5
4

The number after "Int" refers to how many bits the type uses. For something really small, like a running count of how many objects in a list match a specific criterion, I normally use a UInt8, which is an integer with a maximum value of 255 (2^8 - 1) and a minimum value of 0, because it is unsigned (this is the difference between UInt8 and Int8). If it is signed (i.e. doesn't have a "U" prefixing the type name), then it can be negative. The same is true for Int16, Int32, and Int64. The benefit of using a smaller-sized Int type is not very large, so you don't really need to use these if you don't want to.
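
To make the signed/unsigned difference concrete, here is a minimal sketch of the ranges and of Swift's overflow behavior (the `counter` variable is just an example name):

```swift
// The bit width determines the range; the "U" prefix removes the sign bit.
print(UInt8.min, UInt8.max)    // 0 255
print(Int8.min, Int8.max)      // -128 127
print(Int16.min, Int16.max)    // -32768 32767
print(Int32.min, Int32.max)    // -2147483648 2147483647

// Swift traps on overflow rather than wrapping silently:
var counter = UInt8.max
// counter += 1                // runtime error: arithmetic overflow
counter = counter &+ 1         // the wrapping operator &+ rolls over to 0
print(counter)                 // 0
```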

C1FR1
  • 7
    On modern platforms, types smaller than a certain size (often 32 bits, sometimes 64 bits) will just use all 32 bits, so using a type smaller than this will just end up with wasted space (e.g. an Int8 will use 8 bits with 24 bits of padding). So on modern platforms, you're over-optimizing by doing this, _unless_ you're also using your Int8 as part of a SIMD vector, or packing arrays of Int8s up to send them to the GPU, or other similar high-performance many-value computation paths. _(P.S. When I learned this it broke my heart that all of my careful type-tuning was for naught.)_ – Slipp D. Thompson Dec 15 '18 at 05:49
3

There are no performance savings if you are running on a laptop or an iOS device with a 32-bit or 64-bit processor. Just use Int. The CPU doesn't use only 8 bits when you use Int8; it will use its entire bit width regardless of what you use, because the hardware is already in the chip.

Now, if you have an 8-bit CPU, then using Int32 would require the compiler to do a bunch of backflips and magic tricks to get the 32-bit Int to work.

Mehdi
  • 2
    This is not necessarily about performance, but about keeping semantics invariant to processor/architecture changes. – nbloqs Jan 26 '18 at 12:40
  • Your statement **the CPU will use its entire bit width regardless of what you use** is wrong. There are some architectures (e.g. supercomputers) that do not understand anything smaller than 64-bits, in which case the compiler must do gymnastics to process byte/string data. – John Hanley Oct 29 '22 at 09:30