
In the new Apple Swift documentation, it says:

Int

In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:

  • On a 32-bit platform, Int is the same size as Int32.
  • On a 64-bit platform, Int is the same size as Int64.

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
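For reference, a quick way to check what this means on a given platform (current Swift syntax; the comments describe a 64-bit machine):

print(MemoryLayout<Int>.size)    // 8 bytes on a 64-bit platform, 4 on a 32-bit one
print(Int.min, Int.max)          // same bounds as Int64 on a 64-bit platform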

I can understand that when calling APIs that are defined in terms of Int, you should use Int.

But for my own code, I've always been strict about using explicitly sized types from the stdint.h header in C; my thinking was that it would reduce ambiguity. However, the folks at Apple are pretty smart, and I'm wondering if I'm missing something, since this is not what they recommend.
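For illustration, here's the kind of choice I mean (hypothetical variable names, just a sketch):

// Style I'd use in C with <stdint.h>: explicit widths everywhere.
let headerSize: UInt16 = 64
let byteOffset: Int64 = 4_096

// Style Apple recommends: word-sized Int unless a specific width matters.
let itemCount: Int = 10
let currentIndex: Int = 0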

SuperDuperTango
  • You pretty much have their rationale right there. Opinions are divided on whether this is a good idea. – user2357112 Jun 07 '14 at 01:46
  • More on the subject: http://stackoverflow.com/questions/6315969/when-to-use-different-integer-types – R Sahu Jun 07 '14 at 01:52

3 Answers


This subject is not widely agreed upon.

The advantage of using a generic type is portability. The same piece of code will compile and run independent of the platform's word size. It may also be faster, in some cases.

The advantage of using specific types is precision. There is no room for ambiguity, and the exact capabilities of the type are known ahead of time.

There is no hard-and-fast answer. If you stick to either side of the matter for any and all purposes, you will sooner or later find yourself making an exception.
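To illustrate both sides (a hypothetical sketch, not tied to any particular API):

// Portability: counts and indices naturally follow the platform word size.
let names = ["Ada", "Brian", "Chris"]
let middle: Int = names.count / 2        // Array's count is already an Int

// Precision: a field defined by an external binary format has a fixed width,
// no matter which platform the code runs on.
let recordLength: UInt32 = 128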

salezica
  • +1 for the exception thought. – chux - Reinstate Monica Jun 07 '14 at 02:15
  • "compile and run" carries implicit assumptions; although I might *assume* that a 32-bit signed native word size is what the target will have, it might be only 16 bits and thus make my "smallish 20-bit integer value" overflow horribly - or does Swift have a minimum guarantee somewhere? – user2864740 Jun 07 '14 at 02:22
  • @user2864740 Swift is not formally specified, so there is no guaranteed minimum. But most probably, if it had to be formally specified, it would make the same choice most modern languages with similar characteristics have made: a guaranteed minimum of 31 bits. – Analog File Jun 07 '14 at 02:43

It is recommended where it is beneficial. For APIs, using the platform's word-sized Int makes sense because it provides the largest numeric range at the best performance.

However, in some fields (such as databases) an explicit, clearly bounded numeric range is more important. In those cases it's better to use an explicitly sized integer.

Apple also uses Int64 in Core Data and CloudKit, where clear numeric limits matter; both are database-related frameworks. If you weigh Apple's own practice heavily, this might be helpful.

Also, Swift is more C++-like: if you need more precise control over a numeric range, you can define your own numeric type with custom operators (a sketch of such a wrapper follows the declaration below). In fact, some of the integer types in Swift are themselves defined as structs.

For example, here's the declaration of Int16 from the Swift module:

struct Int16 : SignedInteger {
    var value: Builtin.Int16
    init()
    init(_ v: Builtin.Int16)
    init(_ value: Int16)
    static func convertFromIntegerLiteral(value: Int16) -> Int16
    typealias ArrayBoundType = Int16
    func getArrayBoundValue() -> Int16
    static var max: Int16 { get }
    static var min: Int16 { get }
}
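And here is a minimal sketch of such a custom wrapper type with its own operator (hypothetical names, not part of the standard library; written in current Swift syntax):

// A tiny wrapper that keeps values within the 16-bit range
// while staying convenient to use in expressions.
struct Clamped16 {
    var rawValue: Int16

    init(_ value: Int) {
        // Clamp into the Int16 range instead of trapping on overflow.
        self.rawValue = Int16(clamping: value)
    }
}

// A custom operator for the wrapper type.
func + (lhs: Clamped16, rhs: Clamped16) -> Clamped16 {
    return Clamped16(Int(lhs.rawValue) + Int(rhs.rawValue))
}

let a = Clamped16(40_000)    // clamps to Int16.max (32_767)
let b = Clamped16(1_000)
let c = a + b                // still clamped to Int16.max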
eonil

If you want to use your code on different platforms (16-bit, 32-bit, 64-bit), follow Apple's recommendation. Why? Let's take an example:

You decide to use Int64 for your return variable ret, which will mostly hold 0 or a negative value. That works fine on both 64-bit and 32-bit devices, although it takes a few extra instructions on the 32-bit machine. However, it could start to slow down execution on a 16-bit device, because the system needs four words to store your variable there.

If you had chosen a plain Int instead, the compiler would make your variable 32 bits wide on 32-bit devices, 64 bits wide on 64-bit machines, 16 bits wide on 16-bit machines, and so on. That fits naturally with the rest of the system (libraries, frameworks, whatever you work with), so it is not only a better fit within the system, but also easier to port to other platforms.

But my example only holds for return variables, i.e. variables that take a very limited range of values...
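To make the example concrete (hypothetical function names, just a sketch):

// Fixed width: the return value is always 64 bits, even on 32-bit hardware.
func openDeviceFixed() -> Int64 {
    return 0    // 0 for success, a negative code for failure
}

// Word-sized: the return value matches the platform's natural width.
func openDeviceNative() -> Int {
    return 0
}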

m-ric