This answer is for C but it's relevant here.
Generally, if you are working with integers you should just use the `int` type. This is the usual recommendation because most of the code we encounter deals with `int`, and in most situations you aren't really required to choose between `int` and `uint` at all.
"Don't use unsigned types to enforce or suggest that a number must be positive. That's not what they're for."
This is quite subjective. You can very well use them to keep your program and data type-safe, and spare yourself from handling the occasional errors that arise when a negative integer shows up.
"this is what The Go Programming Language recommends, with the specific example of uints being useful when you want to do bitwise operations"
This looks vague. Please add the source for this, I would like to read up on it.
x := -5
y := uint(x)
fmt.Println(y)
>> 18446744073709551611
This is typical of a number of languages. The logic behind it is that when you convert an `int` to a `uint`, the two's-complement binary representation of the `int` is simply reinterpreted as a `uint`. In the end, everything is just an abstraction over binary.
For example, take a look at this code and its output:
a := int64(-123)
byteSliceRev := *(*[8]byte)(unsafe.Pointer(&a)) // On a little-endian machine, the bytes read left to right in increasing order of significance
u := uint(a)
byteSliceRevU := *(*[8]byte)(unsafe.Pointer(&u))
byteSlice, byteSliceU := make([]byte, 8), make([]byte, 8)
for i := 0; i < 8; i++ {
byteSlice[i], byteSliceU[i] = byteSliceRev[7-i], byteSliceRevU[7-i]
}
fmt.Println(u)
// 18446744073709551493
fmt.Printf("%b\n", byteSlice)
// [11111111 11111111 11111111 11111111 11111111 11111111 11111111 10000101]
fmt.Printf("%b\n", byteSliceU)
// [11111111 11111111 11111111 11111111 11111111 11111111 11111111 10000101]
The byte representation of the `int64` value -123 is identical to that of the `uint` value 18446744073709551493.
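To see that this really is a pure reinterpretation of bits, note that the conversion round-trips, and that on a 64-bit platform the unsigned result is exactly 2^64 + a for negative a. A minimal sketch (using `uint64` explicitly to avoid depending on the platform's `uint` width):

```go
package main

import "fmt"

func main() {
	a := int64(-123)
	u := uint64(a) // same 64 bits, read as unsigned

	// For negative a, the unsigned value is 2^64 + a.
	fmt.Println(u == 1<<64-123) // true

	// Converting back reinterprets the same bits as signed again.
	fmt.Println(int64(u) == a) // true
}
```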
"So, my understanding is that I should always use int when dealing with whole numbers, regardless of sign, unless I find myself needing uint, and I'll know it when that's the case (I think???)."
But isn't this more or less true of all the code that "we" write?
"Is this the right takeaway? If so, why is this the case?"
I hope I have answered these two questions. Feel free to ask me if you still have any doubts.
"What's an example of when one should use uint? -- maybe a specific example, as opposed to 'when doing binary operations', as I'm not sure I know what that means :)"
Imagine a scenario in which you have a database table with a lot of entries, each keyed by an integer `id` that is always positive. If you store this data as an `int`, the sign bit of every entry is effectively wasted, and at scale you are losing space that you could have saved by just using a `uint`. A similar argument applies when transmitting data, especially large volumes of integers. Also, `uint` has double the range for positive integers compared to its signed counterpart thanks to that extra bit, so it takes longer to run out of numbers. Storage is cheap nowadays, though, so people generally ignore this admittedly minor gain.
The other use case is type safety. A `uint` can never be negative, so if a part of your code is sensitive to negative numbers, it can prove to be pretty handy. It's better to get the error before wasting resources on the data, only to find out it's impermissible because it's negative.
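As a hypothetical sketch of that fail-early idea: a function that takes a `uint` turns a negative constant argument into a compile-time error instead of something you have to validate at run time (the `newBuffer` name below is invented for illustration):

```go
package main

import "fmt"

// newBuffer is a hypothetical constructor whose size can never be
// negative: the uint parameter enforces that at the type level.
func newBuffer(size uint) []byte {
	return make([]byte, size)
}

func main() {
	b := newBuffer(16)
	fmt.Println(len(b)) // 16

	// newBuffer(-1) // does not compile: constant -1 overflows uint
}
```

Note that this guard only applies to constants; a negative `int` variable would still have to be explicitly converted, which at least makes the intent visible at the call site.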