
I just wrote a program (using ushorts throughout) to manipulate numbers, and this is part of the code:

ushort digitSum = firstDigit + secondDigit + thirdDigit + fourthDigit;
ushort reverseNumber = fourthDigit * 1000 + thirdDigit * 100 + secondDigit * 10 + firstDigit;
ushort lastDigitFirst = fourthDigit * 1000 + firstDigit * 100 + secondDigit * 10 + thirdDigit;
ushort middleDigitsSwitched = firstDigit * 1000 + thirdDigit * 100 + secondDigit * 10 + fourthDigit;

But every one of these lines gives me an int cast error. I searched, and I think this is the reason:

Why is ushort + ushort equal to int?

Why does C# throw casting errors when attempting math operations on integer types other than int?

It looks like performing operations on integer types other than int requires casts, because ushort + ushort = int.
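
For example, even a minimal snippet like this reproduces the error, because the compiler promotes both ushort operands to int before adding (variable names here are just for illustration):

ushort a = 1;
ushort b = 2;
// ushort bad = a + b;             // error CS0266: cannot implicitly convert type 'int' to 'ushort'
int sum = a + b;                   // fine: the result of ushort + ushort is int
ushort total = (ushort)(a + b);    // fine: explicit cast back to ushort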

My questions: Why do we have shorts and bytes at all? When memory was very limited, we had to worry about which type to use, but do we still need to worry about it now? Working with types smaller than int is annoying; can I just use ints and longs all the time, or is there still a use case for the smaller types today?

Edit: How do I cast these expressions to ushort?

Code_Steel

1 Answer


Is there a use case for smaller types today?

Yes, there is. Memory is still limited, especially when working with "big data", so halving your memory footprint by using a short instead of an int can be very valuable, particularly when you need to process or store billions of items.
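
As a rough sketch of the scale involved (the billion-element count is an illustrative assumption, and the figures count element storage only, not object overhead):

// ushort elements (2 bytes) are half the size of int elements (4 bytes),
// so the same billion-element data set needs half the storage.
const long Count = 1_000_000_000;

Console.WriteLine(Count * sizeof(int));    // 4000000000 bytes stored as int
Console.WriteLine(Count * sizeof(ushort)); // 2000000000 bytes stored as ushort

// int[] data = new int[1_000_000_000];       // would need roughly 4 GB
// ushort[] data = new ushort[1_000_000_000]; // same count in roughly 2 GB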

Edit: How do I do the cast on these to ushort?

You can just cast the whole expression back to ushort; note that the (ushort) cast wraps the entire sum, not the individual operands:

ushort digitSum = (ushort)(firstDigit + secondDigit + thirdDigit + fourthDigit);
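
Applying the same pattern to the other lines from your question would look like this (a sketch based on the expressions above):

ushort reverseNumber = (ushort)(fourthDigit * 1000 + thirdDigit * 100 + secondDigit * 10 + firstDigit);
ushort lastDigitFirst = (ushort)(fourthDigit * 1000 + firstDigit * 100 + secondDigit * 10 + thirdDigit);
ushort middleDigitsSwitched = (ushort)(firstDigit * 1000 + thirdDigit * 100 + secondDigit * 10 + fourthDigit);

// Note: the cast silently truncates if the result exceeds ushort.MaxValue (65535);
// wrap the expression in checked() to throw an OverflowException instead:
ushort safeSum = checked((ushort)(firstDigit + secondDigit + thirdDigit + fourthDigit));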
Reed Copsey