Using `int` as the default data type tends to make programming easier: it is large enough for most common use cases, and it limits the risk of someone making a mistake with unsigned arithmetic. It will often have a much greater range than is actually needed, but that is fine; memory is cheap, and CPUs may be optimized for accessing 32-bit chunks of data.
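As a rough sketch of that unsigned-arithmetic pitfall (the variable names here are purely illustrative), subtracting a larger `uint` from a smaller one silently wraps around instead of going negative:

```csharp
using System;

uint elapsed = 3;
uint timeout = 5;

// Wraps around to 4294967294 instead of going negative.
uint remaining = elapsed - timeout;

// With plain ints you get the -2 most people expect.
int remainingSigned = 3 - 5;

Console.WriteLine(remaining);       // 4294967294
Console.WriteLine(remainingSigned); // -2
```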
If you wanted the smallest data type that fits, a `byte` would be the most appropriate, but what would you gain from using it instead of an `int`? There might be a point if you had millions of values to store, but it would be rare to store something like seconds in that way.
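For a rough sense of scale (the array sizes and names below are hypothetical), the saving from `byte` only shows up when you store a very large number of values:

```csharp
// Ten million stored seconds: roughly 10 MB as bytes vs roughly 40 MB as ints.
byte[] secondsAsBytes = new byte[10_000_000];
int[] secondsAsInts = new int[10_000_000];

// For a single field holding 0-59, the saving is negligible.
byte second = 42;
int secondAsInt = 42;
```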
As mentioned in the comments, unsigned types are not CLS-compliant, so they limit compatibility with languages that do not support unsigned types. That cross-language compatibility matters for a core type like `DateTime`.
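A minimal sketch of what that looks like in practice, assuming a hypothetical `Clock` class in an assembly marked CLS-compliant:

```csharp
using System;

[assembly: CLSCompliant(true)]

public class Clock
{
    // Warning CS3001: Argument type 'uint' is not CLS-compliant.
    public void SetSeconds(uint seconds) { }

    // Fine: int is CLS-compliant and usable from any .NET language.
    public void SetSecondsCompliant(int seconds) { }
}
```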
You should also prefer specific types over primitives, for example using `TimeSpan` to represent, well, a span of time.
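For instance (the `cacheLifetime` name is just an example), a `TimeSpan` carries its unit with it, so there is no "is this seconds or milliseconds?" ambiguity the way there is with a bare `int`:

```csharp
using System;

TimeSpan cacheLifetime = TimeSpan.FromSeconds(90);

Console.WriteLine(cacheLifetime.TotalSeconds); // 90
Console.WriteLine(cacheLifetime.Minutes);      // 1
Console.WriteLine(cacheLifetime.Seconds);      // 30
```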