There is very little good reason to ever use int* rather than sint*. The existence of these extra types is most likely for historical, backwards compatibility reasons, which Protocol Buffers tries to maintain even across its own protocol versions.
My best guess is that in the earliest version they dumbly encoded negative integers in their 64-bit 2's complement representation, which requires the maximally sized varint encoding of 10 octets (not counting the extra type octet). Then they were stuck with that encoding so as not to break old code and serializations that already used it. So they needed to add a new encoding type, sint*, to get a better variable-sized encoding for negative numbers while not breaking existing code. How the designers didn't realize this issue from the get-go is utterly beyond me.
The 64-bit varint encoding (not counting the type octet, which requires 1 more) can encode an unsigned integer value in the following number of octets (a minimal encoder sketch follows the table):
[0, 2^7): one octet
[2^7, 2^14): two octets
[2^14, 2^21): three octets
[2^21, 2^28): four octets
[2^28, 2^35): five octets
[2^35, 2^42): six octets
[2^42, 2^49): seven octets
[2^49, 2^56): eight octets
[2^56, 2^63): nine octets
[2^63, 2^64): ten octets
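For concreteness, here is a minimal sketch of this base-128 varint scheme in Python (the function name and the small demo loop are mine, not from any protobuf library); each octet carries 7 payload bits, and the high bit marks that more octets follow:

```python
def varint_encode(n: int) -> bytes:
    """Encode an unsigned 64-bit integer as a base-128 varint:
    7 payload bits per octet, high bit set on every octet but the last."""
    assert 0 <= n < 1 << 64
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more octets follow
        else:
            out.append(byte)         # final octet
            return bytes(out)

# Octet counts straddling a few of the boundaries above:
for v in (0, (1 << 7) - 1, 1 << 7, (1 << 56) - 1, 1 << 56, 1 << 63):
    print(f"{v}: {len(varint_encode(v))} octet(s)")
```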
If you also want to encode small-magnitude negative integers compactly, then you need to "use up" one bit to indicate the sign. You can do this with an explicit sign bit (at some reserved position) plus a magnitude representation. Or, you can use zigzag encoding, which effectively does the same thing by left-shifting the magnitude by 1 bit and subtracting 1 for negative numbers (so the least significant bit indicates the sign: evens are non-negative, odds are negative).
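As an illustrative sketch (plain Python, assuming 64-bit signed inputs; not taken from any protobuf library), zigzag encoding and its inverse look like this:

```python
def zigzag_encode(n: int) -> int:
    """Map a signed 64-bit integer to an unsigned one so small magnitudes stay small:
    0, -1, 1, -2, 2, ...  ->  0, 1, 2, 3, 4, ..."""
    assert -(1 << 63) <= n < 1 << 63
    return (n << 1) ^ (n >> 63)  # n >> 63 is 0 for n >= 0 and -1 (all ones) for n < 0

def zigzag_decode(z: int) -> int:
    """Inverse mapping: the low bit carries the sign, the remaining bits the magnitude."""
    return (z >> 1) ^ -(z & 1)
```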
Either way, the cut-over points at which positive integers require more space now come a factor of 2 earlier (a quick check follows the table):
[0, 2^6): one octet
[2^6, 2^13): two octets
[2^13, 2^20): three octets
[2^20, 2^27): four octets
[2^27, 2^34): five octets
[2^34, 2^41): six octets
[2^41, 2^48): seven octets
[2^48, 2^55): eight octets
[2^55, 2^62): nine octets
[2^62, 2^63): ten octets
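Chaining the two sketches above (so this assumes the illustrative varint_encode and zigzag_encode helpers from earlier), a quick check confirms where the cut-overs land:

```python
def sint_size(n: int) -> int:
    """Octets needed for a signed value under zigzag + varint (sint*-style) encoding."""
    return len(varint_encode(zigzag_encode(n)))

for v in ((1 << 6) - 1, 1 << 6, (1 << 62) - 1, 1 << 62, -(1 << 6), -1):
    print(f"{v}: {sint_size(v)} octet(s)")
# 2^6 - 1 -> 1, 2^6 -> 2, 2^62 - 1 -> 9, 2^62 -> 10, -2^6 -> 1, -1 -> 1
```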
To make the case for int* over sint*, negative numbers would have to be extremely rare (but still possible), and/or the most common positive values you expect to encode would have to fall right around one of the cut-over points where sint* needs more octets than int* (e.g., values in [2^6, 2^7) take one octet as int* but two as sint*, i.e. 2x the encoding size).
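To make that concrete with the same illustrative helpers (int* varint-encodes a negative value's 64-bit 2's complement bit pattern, which is why it balloons to 10 octets):

```python
def int_size(n: int) -> int:
    """Octets needed for a signed value under plain int*-style varint encoding."""
    return len(varint_encode(n & ((1 << 64) - 1)))  # negatives become their 64-bit 2's complement pattern

for v in (100, -1):
    print(f"{v}: int* = {int_size(v)} octet(s), sint* = {sint_size(v)} octet(s)")
# 100 (in [2^6, 2^7)): int* = 1, sint* = 2
# -1:                  int* = 10, sint* = 1
```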
Basically, if you are going to have numbers where some may be negative, then by default use sint* rather than int*. int* will very rarely be superior, and IMHO it usually isn't even worth the extra thought required to judge whether it is.