
One thing that bugs me about the regular C integer declarations is that their names are strange, "long long" being the worst. I am only building for 32- and 64-bit machines, so I do not necessarily need the portability that the library offers; however, I like that the name for each type is a single word of similar length with no ambiguity in size.

// multiple word types are hard to read
// long integers can be 32 or 64 bits depending on the machine

unsigned long int foo = 64;
long int bar = -64;

// easy to read
// no ambiguity

uint64_t foo = 64;
int64_t bar = -64;

On 32 and 64 bit machines:

1) Can using a smaller integer such as int16_t be slower than a wider one such as int32_t?

2) If I needed a for loop to run just 10 times, is it ok to use the smallest integer that can handle it instead of the typical 32 bit integer?

for (int8_t i = 0; i < 10; i++) {

}

3) Whenever I use an integer that I know will never be negative, is it ok to prefer using the unsigned version even if I do not need the extra range it provides?

// instead of the one above
for (uint8_t i = 0; i < 10; i++) {

}

4) Is it safe to use a typedef for the types included from stdint.h?

typedef int32_t signed_32_int;
typedef uint32_t unsigned_32_int;

edit: both answers were equally good and I couldn't really lean towards one so I just picked the answerer with lower rep

YeOldeBitwise
  • have you had a look at http://stackoverflow.com/questions/9834747/reasons-to-use-or-not-stdint ? – Nic Apr 11 '17 at 04:42
  • Possible duplicate of [Reasons to use (or not) stdint](http://stackoverflow.com/questions/9834747/reasons-to-use-or-not-stdint) – Alex Lop. Apr 11 '17 at 04:52
  • They are similar; however, I had some specific questions about how to implement them that were not clearly answered in that post. Also, the top answer from that post makes it sound like you can't go wrong with fixed types, but that is not necessarily true according to the two answers below, which have already cleared it up. – YeOldeBitwise Apr 11 '17 at 04:56
  • http://stackoverflow.com/questions/163254/on-32-bit-cpus-is-an-integer-type-more-efficient-than-a-short-type – Support Ukraine Apr 11 '17 at 04:58

3 Answers


Can using a smaller integer such as int16_t be slower than a wider one such as int32_t?

Yes. Some CPUs do not have dedicated 16-bit arithmetic instructions; arithmetic on 16-bit integers must be emulated with an instruction sequence along the lines of:

r1 = r2 + r3
r1 = r1 & 0xffff

The same principle applies to 8-bit types.

Use the "fast" integer types in <stdint.h> to avoid this -- for instance, int_fast16_t will give you an integer that is at least 16 bits wide, but may be wider if 16-bit types are nonoptimal.

If I needed a for loop to run just 10 times, is it ok to use the smallest integer that can handle it instead of the typical 32 bit integer?

Don't bother; just use int. Using a narrower type doesn't actually save any space, and may cause you issues down the line if you decide to increase the number of iterations to over 127 and forget that the loop variable is using a narrow type.
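
As a quick sketch of that failure mode (the conversion back to a narrow signed type on overflow is implementation-defined, but on common compilers this loop never terminates on its own):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Intended to run 200 times, but INT8_MAX is 127: when i reaches 127,
       i++ converts 128 back to int8_t (implementation-defined, usually
       wrapping to -128), so "i < 200" never becomes false. The guard
       below exists only so this demo terminates. */
    int iterations = 0;
    for (int8_t i = 0; i < 200; i++) {
        if (++iterations > 300) {
            printf("still looping after %d iterations, i = %d\n", iterations, i);
            break;
        }
    }
    return 0;
}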

Whenever I use an integer that I know will never be negative, is it ok to prefer using the unsigned version even if I do not need the extra range it provides?

Best avoided. Certain C idioms do not work properly on unsigned integers; for instance, you cannot write a loop of the form:

for (i = 100; i >= 0; i--) { … }

if i is an unsigned type, because i >= 0 will always be true!
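
If you do find yourself counting down with an unsigned type, one widely used workaround (not specific to this answer, just a common C idiom) is to test before decrementing:

#include <stdio.h>

int main(void) {
    /* Visits 100, 99, ..., 0 even though i is unsigned: the test
       "i-- > 0" happens before the decrement, so the body never sees
       the wrapped-around value and the loop stops after i == 0. */
    for (unsigned int i = 101; i-- > 0; ) {
        printf("%u\n", i);
    }
    return 0;
}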

Is it safe to use a typedef for the types included from stdint.h?

Safe from a technical perspective, but it'll annoy other developers who have to work with your code.

Get used to the <stdint.h> names. They're standardized and reasonably easy to type.

  • "using a narrower type [...]" - and using a type wider than processor's register size can even slow down the program... "just use int" right in general, one exception: if you operate on large values that possibly won't fit into int, one might prefer the [...]fast[...] types in such a case. "Best avoided" - again an exception: for(i = 0; i < ref; ++i) - if ref is unsigned for any reason, you avoid comparing signed and unsigned values, if i is unsigned, too. – Aconcagua Apr 11 '17 at 05:04

1) Can using a smaller integer such as int16_t be slower than a wider one such as int32_t?

Yes, it can be slower. Use int_fast16_t instead. Profile the code as needed; performance is very implementation-dependent. A prime benefit of int16_t is its small, well-defined size (it must also be 2's complement, with no padding bits) as used in structures and arrays, not so much for speed.

The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. (C11 §7.20.1.3 ¶2)
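
To make the size point concrete, here is a small sketch; the struct names are made up for illustration, and the "fast" sizes will vary by platform:

#include <stdint.h>
#include <stdio.h>

struct sample_exact { int16_t a, b; };       /* each field is exactly 16 bits     */
struct sample_fast  { int_fast16_t a, b; };  /* field width is up to the platform */

int main(void) {
    /* int16_t fields are exactly 16 bits everywhere the type exists;
       the "fast" fields may be wider but cheaper to operate on. */
    printf("exact struct: %zu bytes, fast struct: %zu bytes\n",
           sizeof(struct sample_exact), sizeof(struct sample_fast));
    printf("1000 x int16_t:      %zu bytes\n", sizeof(int16_t[1000]));
    printf("1000 x int_fast16_t: %zu bytes\n", sizeof(int_fast16_t[1000]));
    return 0;
}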


2) If I needed a for loop to run just 10 times, is it ok to use the smallest integer that can handle it instead of the typical 32 bit integer?

Yes, but the savings in code size and speed are questionable. Suggest int instead; emitted code tends to be optimal in speed/size with the native int size.


3) Whenever I use an integer that I know will never be negative is it OK to prefer using the unsigned version even if I do not need the extra range it provides?

Using some unsigned type is preferred when the math is strictly unsigned (such as array indexing with size_t), yet code needs to watch for careless applications like

for (unsigned i = 10; i >= 0; i--) // infinite loop: i >= 0 is always true
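
For contrast, a small sketch of the kind of strictly unsigned math where size_t is the natural choice (the sum helper is hypothetical, just for illustration):

#include <stddef.h>
#include <stdio.h>

/* The index only moves forward, and size_t matches what sizeof and
   strlen return, so there is no signed/unsigned mismatch to worry about. */
static double sum(const double *values, size_t count) {
    double total = 0.0;
    for (size_t i = 0; i < count; i++) {
        total += values[i];
    }
    return total;
}

int main(void) {
    double data[] = { 1.0, 2.5, 3.5 };
    printf("%f\n", sum(data, sizeof data / sizeof data[0]));
    return 0;
}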

4) Is it safe to use a typedef for the types included from stdint.h?

Almost always. Types like int16_t are optional. Maximum portability uses the required types uint_least16_t and uint_fast16_t for code to run on rare platforms that use bit widths like 9, 18, etc.
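
A rough sketch of what that looks like in practice (the printed sizes are implementation-specific):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* uint_least16_t and uint_fast16_t are required to exist on every
       conforming implementation, even where an exact 16-bit type is not
       available; both are at least 16 bits wide and may be wider. */
    uint_least16_t stored  = 40000;  /* smallest type with >= 16 value bits */
    uint_fast16_t  counter = 40000;  /* fastest type with >= 16 value bits  */
    printf("least: %zu bytes, fast: %zu bytes\n", sizeof stored, sizeof counter);
    return 0;
}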

chux - Reinstate Monica
  • Regarding the optional: "These types are optional. *However,* if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two’s complement representation, it shall define the corresponding typedef names." (§ 7.20.1.1/3 N1570 "quasi C11"). I think **that** matches almost any implementation... (nowadays) – Daniel Jour Apr 11 '17 at 05:59
  • I would strongly discourage using 'fast' types in new code unless you have a specific reason. On most modern 64-bit architectures they will generally promote anything above the size of a char to a 64-bit wide type, the performance implications of which are complex and in many cases will actually slow your code down. – Chuu Nov 08 '22 at 23:15
  1. Absolutely possible, yes. On my laptop (Intel Haswell), in a microbenchmark that counts up and down between 0 and 65535 on two registers 2 billion times, this takes

    1.313660150s - ax dx (16-bit)
    1.312484805s - eax edx (32-bit)
    1.312270238s - rax rdx (64-bit)
    

    Minuscule but repeatable differences in timing. (I wrote the benchmark in assembly, because C compilers may optimize it to a different register size.)

  2. It will work, but you'll have to keep it up to date if you change the bounds, and the C compiler will probably optimize it to the same assembly code anyway.

  3. As long as it's correct C, that's totally fine. Keep in mind that unsigned overflow is defined and signed overflow is undefined, and compilers do take advantage of that for optimization. For example,

    void foo(int start, int count) {
        for (int i = start; i < start + count; i++) {
            // With unsigned arithmetic, this will execute 0 times if
            // "start + count" overflows to a number smaller than "start".
            // With signed arithmetic, that may happen, or the compiler
            // may assume this loop always runs "count" times.
            // For defined behavior, avoid signed overflow.
        }
    }

  4. Yes. Also, <inttypes.h> (standard C since C99, and also provided by POSIX) extends <stdint.h> with some useful format macros and functions; a short example follows below.
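
A small sketch of what <inttypes.h> adds on top of <stdint.h> (printf/scanf format macros and widest-type string conversions):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* PRId64 / PRIu32 expand to the correct printf length modifiers for
       the fixed-width types on the current platform. */
    int64_t  big   = -1234567890123LL;
    uint32_t small = 4000000000U;
    printf("big = %" PRId64 ", small = %" PRIu32 "\n", big, small);

    /* strtoimax (declared in <inttypes.h>) parses into intmax_t. */
    intmax_t parsed = strtoimax("-42", NULL, 10);
    printf("parsed = %" PRIdMAX "\n", parsed);
    return 0;
}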

ephemient