
I have started using the OpenCL library lately, and I've noticed that it uses its own integer types, like cl_int and cl_uint, instead of int and unsigned int.

Why is that? Why don't they use the types that the language provides by default? Is it just good practice, or are there practical reasons for it (e.g. more readable code)?

Addy

2 Answers


The reason this has been done in the past is portability. C and C++ do not guarantee specific sizes for int, long, and short, while library designers often need fixed sizes.

A common solution is for a library to define its own aliases for data types and change their definitions based on the specific platform, making sure that a type of the appropriate size gets used.

This problem originated in C and has been addressed by the introduction of the stdint.h header (available as <cstdint> in C++). Including this header lets you declare variables of types such as int32_t and int16_t. However, libraries developed prior to the introduction of stdint.h, and libraries that must compile on platforms lacking this header, still use the old workaround.
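
For example, when <cstdint> is available, code can use the exact-width types directly. A minimal sketch (nothing here is OpenCL-specific):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Exact-width types from <cstdint>: their sizes are the same on
    // every conforming implementation that provides them.
    std::int32_t  counter = -42;   // exactly 32 bits, signed
    std::uint16_t port    = 8080;  // exactly 16 bits, unsigned

    std::printf("counter uses %zu bytes, port uses %zu bytes\n",
                sizeof counter, sizeof port);
    return 0;
}
```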

Sergey Kalinichenko
  • Much clearer now, thanks. Could you refer me to a specific example of how you would define your own integer type that is, let's say, unsigned and 16 bits long? – Addy Jun 12 '15 at 12:39
  • @Addy This is usually done with conditional compilation (`#ifdef`) and `typedef`s in one of the headers. For OpenCL that's [cl_platform.h](https://www.khronos.org/registry/cl/api/1.1/cl_platform.h). Search the file for `cl_uint` to see how it is defined based on the platform. – Sergey Kalinichenko Jun 12 '15 at 12:44
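
To illustrate the pattern described in that comment, here is a minimal sketch of a conditionally compiled 16-bit unsigned alias. The macro checks and the name `my_uint16` are illustrative only, not the actual contents of cl_platform.h:

```cpp
// Illustrative only: pick a 16-bit unsigned type per compiler/platform.
#if defined(_MSC_VER)
typedef unsigned __int16  my_uint16;   // MSVC's sized-integer extension
#elif defined(__GNUC__)
typedef unsigned short    my_uint16;   // short is 16 bits on GCC's common targets
#else
#include <stdint.h>
typedef uint16_t          my_uint16;   // fall back to the standard header
#endif

// Compile-time sanity check (C++11 and later).
static_assert(sizeof(my_uint16) == 2, "my_uint16 must be exactly 16 bits");
```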

By defining their own types, library authors can safely rely on those types always being the same size.

Type sizes can vary from platform to platform and compiler to compiler. Although the standard library does provide <cstdint>, some developers prefer their own definitions because they don't wish to depend on it.

Most of the time you can assume that int will be 32 bits in size, but that isn't guaranteed, so some developers prefer to define their own reliable types on the off chance that it isn't.
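
If code does rely on that assumption, one option is to make it explicit with a compile-time check; a hedged sketch (the alias name `my_int32` is made up here):

```cpp
#include <climits>
#include <cstdint>

// Fail the build immediately if the platform breaks the "int is 32 bits"
// assumption, instead of silently misbehaving at run time.
static_assert(sizeof(int) == 4 && CHAR_BIT == 8,
              "this code assumes a 32-bit int with 8-bit bytes");

// A library-style alias keeps the rest of the code independent of int's size.
typedef std::int32_t my_int32;
```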

Mark A. Ropper