First of all, what do I mean by "correct definition"?
For example, K&R in "The C Programming Language", 2nd ed., section 2.2 Data Types and Sizes, make very clear statements about integers:
- There are `short`, `int` and `long` integer types. They are needed to represent values of different ranges. `int` is the "naturally" sized number for the specific hardware, so it is probably also the fastest.
- The sizes of the integer types `short`, `int` and `long` are purely implementation-dependent.
- But there are restrictions: `short` and `int` shall hold at least 16 bits, `long` shall hold at least 32 bits, and `short` <= `int` <= `long`.
That's very clear and unambiguous. And that is not the case for the `size_t` type. In K&R 5.4 Address Arithmetic, they say:

- ... `size_t` is the unsigned integer type returned by the `sizeof` operator.
- The `sizeof` operator yields the number of bytes required to store an object of the type of its operand.
In the C99 standard draft, in 6.5.3.4 The sizeof operator, they say:

- The value of the result is implementation-defined, and its type (an unsigned integer type) is `size_t`, defined in `<stddef.h>` (and other headers).
In 7.17 Common definitions:

- `size_t` which is the unsigned integer type of the result of the sizeof operator;
In 7.18.3 Limits of other integer types:

- limit of `size_t`: `SIZE_MAX` 65535
There is also a useful article, "Why size_t matters". It says the following:

- Okay, let's try to imagine what it would be like if there were no `size_t`.
- For example, let's take the standard function `void *memcpy(void *s1, void const *s2, size_t n);` from `<string.h>`.
- Let's use `int` instead of `size_t` for the `n` parameter.
- But the size of memory can't be negative, so we'd better take `unsigned int`.
- Good, it seems we are happy now even without `size_t`.
- But `unsigned int` has a limited range: what if there is some machine that can copy chunks of memory larger than `unsigned int` can hold?
- Okay, let's use `unsigned long` then. Are we happy now?
- But for those machines which operate with smaller memory chunks, `unsigned long` would be inefficient, because `long` is not "natural" for them; they must perform additional operations to work with `long`s.
- So that's why we need `size_t`: to represent the size of memory that particular hardware can operate on at once. On some machines it will be equal to `int`, on others to `long`, depending on which type they are most efficient with.
What I understand from this is that `size_t` is strictly bound to the `sizeof` operator, and therefore `size_t` represents the maximum size of an object in bytes. It might also represent the number of bytes that a particular CPU model can move at once.
But there is still much mystery here for me:

- What is an "object" in terms of C?
- Why is it limited to 65535, the maximum number that can be represented by 16 bits? The article on embedded.com says that `size_t` could be 32-bit too.
- K&R say that `int` has the "natural" size for the platform, and `size_t` can be equal to `int` or to `long`. So why not use `int` instead of `size_t` if it's "natural"?
UPDATE

There is a similar question:

But the answers to it don't provide a clear definition or links to authoritative sources (if we don't count Wikipedia as such).

I want to know when to use `size_t`, when not to use `size_t`, why it was introduced, and what it really represents.