9

When to use size_t vs uint32_t? I saw a method in a project that receives a parameter called length (of type uint32_t) to denote the length of the byte data to process; the method calculates the CRC of the bytes it receives. The type of the parameter was later refactored to size_t. Is there a technical advantage to using size_t in this case?

e.g.

- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(uint32_t)length;

- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length;
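
For more context: in the implementation, length is only used as the loop bound; the CRC arithmetic itself is all 16-bit. A simplified bitwise sketch of what such a method does (assuming the common CRC-16/ARC variant, polynomial 0x8005 reflected; the project's real code may be table-driven and differ in detail):

#include <stddef.h>
#include <stdint.h>

/* Simplified bitwise CRC-16 (assumed CRC-16/ARC variant): length only
   bounds the loop, while the CRC math itself never exceeds 16 bits */
static uint16_t crc16(const unsigned char *bytes, size_t length)
{
    uint16_t crc = 0x0000;                        /* CRC-16/ARC initial value */
    for (size_t i = 0; i < length; i++) {         /* length is just the loop bound */
        crc ^= bytes[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 /* 0x8005, reflected */
                            : (crc >> 1);
    }
    return crc;
}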
Boon
  • 5
    AFAICS, there'd be no meaningful reason to do this. A given CRC is defined to work at a particular word size, and that's exactly what `uint32_t` is. From a semantic point-of-view, `size_t` doesn't correspond to an explicit size. – Oliver Charlesworth Feb 23 '15 at 22:02
  • I'm guessing it's because you'd want `size_t` to be the largest data type the platform can support natively (i.e. you want a large range while remaining fast). E.g. on a 32 bit system, you'd want it to be 32 bits and on a 64 bit system, you'd want it to be 64 bits but you wouldn't want `size_t` to be a 64 bit type on a 32 bit system (or a `uint32_t` on a 16 bit system). Otherwise, it's probably just a good indicator to show that the parameter represents some sort of size quantity. I don't know why they specifically chose to use `size_t` here though. – tangrs Feb 23 '15 at 22:07
  • Thanks all - I added more details to the question. – Boon Feb 23 '15 at 22:17
  • @tangrs so per your rationale, is it better to use size_t over NSUInteger and are they technically the same? – Boon Feb 23 '15 at 22:18
  • 4
    Ok, with the added context, this makes a lot more sense. The `size_t` isn't being used for the CRC maths itself, it's just the loop bound. That's totally reasonable. – Oliver Charlesworth Feb 23 '15 at 22:28

1 Answer

10

According to the C specification

size_t ... is the unsigned integer type of the result of the sizeof operator

So any variable that holds the result of a sizeof operation should be declared as size_t. Since the length parameter in the sample prototype could be the result of a sizeof operation, it is appropriate to declare it as a size_t.

e.g.

unsigned char array[2000] = { 1, 2, 3 /* ... */ };
/* sizeof(array) evaluates to a size_t, so it matches the refactored parameter with no conversion */
uint16_t result = [self calculateCRC16FromBytes:array length:sizeof(array)];

You could argue that the refactoring of the length parameter was pointlessly pedantic, since you'll see no difference unless both:
a) size_t is more than 32 bits, and
b) the size of the array is more than 4GB
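
For example, on a hypothetical LP64 platform (64-bit size_t), a length just over 4GB would be silently truncated by the old uint32_t signature, while the size_t version passes it through intact. A minimal sketch of the truncation (the variable names are illustrative only):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    size_t len = (size_t)0x100000001ULL;   /* hypothetical length just over 4GB */
    uint32_t truncated = (uint32_t)len;    /* what the old uint32_t parameter would receive */
    printf("%zu -> %" PRIu32 "\n", len, truncated); /* on LP64 prints: 4294967297 -> 1 */
    return 0;
}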

user3386109
  • **BUT**: You may run on a system where `size_t` is 16 or 64 bits. Its size is implementation-defined. `uint32_t`, OTOH, is guaranteed to be exactly 32 bits wide. A CRC has a specified size. Running a 32-bit CRC on a 16-bit `size_t` will cause problems. – Cole Tobin Feb 23 '15 at 23:17
  • @ColeJohnson Yes and no. First please note that it was the `length` parameter that was changed. You can apply a CRC to any `length` of data. So the only question is whether the `length` of the input should be specified as a 32-bit number always, or should be specified as a `size_t`. You could argue that `size_t` won't work if a) size_t is 16-bits b) the buffer is more than 64KB. However, that assumes that there are 16-bit systems that run Objective-C code. – user3386109 Feb 23 '15 at 23:30
  • 1
    size_t will always be a type that can hold the size of the largest array your system can handle, so even that last argument is moot. – Darryl Jan 12 '23 at 18:20