When I need to buffer some raw data in memory, for example from a stream, is it better to use an array of `char` or of `unsigned char`? I have always used `char`, but at work they are saying it is better to use `unsigned char`, and I don't know why.
-
if it's a string stream then it'd be OK to use a `char` array. For other numerical raw data (for example, hexadecimal, bits), it's best to use `unsigned` variables so that you don't have to deal with the sign bit – Iosif Murariu Jun 12 '14 at 09:43
7 Answers
UPDATE: C++17 introduced `std::byte`, which is more suited to "raw" data buffers than using any manner of `char`.
For earlier C++ versions:
- `unsigned char` emphasises that the data is not "just" text; if you've got what's effectively "byte" data from e.g. a compressed stream, a database table backup file, an executable image, a jpeg... then `unsigned` is appropriate for the binary-data connotation mentioned above
- `unsigned` works better for some of the operations you might want to do on binary data, e.g. there are undefined and implementation-defined behaviours for some bit operations on signed types, and `unsigned` values can be used directly as indices in arrays
- you can't accidentally pass an `unsigned char*` to a function expecting `char*` and have it operated on as presumed text
- in these situations it's usually more natural to think of the values as being in the range 0..255; after all, why should the "sign" bit have a different kind of significance to the other bits in the data?
- if you're storing "raw data" that, at an application logic/design level, happens to be 8-bit numeric data, then by all means choose either `unsigned` or explicitly `signed char` as appropriate to your needs (a short sketch of both buffer options follows this list)
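Here is a minimal sketch of both buffer options, assuming the data comes from a hypothetical file `data.bin`; it is only an illustration of the element types, not a complete I/O routine:

```cpp
#include <cstddef>   // std::byte (C++17)
#include <fstream>
#include <vector>

int main() {
    std::ifstream in("data.bin", std::ios::binary);

    // Pre-C++17 style: a buffer of unsigned char for "byte" data.
    std::vector<unsigned char> buffer(1024);
    in.read(reinterpret_cast<char*>(buffer.data()),
            static_cast<std::streamsize>(buffer.size()));

    // C++17 style: std::byte makes the "not text" intent explicit.
    std::vector<std::byte> raw(1024);
    in.read(reinterpret_cast<char*>(raw.data()),
            static_cast<std::streamsize>(raw.size()));
}
```

Note that `std::istream::read` takes a `char*`, so a `reinterpret_cast` is needed whichever byte type you choose for the buffer.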

Internally, it is exactly the same: each element is a byte. The difference shows up when you operate on those values.
If your values' range is [0,255] you should use `unsigned char`, but if it is [-128,127] then you should use `signed char`.
Suppose you use the first range (i.e. `unsigned char`): then you can perform the operation `100+100` and store the result. With the signed type, that result does not fit in the range, and storing it gives you an unexpected value.
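A minimal sketch of that point, assuming the usual 8-bit, two's-complement byte types found on common platforms:

```cpp
#include <iostream>

int main() {
    unsigned char u = 100;
    signed char   s = 100;

    // The sums themselves are computed as int (200 in both cases); the
    // difference appears when the result is stored back into the byte type.
    unsigned char usum = u + u;   // 200 fits in [0, 255]
    signed char   ssum = s + s;   // 200 does not fit in [-128, 127]; the stored
                                  // value is implementation-defined before C++20
                                  // (it typically wraps around to -56)

    std::cout << static_cast<int>(usum) << '\n';  // 200
    std::cout << static_cast<int>(ssum) << '\n';  // likely -56
}
```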
Depending on your compiler or machine type, plain `char` may be unsigned or signed by default:
Is char signed or unsigned by default?
Plain `char` therefore has one of the two ranges described above.
If you are using this buffer just to store binary data without operating on it, there is no difference between using `char` or `unsigned char`.
EDIT
Note that you can even change the default `char` for the same machine and compiler using compiler flags:
-funsigned-char  Let the type char be unsigned, like unsigned char.
Each kind of machine has a default for what char should be. It is either like unsigned char by default or like signed char by default. Ideally, a portable program should always use signed char or unsigned char when it depends on the signedness of an object. But many programs have been written to use plain char and expect it to be signed, or expect it to be unsigned, depending on the machines they were written for. This option, and its inverse, let you make such a program work with the opposite default.
The type char is always a distinct type from each of signed char or unsigned char, even though its behavior is always just like one of those two.
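For example, a tiny probe program shows which default your toolchain uses and how the flag quoted above flips it (the file name `probe.cpp` is just an example):

```cpp
#include <iostream>
#include <limits>

int main() {
    // Reports whether plain char is signed for this compiler and target.
    // Build once as `g++ probe.cpp` and once as `g++ -funsigned-char probe.cpp`
    // to see the default change.
    std::cout << "char is "
              << (std::numeric_limits<char>::is_signed ? "signed" : "unsigned")
              << '\n';
}
```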

-
You assume `char` is signed. So the "range" and "overflow" parts are not necessarily true. – P.P Jun 12 '14 at 09:51
-
2"if it is [-127,127] use `char`." `char` might be unsigned too, if you need signedness, use `signed char`. "... give you a negative number." Maybe, maybe not, signed overflow is UB. – Baum mit Augen Jun 12 '14 at 09:53
-
@BaummitAugen It is true but in that case OP should not expect to get the desired value. – Pablo Francisco Pérez Hidalgo Jun 12 '14 at 09:56
As far as the structure of the buffer is concerned, there is no difference: in both cases you get an element size of one byte, mandated by the standard.
Perhaps the most important difference is the behavior that you see when accessing the individual elements of the buffer, for example, for printing. With `char` you get implementation-defined signed or unsigned behavior; with `unsigned char` you always see unsigned behavior. This becomes important if you want to print the individual bytes of your "raw data" buffer (a small sketch follows below).
Another good alternative for buffers is the exact-width integer `uint8_t`. It is guaranteed to have the same width as `unsigned char`, its name requires less typing, and it tells the reader that you do not intend to use the individual elements of the buffer as character-based information.
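A minimal illustration of that printing difference, assuming a platform where plain `char` is signed:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    char          c = static_cast<char>(0x80);  // plain char: signed on many platforms
    unsigned char u = 0x80;                     // always unsigned
    std::uint8_t  b = 0x80;                     // exact-width, also unsigned

    // The same bit pattern prints very differently as an integer:
    std::cout << static_cast<int>(c) << '\n';   // likely -128 if char is signed
    std::cout << static_cast<int>(u) << '\n';   // always 128
    std::cout << static_cast<int>(b) << '\n';   // always 128
}
```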

As @Pablo said in his answer, the key reason is that if you're doing arithmetic on the bytes, you'll get the 'right' answers if you declare the bytes as `unsigned char`: you want (in Pablo's example) 100 + 100 to add up to 200; if you do that sum with `signed char` (which you might do by accident if `char` on your compiler is signed) there's no guarantee of that – you're asking for trouble.
Another important reason is that it can help document your code, if you're explicit about what datatypes are what. It's useful to declare
    typedef unsigned char byte;
or even better
    #include <stdint.h>
    typedef uint8_t byte;
Using `byte` thereafter makes it that little bit clearer what your program's intent is. Depending on how paranoid your compiler is (`-Wall` is your friend), this might even cause a type warning if you pass a `byte*` argument where a `char*` parameter is expected, thus prompting you to think slightly more carefully about whether you're doing the right thing (a short sketch of this follows below).
A 'character' is fundamentally a pretty different thing from a 'byte'. C happens to blur the distinction (because at C's level, in a mostly ASCII world, the distinction doesn't matter in many cases). This blurring isn't always helpful, but it's at least good intellectual hygiene to keep the difference clear in your head.
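A hedged sketch of that idea; the functions `process_text` and `process_raw` are made up purely for illustration:

```cpp
#include <stdint.h>
#include <string.h>

typedef uint8_t byte;   /* documents intent: raw data, not text */

static void process_text(char *s)          { (void)s;         /* expects a string  */ }
static void process_raw(byte *b, size_t n) { memset(b, 0, n); /* expects raw bytes */ }

int main(void) {
    byte buffer[16];
    process_raw(buffer, sizeof buffer);
    /* process_text(buffer);  -- a C++ compiler rejects this outright (no implicit
       conversion from unsigned char* to char*), and a C compiler will typically
       warn about incompatible pointer types: exactly the accidental mix-up the
       byte typedef helps the compiler catch for you. */
    return 0;
}
```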

It is usually better to use `char`, but it makes so little difference that it does not matter. It's raw data, so you should simply be passing it around as such rather than trying to work with it via `char` pointers of one type or another. Since `char` is the native data type, it makes most sense to use it rather than imagining you are forcing your data into one type or another.

The difference is the range: `signed char` runs from -128 to +127, whose non-negative part covers the valid ASCII characters, while `unsigned char` runs from 0 to 255.
You can find the complete difference between `char` and `unsigned char` in this question:
diff bet char and unsigned char
and you can see the table here.
If you are able to work with C++17, there is a `std::byte` type that is more appropriate for working with raw data: it only has the bitwise and shift operators defined for it, so it cannot be accidentally treated as text or used in arithmetic.
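A minimal sketch of what that buys you, assuming a C++17 compiler:

```cpp
#include <cstddef>
#include <iostream>

int main() {
    std::byte b{0b1010'0001};

    b &= std::byte{0x0F};   // bitwise AND keeps the low nibble
    b <<= 1;                // shifts are also defined for std::byte

    // b + 1 or b * 2 would not compile: std::byte has no arithmetic operators,
    // which is the point - it is a byte of raw data, not a small integer.
    std::cout << std::to_integer<int>(b) << '\n';   // prints 2
}
```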
