0

So, where can unsigned char be useful? If I understood right, unsigned char can represent numbers from -128 to 127. But every encoding table uses positive numbers. So, unsigned char can't be used for representing characters. Am I right?

  • 1
    `unsigned char` represents `0 - 255`. You're thinking of regular `char`, which is signed. – wkl Mar 20 '14 at 19:26
  • unsigned means it does not have a sign. `-` is a sign. Thus it is only positive from 0 to 255. – Hogan Mar 20 '14 at 19:27
  • 1
    See the question and answers at http://stackoverflow.com/questions/2054939/char-is-signed-or-unsigned-by-default for some more insights about this. – Floris Mar 20 '14 at 19:28
  • @birryree: Isn't the standard `char` implementation-defined? – deviantfan Mar 20 '14 at 19:30
  • My mistake. I asked about signed char, of course. – user3379285 Mar 20 '14 at 19:31
  • @deviantfan -- it is, but if you go by gcc then signed is right. – Hogan Mar 20 '14 at 19:32
  • @deviantfan - yes, whether or not `char` is default-`signed` is implementation specific, but I don't think you'll find a common implementation that doesn't default to it being signed. From: C99 standard (R2005) S 5.2.4.2.1, note 2. – wkl Mar 20 '14 at 19:55
  • `char` defaults to unsigned for ARM9 implementations I have used. I also read that the Android NDK has `char` unsigned too. In GCC you can control it with `-funsigned-char` or `-fsigned-char`. You should write code that does not rely on this setting to work. – M.M Mar 20 '14 at 22:04

5 Answers

1

No, unsigned char is 0 to 255.

It can be useful in representing binary data (a single byte), although, like any primitive data type, the possibilities are endless.

Gigi
  • 28,163
  • 29
  • 106
  • 188
1

First of all, the range you are describing is that of signed char; unsigned char ranges from 0 to 255.

To answer your question about negative-valued characters: you are right that character encodings are defined using non-negative values.

Viewed differently, just think of signed char and unsigned char as small integer types.

Shamim Hafiz - MSFT
  • 21,454
  • 43
  • 116
  • 176
  • So, signed char is not good for representing characters, because it can only represent characters with numbers 0-127 in encoding table, right? – user3379285 Mar 20 '14 at 19:34
  • @user3379285: no, it still can represent 256 characters. Why do you think "-1" cannot be a valid character encoding? (Hint: it depends on where you start to count.) – Jongware Mar 20 '14 at 20:04
0

Unsigned char is used to represent bytes. If you need just one byte of memory in a variable, you use unsigned char and assign an integer value to it.

For example, uint8_t is commonly used to represent bytes, but it is typically nothing more than a typedef for unsigned char.

eventHandler
  • 1,088
  • 12
  • 20
0

A signed char can represent numbers from -128 to +127,
and an unsigned char from 0 to 255.

Although unsigned is more convenient in many use cases,
everything binary-related can be done with signed too:
0=0, 1=1 ... 127=127, -128=128, -127=129, -126=130 ... -1=255
Such conversions happen automatically (or, better said,
it's just a different interpretation of the same bits).

("Binary-related" means that a mathematical -2 * 2 would be possible with unsigned too,
but would make even less sense.)

deviantfan
  • 11,268
  • 3
  • 32
  • 49
0

Regarding *"So, where can unsigned char be useful?"*

Here, perhaps? (A very simple example that tests for an ASCII digit.)

#include <stdbool.h>

bool isDigit(unsigned char c)
{
    if((c >= '0') && (c <= '9')) return true;
    return false;
}

By virtue of the argument type, unsigned char guarantees the input will be a single byte value (there are 128 encoded ASCII possibilities; with extended ASCII, there are 256). So, in this function, all that remains is to test the input value against the specific criteria (in this case, is it a digit?). There is no requirement for the function to test for negative numbers. A regular char (i.e. signed on most implementations) cannot represent the entire extended ASCII range, since values 128-255 would appear as negative numbers. The sizeof an unsigned char is also significant in that it is only 1 byte, as opposed to 4 bytes (typically, but not always) for, say, an int.

ryyker
  • 22,849
  • 3
  • 43
  • 87
  • 1
    [ASCII](http://en.wikipedia.org/wiki/ASCII) encodes 128 specified characters. `signed char` supports at least 0 to 127. C specifies the minimum range of `int` as -32767 to 32767. Don't know of any C spec that says `int` is always 4 bytes. The posted `isDigit()` test works regardless of character coding - it does not need to be ASCII, just C compliant. – chux - Reinstate Monica Mar 20 '14 at 20:00
  • @chux - Thanks, you are right on both counts. see edits if interested. – ryyker Mar 20 '14 at 20:05
  • Certainly ASCII is the pervasive character encoding set for C code. Look forward to the day C specifies that. Until then, a test for a "C digit" should be `(c >= '0') && (c <= '9')` and a test for an "ASCII digit" should be `(c >= 48) && (c <= 57)`. These of course are the same if the code is written in ASCII, but may differ if in some esoteric coding. – chux - Reinstate Monica Mar 20 '14 at 20:11
  • Your code example works just as well for plain or signed `char`. But it would fail if `'0'` was a negative value - I think this is theoretically possible (although nobody ever actually did it). – M.M Mar 20 '14 at 22:06
  • @MattMcNabb - The logic would indeed work for both signed and unsigned char, however, that was not the point I was making. By using unsigned as the argument type, I negate the need to vet the input against a negative value. That is one of the points I made in my post code explanation. – ryyker Mar 21 '14 at 00:56
  • I don't know what you mean by "vet the input against a negative value". If your function used plain `char` then it is guaranteed to work on all systems, but as written, it would fail on a system where `'0'` is negative. (Actually `'0'` is negative in EBCDIC if `char` is signed, so such systems might in fact exist). – M.M Mar 21 '14 at 03:32