
I am trying to figure out the rationale behind char being a different type from signed char and unsigned char.
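To make the premise concrete, here is a minimal C11 sketch (my own illustration, not taken from the linked posts) showing that the three character types really are distinct, even though plain char must behave like one of the other two:

```c
/* Minimal C11 sketch: char, signed char and unsigned char are three
 * distinct types, so each may appear as its own generic association. */
#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),          \
    char:          "char",                  \
    signed char:   "signed char",           \
    unsigned char: "unsigned char",         \
    default:       "something else")

int main(void)
{
    char c = 0;
    signed char sc = 0;
    unsigned char uc = 0;

    puts(TYPE_NAME(c));   /* prints "char"          */
    puts(TYPE_NAME(sc));  /* prints "signed char"   */
    puts(TYPE_NAME(uc));  /* prints "unsigned char" */
    return 0;
}
```

If the three were not distinct types, the duplicate associations in the `_Generic` selection would not even compile.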

I found the following posts, which give some rationale but don't answer my question:

Difference between signed / unsigned char

Part of the reason there are two dialects of "C" (those where 'char' is signed, and those where it is unsigned) is that there are some implementations where 'char' must be unsigned, and others where it must be signed.

  • If, in the target platform's character set, any of the characters required by standard C would map to a code higher than the maximum signed char, then 'char' must be unsigned.

  • If 'char' and 'short' are the same size, then 'char' must be signed.
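For reference, which dialect a given implementation picked can be checked portably. A small sketch of mine, assuming only <limits.h>, along the lines of the quoted bullets:

```c
/* Report which "dialect" this implementation uses: CHAR_MIN is 0 where
 * plain char is unsigned, and negative where it is signed. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
#if CHAR_MIN < 0
    puts("plain char is signed on this implementation");
#else
    puts("plain char is unsigned on this implementation");
#endif
    printf("CHAR_MIN = %d, CHAR_MAX = %d\n", (int)CHAR_MIN, (int)CHAR_MAX);
    return 0;
}
```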

Is char signed or unsigned by default?

early in the life of C the "standard" was flip-flopped at least twice, and some popular early compilers ended up one way and others the other

I understand that unsigned char may be more efficient, but are there cases where signed char is more efficient?
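For context, the kind of code where I would expect the choice to show up is a byte-processing loop like the hypothetical sketch below (my own example); whether the zero-extending or the sign-extending byte load is cheaper depends entirely on the instruction set, which is what I am asking about:

```c
/* Hypothetical example: the same loop over signed and unsigned bytes.
 * On a machine without a zero-extending byte load, the unsigned version
 * may need an extra mask; on one without a sign-extending byte load,
 * the signed version may need extra shifts instead. */
#include <stddef.h>

long sum_signed(const signed char *p, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += p[i];   /* typically a sign-extending byte load */
    return s;
}

long sum_unsigned(const unsigned char *p, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += p[i];   /* typically a zero-extending byte load */
    return s;
}
```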

  • Nothing there has anything to do with efficiency. The only thing which plays into efficiency is signed overflow being undefined. – Deduplicator Mar 12 '19 at 18:24
  • If your platform doesn't support unsigned byte instructions (for example) then unsigned char would be less efficient. – stark Mar 12 '19 at 18:27
  • I feel like this question should be moved to [Software Engineering](https://softwareengineering.stackexchange.com/) See [Before you choose a site…](https://meta.stackexchange.com/a/129632/235574) – Scott Solmer Mar 12 '19 at 18:31
  • @stark I was kind of wondering whether such a platform existed at the time the decision was made. I suppose it would need to be some platform that uses either ones' complement or sign-magnitude or a platform that implements overflow as saturation. – martinkunev Mar 13 '19 at 09:42
  • https://www.ragestorm.net/blogs/?p=34 – stark Mar 13 '19 at 13:33
