2

In C, there's the `sizeof` operator to determine the byte-size of a given data type or object.

Likewise, there's `CHAR_BIT` from `<limits.h>`, which is defined to reflect the number of bits in a byte.
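
For instance, a minimal sketch showing both queries (assuming a hosted environment where printf is available):

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* sizeof reports the size in bytes, CHAR_BIT the bits per byte */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    printf("CHAR_BIT    = %d bits per byte\n", CHAR_BIT);
    return 0;
}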

Now this might be slightly hypothetical, but how do I tell the number of different values that the smallest unit of information can store, i.e., whether the host environment provides bits, trits, nats or whatever?

Answer

Apparently, the C standard assumes that the host environment operates on bits. Such a bit is required to be able to store at least two values.
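
One way to see that assumption at work: C specifies integer representation and the shift operators in terms of base 2, so shifting left by one position and doubling coincide. A trivial sketch:

#include <assert.h>

int main(void) {
    unsigned x = 5;              /* binary 101 */
    assert((x << 1) == 2 * x);   /* one digit position == one factor of 2 */
    return 0;
}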

Notable proposals that arose from this question:

  • Name of the smallest unit of information of a ternary machine: a TIT
  • Name of the smallest unit of information of a quaternary machine: a QUIT

Philip
  • `CHAR_BIT` should reflect the number of bits in a `char`, not a `byte`. A byte is always 8-bit. – Lenik Mar 27 '11 at 10:02
  • A byte is more accurately defined as the amount of storage used to hold a character, which is usually eight bits but does not have to be. – templatetypedef Mar 27 '11 at 10:07
  • In C, a byte "is composed of a contiguous sequence of bits, the number of which is implementation-defined." templatetypedef is right once again: a byte is also an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment." A character is defined to be one byte. – Philip Mar 27 '11 at 10:14

3 Answers

9

I think by definition a bit is a binary digit, which must be zero or one, so the answer is always two (either the bit is 0 or 1).

EDIT: in response to your new question, I believe that there is no standard way to do this. The C ISO spec (N1124, §3.5/1) defines a bit as

A unit of data storage in the execution environment large enough to hold an object that may have one of two values.

Since the C spec tries to maximize the portability of the language, it doesn't specify what a bit is beyond this point. This means that from within C, you cannot tell any more than this about the size of a bit.
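
In other words, the most a portable C program can report is the geometry of the byte; the radix of the bit itself is fixed at two by the definition above. Something like this (a sketch, not a standard facility) is as close as you can get:

#include <limits.h>
#include <stdio.h>

int main(void) {
    printf("values per bit:  2\n");  /* fixed by the standard's definition */
    /* unsigned char has no padding bits, so a byte stores UCHAR_MAX + 1 values */
    printf("values per byte: %lu\n", (unsigned long)UCHAR_MAX + 1);
    return 0;
}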

templatetypedef
3

The term BIT is a contraction of B-inary dig-IT, so by definition it has exactly two possible states. There is no ambiguity or implementation-defined behaviour, just mathematical certainty.
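
One checkable consequence: since each bit has exactly two states and unsigned char has no padding bits, an unsigned char of CHAR_BIT bits holds exactly 2^CHAR_BIT values. A small sketch (assuming CHAR_BIT is smaller than the width of unsigned long, which holds on any common platform):

#include <assert.h>
#include <limits.h>

int main(void) {
    assert(UCHAR_MAX == (1UL << CHAR_BIT) - 1);  /* 2^CHAR_BIT - 1 */
    return 0;
}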

Clifford
  • duplicate of templatetypedef's answer, so here's a duplicate of my comment: You're right, but that's not what I wanted to know. I rephrased the question. (I used to ask for the size of a bit) – Philip Mar 27 '11 at 09:57
  • @Philip: It is not a duplicate. @Template said *"I think"*, whereas I am *certain*. But more importantly I explained the derivation of the term BIT, knowledge of which implies the answer (to your original question). In the end, C is not defined for hypothetical computers. If you were working on, say, a *ternary* machine, then presumably whatever language were designed to run on it would have three-state digit values. I rather like the possibility that such a digit would be called a TIT, or even a *quaternary* system having a QUIT. – Clifford Mar 27 '11 at 10:35
  • +1 for QUIT. May also prove to be a common misspelling for qubit. – Philip Mar 27 '11 at 10:41
  • Took me a few minutes to realize that TIT is not a spelling error :( – Philip Mar 27 '11 at 10:47
  • @Philip I don't care for the idea of Trinary digIT. My company's web filter is too aggressive to ever search for relevant resources. – corsiKa Mar 27 '11 at 22:17
1

You can declare a struct to hold a single bit:

typedef struct bit_t {       /* tag renamed: leading-underscore identifiers like _bit_t are reserved */
    unsigned int bit : 1;    /* unsigned: plain 'int' bit-fields have implementation-defined signedness */
} bit_t;

Well, `sizeof(bit_t)` may come out as 1 or 4 because of alignment; the exact size is implementation-defined.
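
A quick way to check what a given implementation actually does (a sketch; the printed value varies by compiler and target):

#include <stdio.h>

typedef struct bit_t {
    unsigned int bit : 1;
} bit_t;

int main(void) {
    /* Commonly sizeof(int), e.g. 4, but an implementation may use 1. */
    printf("sizeof(bit_t) = %zu\n", sizeof(bit_t));
    return 0;
}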

Generally, a byte is the smallest addressable integer type, and you should always use bytes for that purpose to keep your program portable. If you don't care about portability at all, e.g., when writing 8051 or PIC programs, then you can just use the compiler's bit type; it has nothing to do with bytes.

To declare a byte, you can safely declare it as `unsigned char`; currently, I don't know of any C compiler whose char isn't 8-bit. (Any exceptions? I'd like to hear about them.)
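
If your code does rely on 8-bit chars, you can make that assumption explicit with a compile-time guard, for example:

#include <limits.h>

#if CHAR_BIT != 8
#error "this code assumes an 8-bit char"
#endif

typedef unsigned char byte;  /* the smallest addressable unit */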

Lenik
  • My copy of the standard says that CHAR_BIT shall be *at least* 8. However, I don't know of any contemporary machine where a byte is larger than 8 bits. But my guess is that (a) there are more than 0 such machines and (b) we'll never hear from those if I don't keep asking silly questions on SO. – Philip Mar 27 '11 at 10:39
  • @Philip: The place to look for modern "machines" with `CHAR_BIT > 8` is DSPs. In particular, a board with both a DSP and a main processor might offer a different C compiler for each (or same compiler, different output architecture and different CHAR_BIT). If you find any interesting ones, add them here: http://stackoverflow.com/questions/2098149/what-platforms-have-something-other-than-8-bit-char – Steve Jessop Mar 27 '11 at 12:14