1

I just started programming in the C programming language and want to get the relationship between data types, their sizes, and their value ranges straight.

I've seen a few value-range tables of different data types (such as this one).

The thing is, I've learned and read here that there are different parameters which influence the size of each data type, and my assumption is that the value-range should vary as well.

For example, if 1 byte = 16 bits, then I'd think an unsigned char could hold 0-65535 (and a signed char -32768 to 32767).

How accurate are those tables? Are the ranges they show guaranteed (though the types could actually hold smaller or larger values as well)?

Paz
  • 737
  • 7
  • 22
  • I've never come across any platform where `char` is anything else than 8 bits. I know there are and have been, but today you will have a **very** hard time finding them. – Some programmer dude Oct 19 '13 at 14:09
  • I'm not quite sure what you're asking, or what you mean by 'promised but not exact'. Can you perhaps clarify your question? – Baldrick Oct 19 '13 at 14:09
  • 1
    Also, in C it's specified that `sizeof(char)` is **always** `1`, no matter its bit-size. – Some programmer dude Oct 19 '13 at 14:10
  • @Baldrick I edited it. Hopefully it's clearer now. – Paz Oct 19 '13 at 14:13
  • 1
    @JoachimPileborg the size is 1 byte, but that doesn't give us any info on the values char can hold. Right? – Paz Oct 19 '13 at 14:14
  • Generally, your best bet is to look at the 'limits.h' header file for the compiler and platform you're working on. That will give you all the information you need. The C standard generally mandates a minimum in most cases, so checking this file will tell you how far your platform goes beyond the minimum. – Baldrick Oct 19 '13 at 14:18
  • @user2190298 No, a byte is always 8 bits, but a `char` may be more or less. So just because `sizeof(char)` is one, that doesn't mean it is one byte. – Some programmer dude Oct 19 '13 at 14:20
  • I recommend you look at [this reference](http://en.cppreference.com/w/cpp/language/types). While it's for C++, it's basically the same in C. – Some programmer dude Oct 19 '13 at 14:21
  • 2
    @JoachimPileborg: IIRC there are DSP chips with CHAR_BIT=16 (or 12 or 24 or 32, I don't remember) – wildplasser Oct 19 '13 at 14:55
  • 1
    @JoachimPileborg: You're getting the wrong assumption. A byte can have 8 or any number of bits, but a char is always 1 byte in C. Look at the answers [here](http://stackoverflow.com/a/4839654/995714) – phuclv Oct 19 '13 at 15:07
  • 1
    @JoachimPileborg: If a byte is always 8 bits then when char == 9 bit, sizeof(char) will return how many bytes? sizeof cannot return a float value http://stackoverflow.com/a/2215596/995714 – phuclv Oct 19 '13 at 15:14
  • @LưuVĩnhPhúc `sizeof(char)` always returns `1` no matter how many bits are in a `char`. I've been saying it all along. – Some programmer dude Oct 19 '13 at 15:33
  • 1
    @JoachimPileborg: No, that the C standard requires char to be 1 byte, and a byte have CHAR_BIT. This has been discussed many times on stackoverflow – phuclv Oct 19 '13 at 15:34

4 Answers

4

The C language specification doesn't define an exact range for each data type. It only mandates the minimum range that a particular type must be able to hold.

Coming to your question on that table: it's NOT an accurate representation of the ranges defined by C. It may be true on the particular platform the author was running on, but it can't always be (and shouldn't be) taken as an authoritative source.

If you want to know the exact range on your platform, look at (or include) <limits.h>. Or you can use the sizeof operator on the types to get that information from the compiler.

If you want to know the exact number of bits, use CHAR_BIT, defined in <limits.h>.

For example, the number of bits in an int can be found using: CHAR_BIT * sizeof(int).

In the same way, for any given type T, the number of bits can be found with: CHAR_BIT * sizeof(T).

Also read the first 3 or 4 questions of the C-FAQ, which are quite relevant to your question.

P.P
  • 117,907
  • 20
  • 175
  • 238
0

The minimum range shown there must be available. It is the minimum guaranteed by the standard; all conforming implementations will supply at least that.

West
  • 722
  • 7
  • 16
0

C is a "close-to-metal" language, therefore some things (like the size of int) depend on the particular architecture you're compiling for. It is always known before your program leaves your hands, so you can easily take care of it with sizeof and #defines.

Tables found anywhere are only for reference. You can depend only on what's visible to the compiler.

Agent_L
  • 4,960
  • 28
  • 30
  • Not the downvoter, but how is this answering the OP's question? Also just curious about the claim that **C is a "close-to-metal" language**? – Rahul Tripathi Oct 19 '13 at 14:25
  • The size of a given data type has no direct correlation to the processor. My `int` can be 32-bits on 64-bit platforms. My `longs` can be 32-bit on 64-bit platforms. There are minimums to follow and progressive rules, but the compiler is the arbiter of what happens. – Joe Oct 19 '13 at 14:25
  • @Joe It was a simplification. I've removed it since you say it's too big of one. – Agent_L Oct 19 '13 at 14:32
0

Your thought process is more or less correct. These tables are generally reliable because the ranges are easy to calculate once you know each type's size.

Chars will always ever be one byte big (which is 8 bits, not 16), and that one byte will only ever have 2^8=256 possible combinations, so the range of a char will only ever be 0 to 255 or -128 to 127 depending on whether it's signed or not.

For the other integer types, the same logic applies. The only difference here is that the sizes of these types depend on the platform that you compile for (which the table acknowledges, giving different ranges for an int of 2 bytes and an int of 4 bytes).

There are no other parameters that affect the values these types can hold besides their size in bytes, though, and if you are doing something that depends on their size (like guarding against integer overflow) you should be using sizeof() to check for that.

zo7
  • 126
  • 1
  • 2
  • 2
    I disagree, a byte is composed of a contiguous sequence of bits (minimum 8 bits), the number of which is implementation-defined. `char` is always 1 byte, so the range of a `char` is also implementation-defined (in `limits.h`) – David Ranieri Oct 19 '13 at 15:10
  • 1
    Right, the C standard defines "byte" as "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment", which differs from the more general definition of the term and can be more than 8 bits. – Crowman Oct 19 '13 at 15:21