
I have a question about the ranges of ints and floats:

If they both have the same size of 4 bytes, why do they have different ranges?

atul

6 Answers


They are totally different - typically int is just a straightforward 2's complement signed integer, while float is a single-precision floating point representation with 23 bits of mantissa, 8 bits of exponent and 1 sign bit (see http://en.wikipedia.org/wiki/IEEE_754-2008).

Paul R
    +1 Of course, you know better than me those are just the usual representations. – cnicutar Aug 16 '11 at 14:14
  • Indeed - since it was evidently a noob question I thought it best to keep things simple. – Paul R Aug 16 '11 at 14:17
  • @cnicutar Are you implying that a `float` can be represented arbitrarily? – Andreas Brinck Aug 16 '11 at 14:19
  • @Andreas Brinck I am implying that the standard doesn't say those **have** to be the representations. – cnicutar Aug 16 '11 at 14:40
  • @Andreas, cnicutar: To clarify, the C standard doesn't require the implementation to use IEEE-754. – Oliver Charlesworth Aug 16 '11 at 15:19
  • @Andreas Brinck - there are plenty of floating point formats apart from IEEE 754; back in the late Cretaceous I worked on a VAX, which used the VAX F format. The C language standard doesn't care about the specific float format used by an implementation, it just mandates the minimum range and precision. – John Bode Aug 16 '11 at 17:23

They have different ranges of values because their contents are interpreted differently; in other words, they have different representations.

Floats and doubles are typically represented as something like

+-+-------+------------------------+
| |       |                        |
+-+-------+------------------------+
 ^    ^                ^
 |    |                |
 |    |                +--- significand
 |    +-- exponent
 |
 +---- sign bit

where you have 1 bit to represent the sign s (0 for positive, 1 for negative), some number of bits to represent an exponent e, and the remaining bits for a significand, or fraction f. The value being represented is (-1)^s * f * 2^e.

The range of values that can be represented is determined by the number of bits in the exponent; the more bits in the exponent, the wider the range of possible values.

The precision (informally, the size of the gap between representable values) is determined by the number of bits in the significand. Not all floating-point values can be represented exactly in a given number of bits. The more bits you have in the significand, the smaller the gap between any two representable values.

Each bit in the significand represents 1/2^n, where n is the bit number counting from the left:

 110100...
 ^^ ^
 || |  
 || +------ 1/2^4 = 0.0625
 || 
 |+-------- 1/2^2 = 0.25
 |
 +--------- 1/2^1 = 0.5
                    ------
                    0.8125

Here's a link everyone should have bookmarked: What Every Computer Scientist Should Know About Floating Point Arithmetic.

John Bode

Two types with the same size in bytes can have different ranges for sure.

For example, signed int and unsigned int are both typically 4 bytes, but the signed type reserves one of its 32 bits for the sign, which halves its maximum value. The ranges also differ because the signed type can hold negative values. Floats, on the other hand, lose value range in favor of using some bits for fractional precision.

John Humphreys
  • "Floats [...] lose value range" makes it sound as if the largest number representable by a floating point number was smaller than the largest number representable by an integer number of the same size, which is definitely not true. – sepp2k Aug 16 '11 at 15:29

The standard does not specify sizes in bytes, but it does specify minimum ranges that the various integral types must be able to hold. You can infer the minimum size in bytes from those ranges.

Minimum ranges guaranteed by the standard (from "Integer Types In C and C++"):

signed char: -127 to 127
unsigned char: 0 to 255
"plain" char: -127 to 127 or 0 to 255 (depends on default char signedness)
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615

Actual platform-specific range values are found in `<limits.h>` in C, or `<climits>` in C++ (or even better, the templated `std::numeric_limits` in the `<limits>` header).

The standard only requires that:

sizeof(short int) <= sizeof(int) <= sizeof(long int)

float does not have the same "resolution" as an int despite their seemingly similar size. int is 2's complement, whereas float is made up of a 23-bit mantissa, an 8-bit exponent, and 1 sign bit.

  • This doesn't seem to be answering the actual question? – Paul R Aug 16 '11 at 14:11
  • `float` is not an integral type – pmg Aug 16 '11 at 14:13
  • @pmg: I know. Read the whole answer. –  Aug 16 '11 at 14:13
  • the range of float is 3.4E +/- 38 (7 digits) in Visual C and int has –2,147,483,648 to 2,147,483,647, and both are 4 bytes, so why is there such a large range difference, and how? Actually this is what I want to know – atul Aug 16 '11 at 14:15
  • @Monkey: OK - you've edited it now to add a sentence about floats, but the original answer did not even mention floats. – Paul R Aug 16 '11 at 14:15
  • @Paul : It was in there, it was just a long answer that probably threw everyone off. It was there as a teaching point. –  Aug 16 '11 at 14:16
  • Evidently a number of other people saw the original answer prior to the edit when it didn't have the sentence about floats. No matter though, since you fixed it. You might want to take out most of the irrelevant stuff about different integer sizes and ranges though. – Paul R Aug 16 '11 at 14:20
  • @atul: See my last sentence. They may be the same size but are made up differently. –  Aug 16 '11 at 14:40

You are mixing up the representation of a number, which depends on rules that you (or somebody else) define, and the way the number is stored in the computer (the bytes).

For example, you can use just one bit to store a number, and decide that 0 represents -100 and 1 represents +100. Or that 0 represents 0.5 and 1 represents 1.0. The two things, the data and the meaning of the data, are independent.

Itamar Katz

An integer is just a number... Its range depends on the number of bits (and differs for signed and unsigned integers).

A floating point number is a whole different thing. It's just a convention about representing a floating point number in binary...

It's coded with a sign bit, an exponent field, and a mantissa.

Read the following article:

http://www.eosgarden.com/en/articles/float/

It will make you understand what floating point values are, from a binary perspective. Then you'll understand the range thing...

Macmade
  • To be pedantic, C integral types are also, like floating-point types, just "a convention about representation". Also, you have a circular definition in your second paragraph! – Oliver Charlesworth Aug 16 '11 at 15:20