
I am just starting to learn how to code (about 3 days ago) and I have a few problems. First, my "int", "long int", and "unsigned int" all appear to be the same size.

#include <iostream>
#include <limits.h>
using namespace std;

int main() {

    int value = 2147483647;
    cout << value << endl;

    cout << "Max int value: "<< INT_MAX << endl;
    cout << "Min int value: "<< INT_MIN << endl;
    long lvalue = 568534534;
    cout << lvalue << endl;

    short svalue = 22344;
    cout << svalue << endl;

    cout << "Size of int " << sizeof(int) << endl;
    cout << "Size of short int: " << sizeof(short) << endl;
    cout << "Size of long int: " << sizeof(long) << endl;
    cout << "Size of unsigned int: " << sizeof(unsigned int) << endl;
    unsigned int uvalue = 5345554;
    cout << uvalue << endl;
    return 0;
}

And when I run it, I get this:

568534534
22344
Size of int 4
Size of short int: 2
Size of long int: 4
Size of unsigned int: 4
5345554

As you can see, long and unsigned int turn out to be the same size as int.

This isn't my only problem. With "long double", no matter how big the number is, it ALWAYS outputs a negative number. Here's the code:

#include <iostream>
#include <iomanip>

using namespace std;

int main() {

    float fvalue = 123.456789;

    cout << sizeof(float) << endl;
    cout << setprecision(20) << fixed << fvalue << endl;

    double dvalue = 234.5554;
    cout << setprecision(20) << fixed << dvalue << endl;

    long double lvalue = 123.4;
    cout << setprecision(20) << fixed << lvalue << endl;
    return 0;
}

Here's the result:

4
123.45678710937500000000
234.55539999999999000000
-18137553330312606000000000000000000000000000000000000

(Removed most of the zeros btw)

So, as you can see, something is wrong somewhere. I am using Eclipse as my IDE and I use MinGW32, though I am on a 64-bit system. I tried to install MinGW-w64 for my system but couldn't figure out how...

  • C++ leaves the size of integer types pretty much up to the implementors; [give this a read](http://en.cppreference.com/w/cpp/language/types#Integer_types). Note the C++ standard only specifies that the integer types be at least X bits. All of your integer types could be 64 bits. – user4581301 Jan 22 '17 at 07:43
  • You should really ask one question at a time. – juanchopanza Jan 22 '17 at 07:44
  • Note that in C++, you can use `std::numeric_limits` instead of those C macros like `INT_MAX`. – Christian Hackl Jan 22 '17 at 10:15
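
As that last comment suggests, the limits printed with the C macros above can also be queried with std::numeric_limits from <limits>; a minimal sketch:

#include <iostream>
#include <limits>
using namespace std;

int main() {
    // Same information as INT_MAX / INT_MIN, but via the C++ trait,
    // and it works uniformly for every arithmetic type.
    cout << "Max int value: " << numeric_limits<int>::max() << endl;
    cout << "Min int value: " << numeric_limits<int>::min() << endl;
    cout << "Max long value: " << numeric_limits<long>::max() << endl;
    cout << "Max unsigned int value: " << numeric_limits<unsigned int>::max() << endl;
    return 0;
}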

1 Answer


For the basic integral types (like int, long int, unsigned, unsigned short, unsigned long), the C++ standards articulate requirements on what values each type must be able to represent, not on its size. Note: this doesn't apply to the "fixed width" types introduced in C99 and in C++ from 2011.

Practically, an int is required to be able to support (at least) all values in the range -32767 to 32767. The requirements on a short int (aka short) are the same. For a long int, the requirement is to support (at least) the range -2147483647 to 2147483647.

Importantly, though, these are minimum requirements - all such types are permitted to support a larger range of values. Their sizes, however, are implementation-defined.

Practically, assuming binary representation, the requirements for int and short translate to needing 16 bits or more, including the sign bit, which equates to two 8-bit bytes. For long, the requirement translates to 32 bits or more, or four 8-bit bytes.
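
You can see those widths directly on your own implementation: sizeof gives the size in bytes, CHAR_BIT (from <climits>) gives the bits per byte, and std::numeric_limits<T>::digits reports the number of value bits (excluding the sign bit for signed types). A minimal sketch:

#include <iostream>
#include <climits>
#include <limits>
using namespace std;

int main() {
    // Width in bits = bytes * bits-per-byte (CHAR_BIT is 8 on mainstream platforms).
    cout << "short: " << sizeof(short) * CHAR_BIT << " bits total, "
         << numeric_limits<short>::digits << " value bits" << endl;
    cout << "int:   " << sizeof(int) * CHAR_BIT << " bits total, "
         << numeric_limits<int>::digits << " value bits" << endl;
    cout << "long:  " << sizeof(long) * CHAR_BIT << " bits total, "
         << numeric_limits<long>::digits << " value bits" << endl;
    return 0;
}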

Note that most modern compilers use a binary representation (since modern target architectures are digital); in fact, for integral types the C++ standard requires a pure binary representation, though it leaves the choice of signed representation open. Beyond that, the standard simply states required ranges (as is the case here, for integral types) or limits on behaviour.

However, each integral type is permitted to represent a larger range of values (i.e. to exceed the minimum requirements the standard states), so there is nothing stopping an implementation (compiler) from having a short that is 16 bits, an int that is 32 bits, and a long that is also 32 bits. Naturally, code that relies on such sizes is not guaranteed to work with other implementations - for example, one with a 16-bit short, a 16-bit int, and a 32-bit long.
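
If code genuinely needs a particular width, the fixed-width aliases from <cstdint> (the "fixed width" types mentioned in the note above) are the portable way to say so. A minimal sketch - note the exact-width aliases are optional, though present on all mainstream platforms:

#include <cstdint>
#include <iostream>
using namespace std;

int main() {
    // These aliases have exactly the stated width wherever they exist.
    int16_t a = 32767;
    int32_t b = 2147483647;
    int64_t c = 9223372036854775807LL;

    cout << sizeof(a) << " " << sizeof(b) << " " << sizeof(c) << endl; // 2 4 8
    return 0;
}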

The same sort of discussion applies to the unsigned types, except that the ranges are different. Practically, an unsigned short may be represented using 16 bits or more, an unsigned int 16 bits or more as well, and an unsigned long 32 bits or more.
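
For completeness, the corresponding unsigned limits can be queried the same way; a small sketch:

#include <iostream>
#include <limits>
using namespace std;

int main() {
    // min() is 0 for every unsigned type; max() must be at least 65535 for
    // unsigned short and unsigned int, and at least 4294967295 for unsigned long.
    cout << numeric_limits<unsigned short>::max() << endl;
    cout << numeric_limits<unsigned int>::max() << endl;
    cout << numeric_limits<unsigned long>::max() << endl;
    return 0;
}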

As to your question about long double: some freely available implementations have had bugs in their support of long double that cause output like you describe - from memory, for example, some versions of the MinGW ports of the GNU compilers, which is likely what is shipped with your Eclipse setup. Check whether there is an update for your compiler.
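
One commonly reported cause on MinGW is a mismatch between the compiler's 80-bit long double and the Microsoft C runtime it links against, which formats long double as a 64-bit double. Whatever the exact cause in your case, a pragmatic workaround - at the cost of the extra precision - is to convert to double before printing. A sketch, assuming the problem is only in how long double is formatted:

#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    long double lvalue = 123.4L;

    // Print a double instead of a long double so the (possibly broken)
    // long double formatting path is never used.
    cout << setprecision(20) << fixed << static_cast<double>(lvalue) << endl;
    return 0;
}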

Side note: I've seen claims that relationships such as sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long) are required by recent C and C++ standards - as distinct from being a side effect of the ranges those types are required to be able to represent. I've also seen claims that recent standards set requirements on the number of bits used to represent integral types. I haven't substantiated those claims (no reason to, since I never write code that relies on such relationships). Should I learn that these claims are actually true, I'll update the discussion above.
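
If a piece of code does depend on such assumptions, it is better to state them than to rely on them silently. A minimal C++11 sketch using static_assert - the first three checks follow from the guaranteed ranges, the last merely documents an assumption about the implementation at hand:

#include <climits>

// Guaranteed: storage must be at least as wide as the required value range.
static_assert(sizeof(short) * CHAR_BIT >= 16, "short must be at least 16 bits");
static_assert(sizeof(int)   * CHAR_BIT >= 16, "int must be at least 16 bits");
static_assert(sizeof(long)  * CHAR_BIT >= 32, "long must be at least 32 bits");

// Not claimed above to be standard-mandated; recorded here as an assumption.
static_assert(sizeof(short) <= sizeof(int) && sizeof(int) <= sizeof(long),
              "this code assumes the usual size ordering");

int main() { return 0; }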

Peter