
Okay, I'm pretty new to those things. From what I've learned:

an int (signed int) is in range [−32,767; 32,767]
and a long long int (signed) is in range [-9,223,372,036,854,775,807; 9,223,372,036,854,775,807].

I also learnt that with the malloc() function I can temporarily allocate memory for a variable.

I was thinking: if I use malloc() to allocate memory for a long long int, but with a larger size, can I store values bigger than 9,223,372,036,854,775,807 and smaller than -9,223,372,036,854,775,807?

    The range of `int` you give as (−32,767; 32,767) is for a 16-bit `int`, the minimum size allowed for `int`. The actual size of `int` is implementation defined. – Weather Vane Jul 07 '17 at 21:51
  • *No and Yes* -- like that? No, you can't allocate a `type` larger than its size and store a number larger than the bit limit -- that is a `type` limitation (e.g. `int` (or `int32_t`) or `long long` (`int64_t`)). However, you can use a Big Number library (or hack one on your own) to store parts of a number that exceeds the `type` size limits and use the combination as required. E.g. [**GNU Multiple Precision Arithmetic Library**](https://en.wikipedia.org/wiki/GNU_Multiple_Precision_Arithmetic_Library) is a common one. – David C. Rankin Jul 07 '17 at 21:51
  • Has nothing to do with `malloc` itself: is limited by the types available. If you want values larger than the biggest integer type available you need to either use `double` (with loss of significance) or a BigInt integer library, which will store large numbers in an array. – Weather Vane Jul 07 '17 at 21:57

2 Answers


an int (signed int) is in range [−32,767; 32,767] and a long long int (signed) is in range [-9,223,372,036,854,775,807; 9,223,372,036,854,775,807].

Partially true. Those are the minimum ranges that the standard requires an int and long long int to be able to represent.

The ranges are formally "implementation defined" i.e. fixed by the implementation (compiler, host system, standard library, etc), although they may vary between implementations. An implementation is permitted, but not required, to support a larger range of values for both types.

I also learnt that with malloc() function I can temporarily allocate memory for a variable.

Also approximately true.

malloc() can be used to dynamically allocate memory, which will remain allocated until a corresponding call of free() or until program termination (assuming a modern operating system that releases memory resources on program termination).

I was thinking: if I use malloc() to allocate memory for a long long int, but with a larger size, can I store values bigger than 9,223,372,036,854,775,807 and smaller than -9,223,372,036,854,775,807?

This is an incorrect conclusion.

The size of all types (except char types, which are defined to have a size of 1) is also implementation-defined.

Yes, it is possible to allocate more memory using malloc(). But something like

int *p = malloc(2*sizeof(int));    /* assume the allocation succeeds */

does not create a bigger int, and does not affect the range of values an int can represent. It dynamically creates an array of TWO integers, which can be accessed as p[0] and p[1] (or, equivalently, *p and *(p+1)) respectively.

Peter
  • So, there's no way I can increase the range of an _int_ natively? I mean, are there no functions in C libraries (that are in all compilers) that could increase this range? – DMaxter Jul 07 '17 at 22:40
  • Natively? No. Some compilers have options that allow setting sizes of some types (which also affects the range of values they can represent) - not all compilers support that, and it often can't be controlled in source code. Generally it is better to pick suitable types (e.g. use `long long` instead of `int`) than to try to coerce the compiler to make `int` bigger. Some systems provide types (and library support for) representing larger values beyond what the C standard requires, but in C that requires doing things like `c = add(a,b)` rather than `c = a+b` – Peter Jul 07 '17 at 23:04
  • Thanks for your explanation – DMaxter Jul 07 '17 at 23:21

Sort of. The problem you're going to run into is that you don't have a type that fits the space. That means that your programming language doesn't have instructions on how to operate on numbers that large. Which means that you'll have to teach the language how to handle such a number, and you'll have to handle the fact that your CPU won't be able to handle the object natively.

To do so would require a lot of work on your part. This is a very simplified description of what a BigNum library does.

Jacobm001
  • HYPOTHETICALLY: So, one solution would be rewriting the type _int_ of the C language in assembly? – DMaxter Jul 07 '17 at 22:43
  • @MoonWalker: No... the sizes of the default types in C are usually closely related to the processor architecture. – Jacobm001 Jul 07 '17 at 23:02
  • @MoonWalker: It's more a matter of teaching the processor to deal with larger numbers. A simple implementation is to use an array of ints and add numbers much like you're used to "carrying" overflows in arithmetic. – Jacobm001 Jul 07 '17 at 23:04
  • Thanks for the explanation – DMaxter Jul 07 '17 at 23:21