
If I allocated lots (GBs) of memory using:

int N = ...;
int * array_ = new int[N];

And use a fraction of the array, what kind of drawbacks does this method have, other than the obvious fact that I am wasting memory? Does it impact CPU performance or make the program unstable?

The reasoning behind this is to avoid using the vector class, due to the big performance hit in intense applications.

SkyRipper
  • vector doesn't give a massive performance hit – deeiip Dec 12 '13 at 09:52
  • Why don't you compare your method with using a vector? – Sash Dec 12 '13 at 09:54
  • Well, it gives because when adding a new item the vector class copies all of the array data to a new array and then deletes the old array. Benchmarks show at least 50% hit, 100% in some cases. – SkyRipper Dec 12 '13 at 09:54
  • Since you can't re-size a raw array anyway your objection would appear to be moot - why not just allocate a large vector so you don't have to re-size it ? – Paul R Dec 12 '13 at 09:55
  • @SkyRipper "it gives because when adding a new item the vector class copies all of the array data to a new array and then deletes the old array" – no, it doesn't always do this. You can avoid reallocation by using vector::reserve() too – deeiip Dec 12 '13 at 09:55
  • @PaulR: I can do this, but then the RAM is allocated, and this brings up the question: what is the drawback of allocating tons of RAM? – SkyRipper Dec 12 '13 at 09:58
  • @SkyRipper That you can't use it elsewhere? –  Dec 12 '13 at 09:58
  • Yes, other than that? – SkyRipper Dec 12 '13 at 09:59
  • In other words, will a computer with 4/32GB filled run faster than one with 26/32GB? – SkyRipper Dec 12 '13 at 10:00

5 Answers


Depending on how the OS and (perhaps) the program respond to low memory situations, wasting memory might make the program slow or unstable. No guarantees.

On a modern OS a large but mostly-unused dynamic array of int should have very little or no impact. Most of the unused part of the array will only ever be assigned virtual memory space; it will never be backed by RAM or swap. 64-bit OSes (which you must be using if you're talking about 32 GB of RAM) won't be short of virtual address space until you use up 2^48 bytes with these things.

The reasoning behind this is to avoid using the vector class, due to the big performance hit in intense applications.

There likely is a big performance hit for creating a vector<int> larger than you need, since it will be initialized whereas this array is uninitialized. If that's what you mean, then your code should cause no more instability than a huge vector, and possibly less because the memory is never touched.

If it weren't for that, then there should be no big performance hit from vector with optimization enabled. So you could, for example, work around it by using a vector of struct UninitializedInt { int value; UninitializedInt() {} }; to ensure no-op default construction. You might like to add an int constructor and/or an operator int() to make users' lives easier (prevent typing .value all over the place), although that leads to ambiguous arithmetic operators, so it's not a slam-dunk.

Or perhaps you could use reserve() to allocate space for the vector and then resize() or push_back() or insert() as needed. If that ends up with you in effect checking the bounds or modifying the size on every access then you would just replace one performance hit with another, of course.

Your code should work though. Provided you don't need to re-implement too much of the interface of vector it might be the lowest-hassle way to eliminate that initialization overhead. Of course you need to make sure you free it properly. For example:

std::unique_ptr<int[]> array_(new int[N]);
Steve Jessop

And use a fraction of the array, what kind of drawbacks does this method have, other than the obvious fact that I am wasting memory? Does it impact CPU performance or make the program unstable?

Your code will be difficult to maintain and exception-unsafe. (Yes, it will. No, really.)

The reasoning behind this is to avoid using the vector class, due to the big performance hit in intense applications.

This is false in any respectable implementation of C++: std::vector has zero overhead. The compiler is better at optimising than you are, and can inline member functions and whatnot.


Regarding your comment:

Well, it gives because when adding a new item the vector class copies all of the array data to a new array and then deletes the old array. Benchmarks show at least 50% hit, 100% in some cases.

See std::vector::reserve.

  • +1 for "in any respectable implementation of C++. std::vector has zero overhead. The compiler is better at optimising than you are" – deeiip Dec 12 '13 at 09:58
  • "`std::vector` has zero overhead" is false in this use case. My answer contains a description of a large overhead. – Steve Jessop Dec 12 '13 at 10:14

The only really reliable way to find this out is by actually doing it and running performance tests. Try doing it all the ways you're thinking of and compare and contrast based on actual data recorded from these real tests. But that aside, an educated guess would be that you're greatly overestimating the performance hit of std::vector.

Paul Evans

This has already been addressed here.

And here is the answer:

It is always better to use std::vector/std::array, at least until you can conclusively prove (through profiling) that the T* a = new T[100]; solution is considerably faster in your specific situation. This is unlikely to happen: vector/array is an extremely thin layer around a plain old array. There is some overhead to bounds checking with vector::at, but you can circumvent that by using operator[].

Community

The reasoning behind this is to avoid using the vector class, due to the big performance hit in intense applications.

As far as I know there is no big performance hit with using vectors. Unless you can be more specific, your entire argument is invalid (i.e. you should use vectors).

There may be performance overheads from bad uses of vector though (the problem then is not the vector but your client code). If you do use vector, consider using reserve. Otherwise, try a std::forward_list or std::list (if you need to insert elements at arbitrary positions).

utnapistim