
A previous post leaves the answer to this question unresolved. My intuition, however, is that larger allocations should take longer, because the underlying allocation strategy has to find larger contiguous blocks. The following C++ program, which produces this output:

Total time for size 1048576: 5.50781
Total time for size 2097152: 6.08594
Total time for size 4194304: 6.07031
Total time for size 1048576: 5.48438
Total time for size 2097152: 6.05469
Total time for size 4194304: 6.07031

confirms that larger allocations take longer. Can anyone confirm the outcome of this experiment? Is the intuition I stated above correct?

#include <ctime>
#include <iostream>

int main()
{
    // Initialize parameters
    int size = 1024*1024;
    int sizes[3] = {size, 2*size, 4*size};
    int nIter = 2000;

    // Repeat the entire experiment twice
    for (int count=0; count<2; count++)
    {
        // Loop though all allocation sizes
        for (size_t i=0; i<sizeof(sizes)/sizeof(sizes[0]); i++)
        {
            // Start timer
            const clock_t begin_time = clock();

            // Total of nIter iterations
            for (int j=0; j<nIter; j++)
            {
                // Allocate array
                char *array = new char[sizes[i]];

                // Touch only the first `size` bytes (1 MiB), so the
                // write cost is identical for every allocation size
                for (int k=0; k<size; k++)
                    array[k] = 1;

                // Free array
                delete [] array;
            }

            // Report total time
            const float total_time = float(clock()-begin_time)/CLOCKS_PER_SEC;
            std::cout << "Total time for size " << sizes[i] << ": " << total_time << std::endl;
        }
    }
}
  • "Time complexity" is a property of an algorithm. You cannot measure it by doing a bunch of tests; you have to study the algorithm itself. And that algorithm will be implementation-dependent. So this question, as posed, cannot effectively be answered. – Nicol Bolas Oct 27 '22 at 16:13
  • This code measures the time it takes to set the values of all the elements of the array to 1. Clearly, the larger the array, the longer it takes. It does **not** measure the time needed for the allocation. – Pete Becker Oct 27 '22 at 16:17
  • You are filling the memory, which obviously takes 4 times as long for 4 times as much memory. You measure 4 times as long for 4 times the allocation size. How does this say anything about the allocator rather than about memset performance? – Homer512 Oct 27 '22 at 16:18
  • Also, `char *array = new char[sizes[i]];` should be freed with `delete [] array;`, not `delete array;`. For types that have trivial destructors you might get away with using the wrong form of `delete`, but don't do that. – Pete Becker Oct 27 '22 at 16:18
  • Note that once you've acquired and freed memory, the implementation is allowed to hold on to that memory and reissue it for later allocations instead of asking the OS for more pages. Additionally, the time it takes to allocate memory can depend on the current state, such as fragmentation. Repeatedly allocating and freeing memory blocks of various sizes won't give you a good measurement of how long an allocation takes, since the past pattern of memory operations can impact the performance of present and future memory operations. – François Andrieux Oct 27 '22 at 16:19
  • BTW: If you want to see how the allocator works, here is the first part of a series on glibc's allocator: https://azeria-labs.com/heap-exploitation-part-1-understanding-the-glibc-heap-implementation/ From the description you can see that the behavior shouldn't be linear. There are discrete steps at which certain optimizations stop applying. – Homer512 Oct 27 '22 at 16:22
  • @PeteBecker Great catch. Experiment corrected to remove looping through the entire array. It still shows increase when going from 1MB allocations to 4MB. – Fijoy Vadakkumpadan Oct 27 '22 at 16:33
  • @FrançoisAndrieux About the fragmentation, that's why I repeated the experiment twice. The smaller memory allocation takes less time in the second round of experiment also (even after many memory allocations and deallocations) – Fijoy Vadakkumpadan Oct 27 '22 at 16:35
  • With a 64 bit address space and virtual memory it can take a bit of work to successfully fragment the memory. – user4581301 Oct 27 '22 at 16:53

0 Answers