Currently, I am doing a project for my university where I implement well-known data structures (array, linked list, BST, etc.) and have to measure the time that some operations on them take. The first one for me was the array. I measured the time for inserting an element into the middle of the array (moving the later values along, of course, so n becomes n + 1). That of course gave me O(n), where n is the number of elements.
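For context, the middle insertion I timed works roughly like this (an illustrative sketch rather than the exact code I measured; the name middle_insert is just for this example, and table and n are the class members used below):

void array::middle_insert(int value)
{
    int *temp = new int[n + 1];
    int mid = n / 2;                                               // insertion point
    memcpy(temp, table, sizeof(int) * mid);                        // elements before the middle
    temp[mid] = value;
    memcpy(temp + mid + 1, table + mid, sizeof(int) * (n - mid));  // shift the rest by one slot
    delete[] table;
    table = temp;
    n = n + 1;
}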
Now I have to check adding an element to the beginning of the array and appending an element to it (adding to the end). I've got two simple algorithms (I cannot use the STL or boost::):
Adding to the beginning:
void array::beginning_append(int value)
{
    int *temp = new int[n + 1];
    memcpy(temp + 1, table, sizeof(int) * n);
    n = n + 1;
    temp[0] = value;
    delete[] table;
    table = new int[n];
    memcpy(table, temp, sizeof(int) * n);
    delete[] temp;
}
Appending to the end:
void array::append(int value)
{
    int *temp = new int[n + 1];
    memcpy(temp, table, sizeof(int) * n);
    temp[n] = value;
    n = n + 1;
    delete[] table;
    table = new int[n];
    memcpy(table, temp, sizeof(int) * n);
    delete[] temp;
}
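Looking at them again, I suspect the second allocation and the copy back into table are unnecessary; as far as I can tell, append could be written with a single allocation and a single copy (and beginning_append analogously), although it would still copy all n existing elements, so it would still be O(n) per call:

void array::append(int value)    // hypothetical simplification, not the code I measured
{
    int *temp = new int[n + 1];
    memcpy(temp, table, sizeof(int) * n);    // still copies every existing element -> O(n)
    temp[n] = value;
    delete[] table;                          // free the old buffer...
    table = temp;                            // ...and keep the new one directly
    n = n + 1;
}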
Both beginning_append and append are methods of my array class, where table and n are members of the class. They do not differ much, so I thought their times would be the same, and they are: I checked with QueryPerformanceCounter for large numbers of elements, like 500000, and both came out as O(n). But my lecturer said that adding to the beginning has O(n) computational complexity, while adding or removing an element at the end has O(1) complexity, i.e. constant time regardless of the number of elements. So I would like to ask you guys whether my algorithms are bad, in the sense that they do unnecessary work and that is why their running time depends on the number of elements, or whether my lecturer was simply wrong.
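If it helps, my guess at what the lecturer means is an array that keeps some spare capacity, so that most appends just write into already-allocated space and only occasionally reallocate. A minimal sketch of that idea, assuming a hypothetical extra member capacity and a doubling growth policy (neither of which my class currently has):

void array::append(int value)
{
    if (n == capacity)                                  // reallocate only when the buffer is full
    {
        capacity = (capacity == 0) ? 1 : capacity * 2;  // doubling keeps reallocations rare
        int *temp = new int[capacity];
        memcpy(temp, table, sizeof(int) * n);           // O(n), but it happens rarely
        delete[] table;
        table = temp;
    }
    table[n] = value;                                   // common case: write one element
    n = n + 1;                                          // amortized O(1) per append
}

If that is the intended scheme, each append would be O(1) on average (amortized), whereas my version reallocates and copies on every single call, which would explain why I measured O(n). Is that what is normally meant, or is there something else I am missing?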