
I have a dynamic array that I am constantly appending items onto. An append is O(1). When the array becomes full, I grow the array and copy the contents over, which is O(n).

Now, suppose I am growing the array at different rates when it becomes full. These rates are:

i) Some constant C

ii) n/2

iii) n^2

What is the amortized runtime in each of these scenarios?

I believe that I was able to solve case i. The amortized runtime will be the total cost of operations divided by the total number of operations. In this case, the total cost is C * O(1) + 1 * O(n), and the total number of operations is C. Thus, the amortized runtime is O(n).

However, I'm a little lost when analyzing the two remaining cases. It seems to me that the total number of operations will be n/2 + 1 and n^2 + 1, respectively, but I don't quite know how to calculate the total cost of operations.

Can anyone lead me on the right path?

T.Torline

1 Answer


You can use an analysis similar to the one for the first case.

ii) (n/2 * O(1) + O(n)) / (n/2) = O(1) + O(n)/n = O(1)

iii) (n^2 * O(1) + O(n)) / (n^2) = O(1) + O(n)/n^2 = O(1)

This answer gives a more detailed explanation of why a dynamic array that resizes in proportion to n (that is, by n to the power of 1 or greater) has a constant amortized cost per append.
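As a quick empirical check (a sketch I'm adding, not part of the original answer), you can simulate each growth policy and count how many element copies the resizes cost per append:

```python
def total_copies(n_appends, grow):
    """Simulate n_appends appends into a dynamic array whose capacity
    increases by grow(capacity) extra slots whenever it fills up.
    Returns the total number of element copies performed by resizes."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n_appends):
        if size == capacity:
            copies += size                    # copy existing elements over
            capacity += max(1, grow(capacity))
        size += 1
    return copies

n = 100_000
for name, grow in [("constant C=10", lambda c: 10),
                   ("n/2",           lambda c: c // 2),
                   ("n^2",           lambda c: c * c)]:
    print(f"{name:14s} copies/append = {total_copies(n, grow) / n:.2f}")
```

With the constant policy, copies per append keep growing with n (O(n) amortized), while both proportional policies stay bounded (O(1) amortized).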

dominicm00
  • Great! The link you provided definitely did help me gain a better understanding of amortized runtime. It's interesting that increasing an array by n/2, n, and n^2 all result in O(1) amortized runtime. I see many people recommending that it's best to increase by 1.5n, so is this because such a growth rate is typically better for saving space in memory? – T.Torline Feb 12 '19 at 03:18
  • @T.Torline Although any growth rate proportional to `n` to the power of one or greater gives O(1) appends, big O notation doesn't take into account that the _actual_ constant varies. Growing by `n/2` averages 3 operations per append, by `2n` averages 1.5, and by `n^2` averages 1. However, even though the average cost goes down as the growth rate increases, the unnecessary space used goes up. Plus, in a real-world scenario, you don't want the resizing (even if rare) to take forever. So the growth rate used usually depends on the use case and on balancing compute against memory. – dominicm00 Feb 12 '19 at 03:30
  • Ah, understood! It seems to be the case of "real-world vs. theory"! It certainly makes sense that we wouldn't want resizing to take forever (or take up a bunch of space in memory), which is why n^2 is not a commonly used growth rate. – T.Torline Feb 12 '19 at 14:16
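The per-append figures quoted in the comments can also be derived in closed form. For a growth factor g (new capacity = g × old capacity), the resize costs form a geometric series whose sum works out to about g/(g-1) copies per append in the limit. A quick sketch (the factor values here are my reading of the comment's growth rates, not stated explicitly in the thread):

```python
# Growing from size n to size g*n costs n copies at each resize.
# Summing the geometric series of resize costs gives, asymptotically,
# g/(g-1) copies per append.
for desc, g in [("grow by n/2 (factor 1.5)", 1.5),
                ("grow by n   (factor 2)",   2.0),
                ("grow by 2n  (factor 3)",   3.0)]:
    print(f"{desc}: ~{g / (g - 1):.2f} copies per append")
```

This matches the comment's numbers: growing by n/2 averages about 3 copies per append, growing by 2n about 1.5, and larger growth rates push the average toward 1 at the price of more unused capacity.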