This question, Time complexity of memory allocation, says that memory allocation takes non-deterministic time. Given that, how is it valid to say, for example, that "adding an item to a dynamic array takes amortized O(1) time"? If adding the item causes the dynamic array to resize, a new backing buffer must be allocated. For all we know, allocating that buffer could take O(1) time, or O(n^2) time, or time described by some expression that dwarfs n entirely, depending on the algorithm the memory allocator uses and its current state.

Do academic papers simply ignore the time complexity of memory allocation / assume that it is O(1) in calculating time complexity?

James Ko

1 Answer

For asymptotic (big O) analysis of algorithms that use dynamic memory, it's generally safe to assume that the amortized cost of an allocation is proportional to the size of the allocation.

That assumption is conservative, in that there are real memory allocation schemes, like simple copying garbage collection, that can guarantee this bound. At the same time, assuming a faster allocation scheme would usually gain you nothing, since the cost of actually using the memory you allocate is at least proportional to the size of the allocation as well.

Real allocators often impose a linear cost anyway, to zero out memory or map in pages, so even when it might be convenient to assume a faster allocator, that assumption should be avoided as unrealistic.
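To see why this assumption still yields amortized O(1) appends, here is a small sketch (the function name and the exact cost accounting are mine, not from the answer): it simulates n appends to a doubling dynamic array, charging each resize a cost proportional to the newly allocated size (allocation plus copy) and each append one unit, then reports the total work per append.

```python
# Assumption: allocating a buffer costs time proportional to its size,
# as the answer suggests. We charge `new_capacity` units per resize
# (allocation + copying the old elements) and 1 unit per element write.

def total_append_cost(n):
    capacity = 1
    size = 0
    cost = 0
    for _ in range(n):
        if size == capacity:
            capacity *= 2
            cost += capacity  # allocate + copy: linear in the new size
        size += 1
        cost += 1  # write the new element
    return cost

for n in (10, 1000, 100_000):
    print(n, total_append_cost(n), total_append_cost(n) / n)
```

The resize costs form a geometric series (2 + 4 + 8 + ... up to roughly 2n), so the total work over n appends stays below about 5n, and the per-append ratio printed above is bounded by a constant regardless of n. That is exactly the amortized O(1) claim, even with linear-cost allocation.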

Matt Timmermans