
The problem statement given is: "You are working on an embedded device (an ATM) that only has 4 KB of free memory, and you wish to sort the withdrawal history of 2,000,000 transactions by the amount of money withdrawn (discarding the original order of transactions)."

For this problem statement, I think we should use merge sort. Is there any issue with this sorting algorithm?

Radha Gogia

5 Answers


You are definitely looking for an algorithm whose space complexity is much less than O(n), since 2 million transactions are likely to take far more than 4 KB...

Space complexity gives the amount of memory needed to perform the sort, with respect to the input size, in the worst case. With that little free memory, you cannot afford an algorithm that takes much extra space.

Merge sort is O(n) in space, so you had better look for something else.

Something like O(log n) would be great, since the base-2 logarithm of 2 million, for instance, is about 21.

Have a look at this table, which lists

  • Quick sort
  • Bubble sort
  • Heap sort
  • Insertion sort
  • and Shell sort

as being at most space O(log n).
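To make the footprint concrete, here is a minimal in-place insertion sort sketch in C: apart from the array itself, it uses only two indices and one temporary variable, i.e. O(1) auxiliary space. The `long` element type (e.g. amounts in cents) is an assumption for illustration.

```c
#include <stddef.h>

/* In-place insertion sort: O(1) auxiliary space, O(n^2) worst-case
 * time. The long element type is an assumed stand-in for an amount. */
void insertion_sort(long *amounts, size_t n) {
    for (size_t i = 1; i < n; i++) {
        long key = amounts[i];
        size_t j = i;
        /* shift elements larger than key one slot to the right */
        while (j > 0 && amounts[j - 1] > key) {
            amounts[j] = amounts[j - 1];
            j--;
        }
        amounts[j] = key;
    }
}
```

Note that its O(n^2) running time makes it a poor fit for 2,000,000 entries; the sketch only illustrates the memory footprint, not a recommended choice here.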

Déjà vu

If you want to apply any kind of recursive algorithm, you have to consider the amount of (stack) memory you need for the parameters and return addresses of each method call in the recursion. With 2,000,000 entries, any algorithm that uses a divide-and-conquer approach will reach a recursion depth of about 21. That means even a clever implementation has to get by with roughly 190 bytes (about 4000/21) of memory overhead per recursion step.
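The depth-21 figure can also be kept under explicit control. Here is a hedged sketch in C of quicksort with an explicit, bounded stack: by always pushing the larger partition and looping on the smaller one, the stack never needs more than about log2(n) entries (roughly 21 for 2,000,000), each holding just two indices. The `long` element type and the fixed 64-entry stack size are assumptions for illustration.

```c
#include <assert.h>
#include <stddef.h>

#define STACK_MAX 64  /* comfortably above log2(2,000,000) ~= 21 */

static void swap_long(long *a, long *b) { long t = *a; *a = *b; *b = t; }

/* Iterative quicksort over half-open ranges [lo, hi).  Pushing only
 * the larger partition and looping on the smaller bounds the explicit
 * stack at about log2(n) entries. */
void quicksort(long *v, size_t n) {
    struct { size_t lo, hi; } stack[STACK_MAX];
    int top = 0;
    size_t lo = 0, hi = n;
    for (;;) {
        while (hi - lo > 1) {
            /* Lomuto partition around the last element */
            long pivot = v[hi - 1];
            size_t i = lo;
            for (size_t j = lo; j < hi - 1; j++)
                if (v[j] < pivot)
                    swap_long(&v[i++], &v[j]);
            swap_long(&v[i], &v[hi - 1]);
            assert(top < STACK_MAX);
            /* left part is [lo, i), right part is [i+1, hi):
             * push the larger one, continue with the smaller */
            if (i - lo < hi - (i + 1)) {
                stack[top].lo = i + 1; stack[top].hi = hi; top++;
                hi = i;
            } else {
                stack[top].lo = lo; stack[top].hi = i; top++;
                lo = i + 1;
            }
        }
        if (top == 0) break;
        top--;
        lo = stack[top].lo; hi = stack[top].hi;
    }
}
```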

It should be possible to implement nearly every in-place sorting algorithm within this restriction. E.g.:

  • quicksort,
  • heap sort,
  • insertion sort,
  • bubble sort (not recommended)

and others (an in-place variant of merge sort should also be possible).

MrSmith42

Two things matter here: space complexity and time complexity. Since your question specifically puts constraints on space, I would say it is better to approach the problem with the algorithms that have the best worst-case space complexity. Those are:

  1. HeapSort
  2. SmoothSort
  3. BubbleSort
  4. InsertionSort
  5. SelectionSort

If performance is a concern in your application, then among the above, HeapSort and SmoothSort might give the best performance.

MergeSort might not be applicable in this scenario due to its space complexity.

choppe

Yes, there is an issue: merge sort has a linear space complexity. This means that it needs an amount of memory proportional to the number of entries you want to sort.

If your transactions are already in memory, then you may want an in situ (in-place) algorithm like quicksort or heapsort. Otherwise, you have to use an algorithm that works directly on disk, as suggested by @YoungHobbit.
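The disk-based route is an external merge sort: sort small chunks that fit in memory, then repeatedly merge sorted runs. As a hedged sketch, here is its core merge step in C; the text one-value-per-line file format is an assumption for illustration.

```c
#include <stdio.h>

/* Merge two files of ascending long values (text, one per line) into
 * a third, using O(1) memory regardless of file sizes.  This is the
 * core step of an external merge sort, which sorts data that does
 * not fit in RAM. */
void merge_files(FILE *a, FILE *b, FILE *out) {
    long x, y;
    int ok_a = (fscanf(a, "%ld", &x) == 1);
    int ok_b = (fscanf(b, "%ld", &y) == 1);
    while (ok_a && ok_b) {
        if (x <= y) {
            fprintf(out, "%ld\n", x);
            ok_a = (fscanf(a, "%ld", &x) == 1);
        } else {
            fprintf(out, "%ld\n", y);
            ok_b = (fscanf(b, "%ld", &y) == 1);
        }
    }
    /* drain whichever input still has values */
    while (ok_a) { fprintf(out, "%ld\n", x); ok_a = (fscanf(a, "%ld", &x) == 1); }
    while (ok_b) { fprintf(out, "%ld\n", y); ok_b = (fscanf(b, "%ld", &y) == 1); }
}
```

Only one value from each input is held in memory at a time, so the merge itself fits easily in a 4 KB budget; the cost moves to disk I/O instead.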

SimoV8
  • I didn't know it was possible, I just read something on it. However, it seems to be only a theoretical result and it's seldom used in practice, so my answer should still be valid. Tell me if I'm wrong. – SimoV8 Jan 28 '16 at 09:27
  • Quicksort is better in almost all cases; the only advantage of merge sort over quicksort is the guaranteed asymptotic worst case O(n log n), and it is rarely used in practice to begin with. Should you elect to do merge sort in place for constant space complexity, your time complexity will take a significant hit by a constant factor (still maintaining O(n log n)) because of all the swaps you have to do. I am not arguing that in-place merge sort is the ideal algorithm for this situation. My point is that in our case the space complexity is a property of the IMPLEMENTATION, not of the algorithm itself. – ForeverStudent Jan 29 '16 at 15:44

That is the problem with embedded systems. They have limited memory to work with.

My answer is 2 part.

1. Best sort in algorithmic perspective

Hands down, there is no point in even talking about bubble, insertion, or selection sorts: although they are fine memory-wise, they are not efficient performance-wise on 2,000,000 entries.

Following are some advanced sorting algorithms with their worst-case time and space complexities (time ---- space):

Quick sort, O(n^2) ---- O(log n)

Merge sort, O(n*log(n)) ---- O(n)

Tim sort, O(n*log(n)) ---- O(n)

Heap sort, O(n*log(n)) ---- O(1)

Shell sort, O(n*log(n)^2) ---- O(1)

Bucket sort, O(n^2) ---- O(n)

Radix sort, O(n*k) ---- O(n+k)

So since you need to save memory and speed up processing, I believe heap sort is the best alternative here: even in the worst case it operates in O(n*log(n)) time with O(1) extra space.
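In support of that conclusion, here is a hedged sketch of heapsort in C: it sorts in place with no recursion at all, so it needs neither an extra array nor a growing call stack. The `long` element type is an assumption for illustration.

```c
#include <stddef.h>

/* Sift the element at index root down into the max-heap v[0..end-1]. */
static void sift_down(long *v, size_t root, size_t end) {
    for (;;) {
        size_t child = 2 * root + 1;
        if (child >= end)
            return;
        if (child + 1 < end && v[child + 1] > v[child])
            child++;                       /* pick the larger child */
        if (v[root] >= v[child])
            return;                        /* heap property restored */
        long t = v[root]; v[root] = v[child]; v[child] = t;
        root = child;
    }
}

/* In-place heapsort: O(n log n) worst-case time, O(1) extra space. */
void heap_sort(long *v, size_t n) {
    if (n < 2)
        return;
    /* build a max-heap bottom-up */
    for (size_t i = n / 2; i-- > 0; )
        sift_down(v, i, n);
    /* repeatedly move the maximum to the end and shrink the heap */
    for (size_t end = n - 1; end > 0; end--) {
        long t = v[0]; v[0] = v[end]; v[end] = t;
        sift_down(v, 0, end);
    }
}
```

Because the loop structure is fully iterative, the memory use is the same for 4 elements or 2,000,000, which is exactly what a 4 KB budget demands.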

2. Performance wise

Since high performance is critical in this scenario, you should also consider code optimization, using EEPROM, and expanding the memory.

Kalpa Gunarathna