This was a recent interview question: design a data structure that supports insertion, deletion, and get-random, each in O(1) time. It can be a basic data structure such as an array, a modification of a basic data structure, or a combination of basic data structures.
-
Some thoughts on what might be the solution? :) Where are you having problems? – Jakub Kotowski Feb 28 '14 at 00:17
-
@jkbkot I failed to give the correct answer during the interview. I tried to combine a hash table and a linked list. According to the comments from the interviewer, it is not a good enough answer. However, thinking about combining different data structures is in the right direction. – feliciafay Feb 28 '14 at 00:23
-
(Good) interviewers are usually more interested in the way you think about the problem than in the exact solution. For this particular problem you could, for example, ask what the maximum number of elements is. If there is one, then the answer would be an array. If there's none, you start thinking about more complex solutions. If there's no limit, it might not have a solution in O(1); you might go for good average or amortized complexity, e.g. by doubling the size of the array, etc. – Jakub Kotowski Feb 28 '14 at 00:39
-
Possible duplicate of [Data structure: insert, remove, contains, get random element, all at O(1)](http://stackoverflow.com/questions/5682218/data-structure-insert-remove-contains-get-random-element-all-at-o1) – craftsmannadeem Apr 19 '16 at 15:56
2 Answers
Combine an array with a hash-map of element to array index.
Insertion can be done by appending to the array and adding to the hash-map.
Deletion can be done by looking up the element's array index in the hash-map and removing that entry, swapping the last array element into that index, updating the swapped element's index in the hash-map accordingly, and decreasing the array size by one (removing the last element).
Get random can be done by returning a random index from the array.
All operations take O(1).
Well, in reality it's amortised (because of array resizing) and expected (because of hash collisions) O(1), but close enough.
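A minimal sketch of this answer in Python (names like `RandomizedSet` are my own; Python's `dict` plays the role of the hash-map and `list` the role of the dynamic array):

```python
import random

class RandomizedSet:
    """Array + hash-map: amortised expected O(1) insert, remove, get_random."""

    def __init__(self):
        self.items = []   # the elements themselves
        self.index = {}   # element -> its position in self.items

    def insert(self, val):
        if val in self.index:
            return False
        self.index[val] = len(self.items)
        self.items.append(val)        # append to the array
        return True

    def remove(self, val):
        if val not in self.index:
            return False
        i = self.index.pop(val)       # position of the element to delete
        last = self.items.pop()       # remove the last element
        if i < len(self.items):       # deleted element wasn't the last one:
            self.items[i] = last      #   move the last element into the hole
            self.index[last] = i      #   and update its recorded index
        return True

    def get_random(self):
        return random.choice(self.items)  # uniform random array index
```

The swap-with-last trick in `remove` is what keeps the array contiguous without an O(n) shift.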

-
What about collisions? And I'm not sure you can just neglect the complexity of increasing and decreasing an array size. – Jakub Kotowski Feb 28 '14 at 00:50
-
That's the most elegant solution! I wish I had answered it that way. Thank you very much:) – feliciafay Feb 28 '14 at 00:51
-
@jkbkot Sure, it's not truly O(1) (it's amortised expected O(1)), but this is most likely the answer they were looking for anyway. – Bernhard Barker Feb 28 '14 at 01:17
A radix tree would work. See http://en.wikipedia.org/wiki/Radix_tree. Insertion and deletion are O(k) where k is the maximum length of the keys. If all the keys are the same length (e.g., all pointers), then k is a constant so the running time is O(1).
In order to implement get random, maintain a record of the total number of leaves in each subtree (O(k)). The total number of leaves in tree is recorded at the root. To pick one at random, generate a random integer to represent the index of the element to pick. Recursively scan down the tree, always following the branch that contains the element you picked. You always know which branch to choose because you know how many leaves can be reached from each subtree. The height of the tree is no more than k, so this is O(k), or O(1) when k is constant.
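A simplified sketch of this idea in Python, using an uncompressed binary trie over fixed-length `K`-bit integer keys rather than a full radix tree (a radix tree compresses chains of single-child nodes, but the per-subtree leaf counting for `get_random` works the same way; all names here are my own):

```python
import random

K = 8  # fixed key length in bits, so O(K) = O(1)

class Node:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = [None, None]  # branch on each bit of the key
        self.count = 0                # number of leaves in this subtree

class BitTrie:
    def __init__(self):
        self.root = Node()

    def _path(self, key, create):
        """Walk from the root along the bits of key; return the node path."""
        path, node = [self.root], self.root
        for i in range(K):
            b = (key >> (K - 1 - i)) & 1
            if node.children[b] is None:
                if not create:
                    return None
                node.children[b] = Node()
            node = node.children[b]
            path.append(node)
        return path

    def insert(self, key):
        path = self._path(key, create=True)
        if path[-1].count:            # key already present
            return False
        for n in path:                # bump leaf counts along the path
            n.count += 1
        return True

    def delete(self, key):
        path = self._path(key, create=False)
        if path is None or path[-1].count == 0:
            return False
        for n in path:                # drop leaf counts along the path
            n.count -= 1
        return True

    def get_random(self):
        r = random.randrange(self.root.count)  # index of the leaf to pick
        node, key = self.root, 0
        for _ in range(K):
            left = node.children[0]
            lc = left.count if left else 0
            if r < lc:                         # target leaf is in left subtree
                node, key = left, key << 1
            else:                              # skip the left subtree's leaves
                r -= lc
                node, key = node.children[1], (key << 1) | 1
        return key
```

At each level, comparing the random index against the left subtree's leaf count tells you which branch holds the chosen element, which is exactly the recursive scan described above.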

-
This might be more suited to a real-time system, where the accepted answer (which relies on amortisation) could be problematic. – Callum Feb 28 '14 at 00:56