For this, I'll use the time complexities from the Python site: https://wiki.python.org/moin/TimeComplexity
Using append:
def get_priorities(words):
    priorities = []
    for idx, word in enumerate(words):
        ...
        priorities.append((word, priority))  # append a single (word, priority) tuple
    return set(priorities)
This saves you the cost of pre-allocating the array of size 1 * len(words), which takes O(nk) time in this case, but you trade it for the cost of the appends, which according to the Python docs are O(1) on average. That should give a time complexity of O(n), where n is the length of words, for your for loop.
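As a rough sanity check (my own sketch, not part of the original code), a small timeit micro-benchmark comparing appending n items against filling a pre-allocated list shows both growing roughly linearly, since append is amortized O(1):

import timeit

def build_by_append(n):
    # grow the list one element at a time; each append is amortized O(1)
    result = []
    for i in range(n):
        result.append((i, i % 7))   # dummy (word, priority)-style pairs
    return result

def build_preallocated(n):
    # allocate the full list up front, then fill the slots by index
    result = [None] * n
    for i in range(n):
        result[i] = (i, i % 7)
    return result

for n in (10_000, 100_000, 1_000_000):
    t_append = timeit.timeit(lambda: build_by_append(n), number=3)
    t_prealloc = timeit.timeit(lambda: build_preallocated(n), number=3)
    print(f"n={n:>9}: append={t_append:.3f}s  prealloc={t_prealloc:.3f}s")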
On the other hand, you can use yield to save memory and avoid re-reading, while maintaining the same O(n) complexity (see "What does the "yield" keyword do in Python?"):
def get_priorities(words):
    for idx, word in enumerate(words):
        ...
        yield (word, priority)
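As a usage sketch (assuming the elided ... above assigns priority for each word; the example input is my own), the generator can be consumed lazily or handed straight to set():

words = ["alpha", "beta", "alpha"]            # my own example input
for word, priority in get_priorities(words):  # lazy: one pair at a time
    print(word, priority)

unique = set(get_priorities(words))           # or deduplicate in a single pass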
I would argue for the second approach because you don't need to allocate memory for the priorities list or pay the cost of the appends. However, since you're using set, I take it there are duplicates you're trying to eliminate? Building the set would then add another n to your running time, so O(2n) in the first case and O(n) using the yield, though O(2n) is essentially O(n) anyway. Regardless, the cost of allocating priorities in the first case is O(1) if you allocate it as an empty list.
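To make the comparison concrete, here is a self-contained sketch of both versions with a hypothetical priority computation (len(word) stands in for the elided ...); both produce the same set, the generator version just skips the intermediate list:

def get_priorities_list(words):
    priorities = []
    for idx, word in enumerate(words):
        priority = len(word)                  # hypothetical stand-in for ...
        priorities.append((word, priority))
    return set(priorities)

def get_priorities_gen(words):
    for idx, word in enumerate(words):
        priority = len(word)                  # hypothetical stand-in for ...
        yield (word, priority)

words = ["red", "green", "blue", "red"]
assert get_priorities_list(words) == set(get_priorities_gen(words))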