I was reading about Morris inorder traversal (GeeksforGeeks, another SO question).
It seems that the advantage of Morris traversal, as opposed to recursion or iteration with a stack, is that the space complexity is O(1) instead of O(h), where h is the height of the tree.
But when I look at the algorithm, it seems that up to h new pointers can exist at the same time. Consider this spindly tree as a simple example:
      1
     /
    2
   /
  3
 /
4
By the time you reach node 4, every node except the root has a new pointer to its parent stored as its right child, which is O(h) new pointers.
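For reference, here is a minimal sketch of the algorithm as I understand it (in TypeScript; the node shape and function name are my own, not taken from the linked articles), with the lines that create and remove the temporary threads marked:

```typescript
interface TreeNode {
  value: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

function morrisInorder(root: TreeNode | null): number[] {
  const result: number[] = [];
  let current = root;
  while (current !== null) {
    if (current.left === null) {
      // No left subtree: visit, then follow right (which may be a thread).
      result.push(current.value);
      current = current.right;
    } else {
      // Find the inorder predecessor: rightmost node of the left subtree.
      let pred = current.left;
      while (pred.right !== null && pred.right !== current) {
        pred = pred.right;
      }
      if (pred.right === null) {
        // Create a temporary thread back to the current node, then go left.
        pred.right = current;
        current = current.left;
      } else {
        // Thread already followed: remove it, visit, and move right.
        pred.right = null;
        result.push(current.value);
        current = current.right;
      }
    }
  }
  return result;
}
```

On the spindly tree above, by the time the traversal reaches node 4 it has executed the `pred.right = current` line three times, which is where the "up to h new pointers" come from.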
Is the reason that in most programming languages a null pointer takes the same space as a non-null pointer, so changing the right pointer doesn't take any new space?
If that's the reason, I find it strange for a language-agnostic algorithm to rely on implementation details for its complexity. It looks like a convenient trick more than anything.
For example, the space complexity would be wrong in JavaScript, where changing {left: node2} to {left: node2, right: node1} would take more space.
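As a rough sketch of what I mean (TypeScript-style, my own example): a node whose right slot always exists can be re-pointed for free, whereas adding a brand-new right property to a plain object presumably takes extra storage:

```typescript
interface FixedNode {
  value: number;
  left: FixedNode | null;
  right: FixedNode | null; // the slot exists even while it holds null
}

const node1: FixedNode = { value: 1, left: null, right: null };
const node2: FixedNode = { value: 2, left: null, right: null };
node1.left = node2;
node2.right = node1; // reuses the existing slot, so no new storage

// versus starting without a right property and adding it later:
const bare: { left: FixedNode; right?: FixedNode } = { left: node2 };
bare.right = node1; // a genuinely new property on the object
```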