After calculating the hash of an immutable object, say a tuple of int and string elements, does the Python interpreter keep that value in memory, or does it recalculate it each time? If I have code that repeatedly checks whether an object belongs to some collection, such as a set, do I have to take care of caching these hashes myself, or will the interpreter do it for me?
x = ("a", 1)
assert x in {("a", 1), ("b", 2)} # first time hash(x) is calculated
assert x in {("c", 3), ("d", 4)} # will python interpreter calculate hash(x) again?
Or let me rephrase the question. The hash method built into Python's native tuple type has O(n) time complexity, where n is the number of elements in the tuple. If we write code that calls this method m times, it will have O(n*m) time complexity. The question is whether Python optimizes this case internally by caching the hash value, so that in practice it is reduced back to O(n).
n = 999_999_999 # big number
x = tuple(i for i in range(n)) # very big tuple, takes a long time to compute its hash
m = 999_999_999 # another big number
for _ in range(m): # lots of iterations
    hash(x)
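Here is a rough timing check I could run to test this empirically, using a much smaller tuple so it actually fits in memory (the sizes and call counts are just numbers I picked):

import timeit

big = tuple(range(1_000_000))  # a single hash pass over this is clearly O(n)
small = ("a", 1)               # tiny tuple for comparison

# Time many repeated hash() calls on the same two objects.
t_big = timeit.timeit(lambda: hash(big), number=1_000)
t_small = timeit.timeit(lambda: hash(small), number=1_000)

# If the interpreter cached the hash after the first call, t_big should be
# close to t_small; if t_big scales with len(big), the hash is being
# recomputed on every call.
print(f"big tuple:   {t_big:.4f} s")
print(f"small tuple: {t_small:.4f} s")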