I am reading a big file in chunks, like this:
```python
def gen_data(data):
    # chunk_sz is a module-level constant defined elsewhere
    for i in range(0, len(data), chunk_sz):
        yield data[i: i + chunk_sz]
```
If I instead read the length into a variable, something like this:
```python
def gen_data(data):
    length_of_file = len(data)  # call len() only once, before the loop
    for i in range(0, length_of_file, chunk_sz):
        yield data[i: i + chunk_sz]
```
What performance improvement would this give for big files? I tested with small ones but didn't see any difference.
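For reference, a minimal sketch of the kind of timing test I ran; the in-memory byte string stands in for the real file, and the data size, chunk size, and repeat count are arbitrary placeholders:

```python
import timeit

chunk_sz = 4096
data = b"x" * (50 * 1024 * 1024)  # 50 MB of dummy data

def gen_data_len(data):
    # len(data) written directly in the range() call
    for i in range(0, len(data), chunk_sz):
        yield data[i: i + chunk_sz]

def gen_data_var(data):
    # length hoisted into a local variable first
    length_of_file = len(data)
    for i in range(0, length_of_file, chunk_sz):
        yield data[i: i + chunk_sz]

# Consume each generator fully and time it
print(timeit.timeit(lambda: sum(1 for _ in gen_data_len(data)), number=10))
print(timeit.timeit(lambda: sum(1 for _ in gen_data_var(data)), number=10))
```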
P.S. I come from a C/C++ background, where computing a value inside the condition of a while or for loop is considered bad practice because it is re-executed on every iteration.
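In case it helps frame the question: a small sketch for checking whether Python re-evaluates the loop bound the way a C for condition would; `noisy_len` is a hypothetical instrumented wrapper, not part of my real code:

```python
def noisy_len(seq):
    # Instrumented len() so we can see how often it is evaluated
    print("len() called")
    return len(seq)

data = b"abcdef" * 10
chunk_sz = 6

for i in range(0, noisy_len(data), chunk_sz):
    pass  # "len() called" prints exactly once, when range() is built
```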