Here is my code.py:
import numpy as np
import gc

def main():
    var_1, var_2, var_3 = np.random.normal(0, 1, (1, 3))[0]
    var_4, var_5, var_6 = np.random.normal(0, 1, (1, 3))[0]
    var_7, var_8, var_9 = np.random.normal(0, 1, (1, 3))[0]
    var_10, var_11, var_12 = np.random.normal(0, 1, (1, 3))[0]
    List = [var_1, var_2, var_3, var_4, var_5, var_6,
            var_7, var_8, var_9, var_10, var_11, var_12]
    with open('record.csv', 'a') as f:
        for i in List:
            f.write('{},'.format(str(i)))
        f.write('\n')
    del var_1, var_2, var_3, var_4, var_5, var_6, \
        var_7, var_8, var_9, var_10, var_11, var_12
    del f, List
    gc.collect()
    # This code is just for demonstration. In the actual situation,
    # `data` is necessary for main(), so don't use `del data`.
    data = np.random.normal(0, 1, (1000, 3))

total = 100 * 100 * 100
for k in range(total):
    print(k + 1, total)
    main()
Theoretically, the code above should use only a fixed amount of memory, since I've deleted all variables and collected all garbage. However, when I ran it with `python code.py` in one terminal and watched the memory usage via `htop` in another terminal, the memory usage kept increasing from 1.79G/7.76G to 1.80G/7.76G, then to 1.81G/7.76G, and so on until the for-loop was over.
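For reference, here is a minimal sketch of how the Python-level allocations could also be tracked alongside htop, using the standard-library tracemalloc module; the 10,000-iteration reporting interval is just something I picked for illustration and is not part of code.py:

import tracemalloc

def main():
    ...  # same body as in code.py above

tracemalloc.start()
total = 100 * 100 * 100
for k in range(total):
    main()
    if (k + 1) % 10000 == 0:
        # Both values are bytes held by live Python objects; if `current`
        # stays flat while htop's resident size keeps growing, the extra
        # memory is not being held by Python objects themselves.
        current, peak = tracemalloc.get_traced_memory()
        print(k + 1, total, current, peak)
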
How can I modify the code to make it keep running without continuously consuming more memory?