
I am running a Python script on a new laptop. The script, put simply, loops over .fits files: it opens each file, plots part of it, lets me select points from the plot, and saves the output in a .dat file.

import numpy as np
from matplotlib import pyplot as plt
from astropy.io import fits,ascii
from glob import glob
import gc

list_files = np.sort(glob('*.fits'))    ### around 90 objects
list_ranges = ascii.read('ranges.dat')  ### around 1000 objects
for i in range(len(list_files)):
    output = open(list_files[i]+'.dat', 'w')
    with fits.open(list_files[i]) as single_file:
        x = single_file[0].data[0]
        y = single_file[0].data[1]
    for j in range(len(list_ranges)):
        # keep only the points within +/-3 of the current range
        x_single = x[((x < list_ranges[j]+3) & (x > list_ranges[j]-3))]
        y_single = y[((x < list_ranges[j]+3) & (x > list_ranges[j]-3))]
        fig, ax = plt.subplots(figsize=(18, 8))
        ax.plot(x, y)
        # click two points on the plot; their x-coordinates go to the output file
        pts = np.asarray(plt.ginput(2, timeout=-1))
        output.write('%.2f %.2f\n' % (pts[0, 0], pts[1, 0]))
        plt.close()
        del x_single, y_single, pts
        gc.collect()
    output.close()
    del single_file, x, y
    gc.collect()

Now, this kind of script worked perfectly before on other devices, but now everything crashes after 3-4 iterations of the first (outer) loop: sometimes the script is simply killed, sometimes the terminal closes itself. I inserted os.system('free -h') in the script to check the memory; at the start it reports:

               total        used        free      shared  buff/cache   available
Mem:            15Gi       2.6Gi       8.5Gi       754Mi       4.2Gi        11Gi
Swap:          2.0Gi          0B       2.0Gi

After the third object, the situation is this:

               total        used        free      shared  buff/cache   available
Mem:            15Gi       5.5Gi       175Mi       7.8Gi       9.7Gi       1.7Gi
Swap:          2.0Gi       3.0Mi       2.0Gi
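
The same check can also be done per-process from inside Python using only the standard library (a minimal sketch; log_rss is just an illustrative helper, and on Linux ru_maxrss is reported in kilobytes):

import resource

def log_rss(tag):
    # peak resident set size of this process; kilobytes on Linux
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print('%s: peak RSS %.1f MiB' % (tag, peak_kb / 1024.0))

# e.g. inside the loops:
# log_rss('object %d, range %d' % (i, j))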

Last, I pushed it until it crashed and checked with dmesg, and this is the output:

[ 5783.416916] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user@1000.service/app.slice/app-org.gnome.Terminal.slice/vte-spawn-94017a15-e67f-4443-87c5-a39220aa3d9c.scope,task=python3,pid=9500,uid=1000
[ 5783.416977] Out of memory: Killed process 9500 (python3) total-vm:9479428kB, anon-rss:4419828kB, file-rss:0kB, shmem-rss:2580kB, UID:1000 pgtables:14068kB oom_score_adj:0

Thanks in advance.

  • Looks like you're not calling `single_file.close()`. I would think the `del` might do that automatically, but it's possible it doesn't. – Dan Getz Nov 02 '22 at 12:48
  • Are you able to post the actual input, so that you have a Minimal Reproducible Example? – Tim Boddy Nov 03 '22 at 15:43
  • I changed the way I open and close single_file, but nothing changed. Unfortunately I cannot post the input, they are still private data. – Erasmo Trentin Nov 04 '22 at 07:59
  • It seems that the memory problem is connected with the ax.plot(x,y). You can try to add plt.clf() before the plt.close() statement and that should do the trick (see the sketch below; without test data it is difficult to check it myself). – kithuto Dec 16 '22 at 12:21
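
A minimal sketch of the cleanup that last comment points at: either add plt.clf() before plt.close() as suggested, or, along the same lines, create the figure once outside the inner loop and reuse it. The fragment below assumes x, y, output, and list_ranges exactly as in the posted script:

fig, ax = plt.subplots(figsize=(18, 8))  # create one figure outside the inner loop
for j in range(len(list_ranges)):
    ax.clear()                           # drop the previous lines instead of making a new figure
    ax.plot(x, y)
    pts = np.asarray(fig.ginput(2, timeout=-1))
    output.write('%.2f %.2f\n' % (pts[0, 0], pts[1, 0]))
plt.close(fig)                           # release the figure and its canvas once at the end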
