
I'm new to Slurm. Below, I want to execute a Python file that requires 92.3 GiB of memory. I requested 120 GB, but my code still returns a memory error.

submit_venv.sh

#!/bin/bash

#SBATCH --account=melchua
#SBATCH --mem=120GB
#SBATCH --time=2:00:00

module load python/3.8.2
python3 1.methylation_data_processing.py

I run the script using ./submit_venv.sh

Traceback:

  File "1.methylation_data_processing.py", line 49, in <module>
    meth_clin = pd.concat([gene_symbol, meth_clin])  # add gene_symbol to dataframe
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 307, in concat
    return op.get_result()
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 532, in get_result
    new_data = concatenate_managers(
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 222, in concatenate_managers
    values = _concatenate_join_units(join_units, concat_axis, copy=copy)
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 486, in _concatenate_join_units
    to_concat = [
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 487, in <listcomp>
    ju.get_reindexed_values(empty_dtype=empty_dtype, upcasted_na=upcasted_na)
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 441, in get_reindexed_values
    missing_arr = np.empty(self.shape, dtype=empty_dtype)
numpy.core._exceptions.MemoryError: Unable to allocate 92.3 GiB for an array with shape (111331, 111332) and data type object
melolilili

2 Answers


Assuming that your slurm.conf file correctly lists RAM as a consumable resource (for example, SelectTypeParameters=CR_CPU_Memory), the issue is probably not Slurm-related; most likely your OS is refusing to allocate that much memory to a single task. There was a similar question here: Unable to allocate array with shape and data type.
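For what it's worth, the 92.3 GiB figure in the traceback is exactly what an object-dtype array of that shape costs, so the concat really is asking the OS for that much in a single allocation, on top of whatever the two frames already occupy:

```python
# The traceback reports an object-dtype array of shape (111331, 111332).
# On a 64-bit build, object dtype stores an 8-byte pointer per cell, so the
# requested allocation works out to the 92.3 GiB quoted in the error:
rows, cols = 111331, 111332
nbytes = rows * cols * 8
print(f"{nbytes / 2**30:.1f} GiB")  # -> 92.3 GiB
```

Incidentally, a concat that produces a nearly square object array can be a sign that the two frames have misaligned indexes or the wrong axis argument; that is only a guess from the shape, not something the question confirms.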


The sbatch man page's entry for --mem suggests your --mem=120GB should be --mem=120G, without the B. Alternatively, to grant the job access to all of the memory on the job's nodes, try --mem=0.
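A minimal corrected submission script, reusing the account name and module version from the question. Note that #SBATCH directives only take effect when the script is submitted with sbatch submit_venv.sh; running ./submit_venv.sh directly executes it as an ordinary shell script and the directives (including --mem) are silently ignored:

```shell
#!/bin/bash
#SBATCH --account=melchua    # account name taken from the question
#SBATCH --mem=120G           # "G" suffix, not "GB", per the sbatch man page
#SBATCH --time=2:00:00

module load python/3.8.2
python3 1.methylation_data_processing.py
```

Submit it with: sbatch submit_venv.sh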