
I have access to Human Connectome Project data with more than 1000 .nii files. In order to analyze them, I need to load them as NumPy arrays, which takes up a lot of memory. For example, consider the following code:

import nibabel as nib
epi_image = nib.load('rfMRI_REST1_LR.nii')
epi_image.shape
out: (91, 109, 91, 1200)
epi_data = epi_image.get_data()

The last line gives a 4D tensor whose last axis is time. Since epi_data is a NumPy array, we can convert it to a tensor and use it to train a neural network, but to do so we need to load the whole file, which is 5 GB, and this is just one of 1000. However, if I could break these 1200 time samples into 1200 separate .nii files, I would be able to load only the ones of interest. Is there a way to extract 1200 .nii files out of the original file?

Saeed

1 Answer


The way to extract 1200 .nii files out of the original file is to save each frame separately:

data = epi_image.get_data()  # load the 4D array once, not inside the loop
for idx in range(1200):
    epi_data = data[:, :, :, idx]
    nimg = nib.Nifti1Image(epi_data, affine=epi_image.affine, header=epi_image.header)
    nimg.to_filename("file%d.nii" % idx)
Bilal