I have access to Human Connectome Project data: more than 1000 .nii
files. To analyze them I need to load each one as a NumPy array, which takes a lot of memory. For example, consider the following code:
import nibabel as nib
epi_image = nib.load('rfMRI_REST1_LR.nii')
epi_image.shape
out: (91, 109, 91, 1200)
epi_data = epi_image.get_fdata()  # get_data() is deprecated in recent nibabel
The last line gives a 4-D array whose last axis is time. Since epi_data
is a NumPy array, I can convert it to a tensor and train a neural network, but doing so requires loading the whole file, which is about 5 GB,
and this is just one of over 1000 files. However, if I could break these 1200 time samples into 1200 separate .nii files,
I would be able to load only the ones that are of interest.
Is there a way to extract 1200 .nii files
out of the original file?