This is my first StackOverflow post after six years of great experiences with this site. Thank you all for the help you have offered to me and to others.
This problem, however, baffles me completely and I would like to ask for assistance from the community.
I have a 175,759,360-byte raw binary file, stored in Fortran ('F') order, that I need to read into memory and analyze with Python/NumPy. The goal is to obtain a (613, 640, 224) numpy array of dtype np.uint16.
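For reference, that file size is consistent with the target shape and dtype: 613 * 640 * 224 elements * 2 bytes per np.uint16 = 175,759,360 bytes. A minimal sanity check (using the same placeholder filename as in the snippets below) confirms this:

import os
import numpy as np

shape = (613, 640, 224)
expected = np.prod(shape) * np.dtype(np.uint16).itemsize   # 175,759,360 bytes
print(expected == os.path.getsize('filename.raw'))         # True for my file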
I have succeeded in similar situations multiple times before, but in my latest project I have run into a problem that I am unable to resolve.
Previously I was able to do this in two ways. The first uses np.memmap:
np.memmap(rawFileName, dtype=np.uint16, mode='r', offset=0, shape=(613, 640, 224), order='F')
The second uses a plain np.fromfile read followed by a reshape:
with open('filename.raw', "rb") as rawFile:
    data = np.fromfile(rawFile, dtype=np.uint16)   # read the whole file as uint16
data = data.reshape((613, 640, 224), order="F")    # then reshape in Fortran order
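A diagnostic variant of the same approach, with the read and reshape separated by an element-count check (to rule out a truncated or padded file), might help narrow down which step actually fails:

import numpy as np

with open('filename.raw', "rb") as rawFile:
    data = np.fromfile(rawFile, dtype=np.uint16)
print(data.size)                                   # expected 613 * 640 * 224 = 87,879,680 elements
data = data.reshape((613, 640, 224), order="F")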
The np.memmap approach now fails with a WinError 8:
OSError: [WinError 8] Not enough memory resources are available to process this command
and the np.fromfile approach fails with a reference count error:
*** Reference count error detected:
an attempt was made to deallocate 4 (H) ***
I would appreciate any help in resolving this. The amount of physical memory should not be the problem, since I have tried the code on machines with both 8 GB and 32 GB of RAM. I am using Windows 10 with Python 3.6.4 and numpy 1.14.2.
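For context, a small check along these lines (just standard-library calls) reports whether the interpreter itself is a 32-bit or 64-bit build, since that limits the addressable memory regardless of installed RAM:

import platform
import sys

print(platform.architecture()[0])   # '32bit' or '64bit' interpreter build
print(sys.maxsize > 2**32)          # True only on a 64-bit build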
Thank you in advance.