I am writing Fortran output data for an NxMxL matrix in binary format as follows:
open(94, file = 'mean_flow_sp.dat', status = 'replace', action = 'write', form = 'unformatted')
do k = 0, L-1
   do j = 0, M-1
      do i = 0, N-1
         write(94) u(i,j,k), v(i,j,k), w(i,j,k)
      enddo
   enddo
enddo
close(94)
where u, v and w are single-precision arrays allocated as, e.g., u(0:N-1,0:M-1,0:L-1). I then read the output file in Python as follows:
import numpy as np

f = open('mean_flow_sp.dat', 'rb')
data = np.fromfile(file=f, dtype=np.single).reshape(N, M, L)
f.close()
The first odd thing I notice is that the Fortran output file is 10,066,329,600 bytes long (this is with L = 640, M = 512, N = 1536). So the question is: why is this file not 1536*512*640 * 3 (variables) * 4 (bytes) = 6,039,797,760 bytes long?
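A quick size check along these lines (just a sketch; the only assumptions are the file name and the dimensions quoted above) reproduces those numbers:

import os

# Dimensions used in the Fortran run above (assumed here for the check).
N, M, L = 1536, 512, 640

# Raw payload the file "should" contain: 3 variables of 4 bytes per grid point.
expected = N * M * L * 3 * 4                   # 6,039,797,760 bytes
actual = os.path.getsize('mean_flow_sp.dat')   # 10,066,329,600 bytes here

print(expected, actual)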
Unsurprisingly, the Python script then throws an error when trying to reshape the data it has read, since the file does not contain NxMxL * 3 single-precision values.
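For what it's worth, reading the file without the reshape and simply counting the values (again only a sketch, with the same assumed dimensions) shows the mismatch directly:

import numpy as np

N, M, L = 1536, 512, 640
raw = np.fromfile('mean_flow_sp.dat', dtype=np.single)   # read everything, no reshape
print(raw.size, N * M * L * 3)                            # the two counts do not match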
Why is the output file so big?