
I am writing an NxMxL array of Fortran data to a binary file as follows:

open(94, file = 'mean_flow_sp.dat', status = 'replace', action = 'write', form = 'unformatted')
  do k = 0,L-1
    do j = 0,M-1
      do i = 0,N-1
        write(94) u(i,j,k), v(i,j,k), w(i,j,k)
      enddo
    enddo
  enddo
close(94)

where u, v, w are single-precision arrays allocated as e.g. u(0:N-1,0:M-1,0:L-1). Then I read the output file in Python as follows:

import numpy as np

f = open('mean_flow_sp.dat', 'rb')
data = np.fromfile(file=f, dtype=np.single).reshape(N, M, L)
f.close()

The first odd thing I notice is that the output Fortran file is 10,066,329,600 bytes long (this is using L = 640, M = 512, N = 1536). So the question is: why is this file not 1536*512*640 * 3 (variables) * 4 (bytes) = 6,039,797,760 bytes long?

Naturally, the Python script throws an error when trying to reshape the data it read, since the file does not contain N*M*L*3 single-precision values.

Why is the output file so big?
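For what it's worth, the discrepancy works out to exactly 8 extra bytes per write statement (all numbers below are taken from the question):

```python
# Bookkeeping for the observed file size.
N, M, L = 1536, 512, 640
records = N * M * L                  # one write statement per (i, j, k)
expected = records * 3 * 4           # three single-precision values per write
observed = 10_066_329_600            # actual size reported above

print(expected)                      # 6039797760
print(observed - expected)           # total overhead
print((observed - expected) // records)  # extra bytes per write: 8
```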

    Your compiler is probably adding header/footer data to each record written, which you are not accounting for. You could either search for other questions corresponding to your setup or look at using stream output. – francescalus May 01 '17 at 01:19
  • Thanks, I realized that a bit later and posted the answer. – b-fg May 01 '17 at 01:32
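For reference, the stream output mentioned in the comment above (Fortran 2003, `access='stream'`) writes no record markers. A minimal sketch, assuming the same arrays and keeping the question's interleaved per-point layout:

```fortran
open(94, file = 'mean_flow_sp.dat', status = 'replace', action = 'write', &
     access = 'stream', form = 'unformatted')
  do k = 0,L-1
    do j = 0,M-1
      do i = 0,N-1
        write(94) u(i,j,k), v(i,j,k), w(i,j,k)   ! raw bytes, no record markers
      enddo
    enddo
  enddo
close(94)
```

With stream access the file is exactly N*M*L*3*4 bytes and can be read directly with np.fromfile.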

1 Answer


OK, so I just realized that, as posted here, "Fortran compilers typically write the length of the record at the beginning and end of the record." With a 4-byte marker before and after each 12-byte record, every write statement costs 20 bytes, and 1536*512*640 * 20 = 10,066,329,600 bytes, so the size of the output file checks out.
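Assuming 4-byte record markers (this is compiler-dependent), the file can still be read in NumPy by describing one record with a structured dtype and discarding the markers. A sketch, with the function name my own:

```python
import numpy as np

def read_mean_flow(path, n, m, l):
    """Read the unformatted sequential file, skipping record markers."""
    # Each record: 4-byte leading length, three float32 values, 4-byte
    # trailing length (assumes 4-byte little-endian markers).
    rec = np.dtype([('head', '<i4'), ('uvw', '<f4', 3), ('tail', '<i4')])
    data = np.fromfile(path, dtype=rec)['uvw']
    # The innermost Fortran loop runs over i, so i varies fastest:
    # reshape in Fortran (column-major) order to recover (n, m, l) arrays.
    u = data[:, 0].reshape((n, m, l), order='F')
    v = data[:, 1].reshape((n, m, l), order='F')
    w = data[:, 2].reshape((n, m, l), order='F')
    return u, v, w
```

Note the `order='F'` in the reshape: a plain `.reshape(N, M, L)` (C order) would scramble the axes even once the markers are accounted for.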
