
I have a large N by N matrix containing real numbers, which has been decomposed into blocks using MPI. I am now trying to recompose this matrix and write it to a single file.

This topic (writing a matrix into a single txt file with mpi) covered a similar issue, but I got pretty confused by all the 'integer-to-string' conversion, etc. (I am not an expert!). I am using Fortran for my code, but I guess that even a C explanation should help. I have been reading tutorials on MPI-IO, but there are still a few things I do not understand. Here is the code I have been working on:

use mpi 
implicit none

! matrix dimensions
integer, parameter :: imax = 200
integer, parameter :: jmax = 100

! domain decomposition in each direction
integer, parameter :: iprocs = 3
integer, parameter :: jprocs = 3

! variables
integer :: i, j
integer, dimension(mpi_status_size) :: wstatus
integer :: ierr, proc_num, numprocs, fileno, localarray
integer :: loc_i, loc_j, ppp
integer :: istart, iend, jstart, jend
real, dimension(:,:), allocatable :: x

! initialize MPI
call mpi_init(ierr)
call mpi_comm_size(mpi_comm_world, numprocs, ierr)
call mpi_comm_rank(mpi_comm_world, proc_num, ierr)

! define the beginning and end of blocks
loc_j = proc_num/iprocs
loc_i = proc_num-loc_j*iprocs
ppp    = (imax+iprocs-1)/iprocs
istart = loc_i*ppp + 1
iend   = min((loc_i+1)*ppp, imax)
ppp    = (jmax+jprocs-1)/jprocs
jstart = loc_j*ppp + 1
jend   = min((loc_j+1)*ppp, jmax)

! write random data in each block
allocate(x(istart:iend,jstart:jend))
do j = jstart, jend
  do i = istart, iend
    x(i,j) = real(i + j)
  enddo
enddo

! create subarrays
call mpi_type_create_subarray( 2, [imax,jmax], [iend-istart+1,jend-jstart+1], &
                               [istart,jstart], mpi_order_fortran, mpi_real, localarray, ierr )
call mpi_type_commit( localarray, ierr )

! write to file
call mpi_file_open( mpi_comm_world, 'test.dat', IOR(MPI_mode_create,MPI_mode_wronly), &
                  mpi_info_null, fileno, ierr )
call mpi_file_set_view( fileno, 0, mpi_real, localarray, "native", mpi_info_null, ierr )
call mpi_file_write_all( fileno, x, (jend-jstart+1)*(iend-istart+1), MPI_real, wstatus, ierr )
call mpi_file_close( fileno, ierr )

! deallocate data
deallocate(x)

! finalize MPI
call mpi_finalize(ierr)    

I have been following this tutorial (PDF), but my compiler complains that there is no specific subroutine for the generic mpi_file_set_view. Did I do something wrong? Is the rest of the code ok?

Thank you very much for your help!!

Joachim

    Which implementation of MPI are you using? Is it possible that it was compiled without MPI I/O support enabled (that's an option in both MPICH and Open MPI if you compile it yourself)? – Wesley Bland Jan 09 '15 at 21:35
    It's interesting that the compiler is complaining about mpi_file_set_view, though. If MPI-IO support was somehow not enabled, it would complain about MPI_FILE_OPEN, MPI_FILE_CLOSE, and MPI_FILE_WRITE_ALL too, right? – Rob Latham Jan 11 '15 at 01:38

1 Answer


I would say that the easy way is to use a library designed to perform such operations efficiently: http://2decomp.org/mpiio.html

You can also look at their source code (files io.f90 and io_write_one.f90).
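For orientation, here is roughly what a write with that library looks like, following the usage shown on the linked page. This is only a minimal, untested sketch: the names decomp_2d, decomp_2d_io, decomp_2d_init, xsize, mytype and decomp_2d_write_one come from the 2DECOMP&FFT documentation, and since the library works on 3D arrays I assume the 2D matrix is stored as a slab with nz = 1 (so the processor grid is not split in that direction); check the details against the version you install.

program write_with_2decomp
  use mpi
  use decomp_2d
  use decomp_2d_io
  implicit none

  ! treat the 2D matrix as a 3D slab of thickness 1
  integer, parameter :: nx = 200, ny = 100, nz = 1
  ! processor grid: run with p_row*p_col MPI ranks; with nz = 1 keep p_col = 1
  integer, parameter :: p_row = 3, p_col = 1
  integer :: ierr
  real(mytype), allocatable, dimension(:,:,:) :: u1

  call MPI_INIT(ierr)
  call decomp_2d_init(nx, ny, nz, p_row, p_col)

  ! each rank owns an X-pencil of the global array; xsize(:) is its local extent
  allocate(u1(xsize(1), xsize(2), xsize(3)))
  u1 = 1.0_mytype

  ! collective MPI-IO write of the whole distributed array into a single file
  call decomp_2d_write_one(1, u1, 'u1.dat')

  call decomp_2d_finalize
  call MPI_FINALIZE(ierr)
end program write_with_2decomp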

In the source code, you will see a call to MPI_FILE_SET_SIZE that may be relevant for your case.
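Note that the size argument of MPI_FILE_SET_SIZE must also be an integer of kind MPI_OFFSET_KIND. A rough sketch, reusing the fileno handle and dimensions from your code and assuming a 4-byte default real:

integer(kind=MPI_OFFSET_KIND) :: filesize

! total file size in bytes: imax*jmax values, 4 bytes each for a default real
filesize = int(imax, MPI_OFFSET_KIND) * int(jmax, MPI_OFFSET_KIND) * 4_MPI_OFFSET_KIND
call MPI_FILE_SET_SIZE(fileno, filesize, ierr)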

EDIT: consider using "call MPI_File_Set_View(fhandle, 0_MPI_OFFSET_KIND, ...)". See the answer to MPI-IO: MPI_File_Set_View vs. MPI_File_Seek.
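Applied to the code in the question, the call would look something like this (disp is just an illustrative local variable; passing the literal 0_MPI_OFFSET_KIND directly works too):

integer(kind=MPI_OFFSET_KIND) :: disp

! the displacement must have kind MPI_OFFSET_KIND, not be a default-integer literal
disp = 0_MPI_OFFSET_KIND
call MPI_FILE_SET_VIEW(fileno, disp, MPI_REAL, localarray, "native", MPI_INFO_NULL, ierr)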

    Thank you very much for your answers everyone! You were right, there was a problem because the offset was an integer instead of an integer(kind=mpi_offset_kind)! I have modified a few things and the code is working just fine now. – Touloudou Jan 14 '15 at 20:02