
I have a 2D array of integers and I want to send its rows to separate processes. I assume that the number of rows (M = 5) is not evenly divisible by the number of processes (size = 4), so in my case process 0 will obtain an additional row. The size of the 2D array A is MxN (5x10).

Here is my code:

PROGRAM SCATTERV_MATRIX

INCLUDE 'mpif.h'

integer :: rank, size, ierr, dest, src, tag     !MPI variables
integer :: status(MPI_STATUS_SIZE)              !MPI variables

INTEGER, PARAMETER :: N = 10                    !number of columns
INTEGER, PARAMETER :: M = 5                     !number of rows
INTEGER,  ALLOCATABLE, DIMENSION(:,:) :: A      !MxN matrix A

INTEGER :: NEWTYPE, RESIZEDTYPE                 !MPI derived data types
INTEGER,  ALLOCATABLE, DIMENSION(:,:) :: LOCAL
INTEGER,  ALLOCATABLE :: SENDCOUNTS(:), DISPLS(:)
INTEGER :: RECVCOUNT, NRBUF
INTEGER :: MMIN, MEXTRA, INTSIZE, K, I, J
INTEGER :: START, EXTENT                        !(KIND=MPI_ADRESS_KIND)

CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)

IF ( rank == 0 ) THEN                           !allocate and create 2Darray
     ALLOCATE( A (M, N) )
     K = 1
     DO I = 1, M
        DO J = 1, N
           A(I, J) = K
           K = K + 1
        END DO
     END DO
END IF

ALLOCATE( SENDCOUNTS(0:size-1), DISPLS(0:size-1) )

MMIN = M/size               !number of rows divided by number of processors
MEXTRA = MOD(M, size)       !extra rows

K = 0
DO I = 0, size-1
   IF (I < MEXTRA) THEN             !SENDCOUNTS=(/2,1,1,1/)
       SENDCOUNTS(I) = MMIN + 1
   ELSE
       SENDCOUNTS(I) = MMIN
   END IF
   DISPLS(I) = K                    !DISPLS=(/0,2,3,4/)
   K = K + SENDCOUNTS(I)
END DO

RECVCOUNT = SENDCOUNTS(rank)
ALLOCATE( LOCAL(RECVCOUNT,N) )

CALL MPI_TYPE_VECTOR(N, 1, M, MPI_INTEGER, NEWTYPE, ierr)
CALL MPI_TYPE_COMMIT(NEWTYPE, ierr)

START = 0
CALL MPI_TYPE_SIZE(MPI_INTEGER, INTSIZE, ierr)
EXTENT = 1*INTSIZE
CALL MPI_TYPE_CREATE_RESIZED(NEWTYPE, START, EXTENT, RESIZEDTYPE, ierr)
CALL MPI_TYPE_COMMIT(RESIZEDTYPE, ierr)

LOCAL(:, :) = 0

CALL MPI_SCATTERV(                        &
A, SENDCOUNTS, DISPLS, RESIZEDTYPE,       &        
LOCAL, RECVCOUNT*N, MPI_INTEGER,          &        
0, MPI_COMM_WORLD, ierr)

WRITE(*,*) rank, ':', LOCAL

CALL MPI_FINALIZE(ierr)

END PROGRAM SCATTERV_MATRIX

After successful compilation I get a "Program Exception - access violation" error. All my previous Fortran MPI programs worked fine. There must be some bug in the code, probably in MPI_SCATTERV. I was mainly following this answer. I will be grateful for any suggestion. Thank you.

Moonwalk

1 Answer


There's an error in your code:

INTEGER :: START, EXTENT                        !(KIND=MPI_ADRESS_KIND)

This line should be:

INTEGER(KIND=MPI_ADDRESS_KIND) :: START, EXTENT

In MPI, anything related to a memory address, or to similar concepts such as a memory displacement, file size, or file cursor, must not be a normal integer. Somehow you had this information in your comment, yet you also misspelled MPI_ADDRESS_KIND there.
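For illustration, a minimal sketch of the corrected piece, reusing the names from your program (only the kind of START and EXTENT changes):

INTEGER(KIND=MPI_ADDRESS_KIND) :: START, EXTENT    ! address-sized integers, as MPI requires

START = 0
CALL MPI_TYPE_SIZE(MPI_INTEGER, INTSIZE, ierr)     ! INTSIZE stays a normal INTEGER
EXTENT = INTSIZE                                   ! new extent: one MPI_INTEGER
CALL MPI_TYPE_CREATE_RESIZED(NEWTYPE, START, EXTENT, RESIZEDTYPE, ierr)
CALL MPI_TYPE_COMMIT(RESIZEDTYPE, ierr)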

Vladimir F correctly pointed out that you should `USE MPI` instead of `INCLUDE 'mpif.h'`. This gives the compiler the opportunity to check the argument data types. For example, gfortran then gives the following error message for your original code:

test.f90:59:71:

 CALL MPI_TYPE_CREATE_RESIZED(NEWTYPE, START, EXTENT, RESIZEDTYPE, ierr)
                                                                       1
Error: There is no specific subroutine for the generic ‘mpi_type_create_resized’ at (1)
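As a sketch, with the module the top of your program would start like this (everything after it stays the same), and the wrong kind is then caught at compile time instead of crashing at run time:

PROGRAM SCATTERV_MATRIX

USE MPI                 ! replaces: INCLUDE 'mpif.h'
IMPLICIT NONE           ! also worth adding, so undeclared names are flagged

! ... declarations and the rest of the program unchanged ...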
macelee
  • You are absolutely right! Thank you. After fixing these two bugs, the program is working, but not as expected. The problem is that process 0 obtains the 2D array – Moonwalk Jun 20 '17 at 11:57
  • The problem is that process 0 obtains the 2D array in the following form: row 1 is `1 3 5 7 9 11 13 15 17 19` instead of `1 2 3 4 5 6 7 8 9 10`, and row 2 is `2 4 6 8 10 12 14 16 18 20` instead of `11 12 13 14 15 16 17 18 19 20` – Moonwalk Jun 20 '17 at 12:02
  • That is the expected order in MPI: the scattered integers arrive contiguously and are stored in LOCAL in Fortran's column-major order (see the sketch below). Read how MPI divides arrays. – Vladimir F Героям слава Jun 20 '17 at 15:50
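For completeness, a minimal sketch of one way to get the rows laid out in LOCAL as expected: mirror the trick on the receive side with a second vector type whose stride is the local number of rows. The names LOCALROWTYPE and RESIZEDROWTYPE are made up for this sketch; START and EXTENT are the same address-kind values (0 and one integer) used for the send type, and everything else reuses the variables from the question.

INTEGER :: LOCALROWTYPE, RESIZEDROWTYPE      ! handles for the receive-side types

! one row of LOCAL = N integers spaced RECVCOUNT apart (column-major storage)
CALL MPI_TYPE_VECTOR(N, 1, RECVCOUNT, MPI_INTEGER, LOCALROWTYPE, ierr)
CALL MPI_TYPE_CREATE_RESIZED(LOCALROWTYPE, START, EXTENT, RESIZEDROWTYPE, ierr)
CALL MPI_TYPE_COMMIT(RESIZEDROWTYPE, ierr)

CALL MPI_SCATTERV(                           &
     A, SENDCOUNTS, DISPLS, RESIZEDTYPE,     &
     LOCAL, RECVCOUNT, RESIZEDROWTYPE,       &
     0, MPI_COMM_WORLD, ierr)

Alternatively, keeping the plain MPI_INTEGER receive but allocating LOCAL(N, RECVCOUNT) also works: each scattered row of A then sits in one column of LOCAL.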