The present question is related to another one I posted some months ago. As suggested by @Vladimir F, I'm now posting a new question (trying to simplify it as much as I can).
There are np == 4 processes and one of them (0) is the master, which has to build the following 2D array
-1  0 10 -1  0
 0 -1 10  0 -1
-1  1 10 -1  1
 1 -1 10  1 -1
-1  2 10 -1  2
 2 -1 10  2 -1
-1  3 10 -1  3
 3 -1 10  3 -1
The master can do
allocate(array(2*np,-2:2))
array = -1
array(:,0) = 10
and then receive the ranks of the other processes into the proper positions.
I can define the following types
call mpi_type_vector(2, 1, 2*np-1, mpi_integer, mytype1, err)
call mpi_type_vector(2, 1, 3, mytype1, mytype2, err)
and commit the second one
call mpi_type_commit(mytype2, err)
which can be used by the root process to receive the data at the following positions (P marks, within the two rows owned by one process and the columns -2 to 2, the cells where that process's rank must land),
. P . . P
P . . P .
That is, the rank of process i goes into columns -1 and 2 of row 2*i+1 and into columns -2 and 1 of row 2*i+2.
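As a quick sanity check (variable names here are mine, and I assume 4-byte default integers), the committed type should describe 4 integers spread over an extent of 32 integers, i.e. a size of 16 bytes and an extent of 128 bytes:

integer :: tsize
integer(kind=mpi_address_kind) :: tlb, textent

call mpi_type_size(mytype2, tsize, err)               ! 16 bytes: 4 integers
call mpi_type_get_extent(mytype2, tlb, textent, err)  ! lb = 0, extent = 128 bytes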
In fact I already did this communication using point-to-point communications, mpi_send and mpi_recv (every process, 0 included, sends to 0, which in turn receives from every process, itself included), but now I want to do it with collective communications and derived types (maybe those I defined above).
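For reference, a minimal sketch of that point-to-point version (assuming mytype2 has been committed as above; sbuf and req are names I introduce here, while rank, np, array, i and err come from the context of the question):

integer :: sbuf(4), req

sbuf = rank                    ! four copies of the rank of this process
call mpi_isend(sbuf, 4, mpi_integer, 0, 0, mpi_comm_world, req, err)
if (rank == 0) then
   do i = 0, np-1
      ! one mytype2 element starting at the first cell owned by process i
      call mpi_recv(array(2*i+2,-2), 1, mytype2, i, 0, mpi_comm_world, &
                    mpi_status_ignore, err)
   end do
end if
call mpi_wait(req, mpi_status_ignore, err)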
I think mpi_gather is not enough, since it makes the root process receive the data in contiguous locations, which is not the case here.
I'm actually trying with mpi_gatherv, but the displacements have to be specified as integers counting how many mytype2 extents away from the receive address the data should be placed.
EDIT
This answer analyzes the topic (sending/receiving portions of 2D arrays with collective MPI subroutines) so deeply and precisely that I found in it the answer to my question, which essentially is that an MPI derived type can be resized by means of MPI_Type_create_resized.
On the other hand, that answer deals with sub-matrices, so the MPI_Type_create_subarray subroutine can be used there. That is not possible here, where the matrix has to be built following a more "problematic" pattern, as I specified in the title. To accomplish the task I had to use MPI_Type_vector twice (as I wrote in my question) and then MPI_Type_create_resized to obtain a third type, as in the sketch below.
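For completeness, here is a minimal self-contained sketch of the collective version. The two mpi_type_vector calls are the ones from my question; the program structure, the resized extent and the remaining variable names are mine. Run it with 4 processes, e.g. mpirun -np 4 ./a.out.

program gather_pattern
   use mpi
   implicit none
   integer :: np, rank, err, i, intsize
   integer :: mytype1, mytype2, mytype3
   integer :: sbuf(4)
   integer, allocatable :: array(:,:), recvcounts(:), displs(:)
   integer(kind=mpi_address_kind) :: lb, extent

   call mpi_init(err)
   call mpi_comm_size(mpi_comm_world, np, err)
   call mpi_comm_rank(mpi_comm_world, rank, err)

   ! the two vector types from the question: a pair of integers
   ! 2*np-1 elements apart, then two such pairs 3 columns apart
   call mpi_type_vector(2, 1, 2*np-1, mpi_integer, mytype1, err)
   call mpi_type_vector(2, 1, 3, mytype1, mytype2, err)

   ! shrink the extent to 2 integers, so that displacement i (counted
   ! in units of the extent) lands on the two rows owned by process i
   call mpi_type_size(mpi_integer, intsize, err)
   lb = 0
   extent = 2*intsize
   call mpi_type_create_resized(mytype2, lb, extent, mytype3, err)
   call mpi_type_commit(mytype3, err)

   allocate(array(2*np, -2:2))      ! allocated everywhere only for brevity
   array = -1
   array(:,0) = 10

   allocate(recvcounts(np), displs(np))
   recvcounts = 1                   ! one mytype3 element per process
   displs = [(i, i = 0, np-1)]      ! i.e. 2*i integers past array(2,-2)

   ! the receive buffer starts at array(2,-2), not array(1,-2), because
   ! the first cell covered by the type for process i is array(2*i+2,-2)
   sbuf = rank                      ! four copies of the rank
   call mpi_gatherv(sbuf, 4, mpi_integer, array(2,-2), recvcounts, displs, &
                    mytype3, 0, mpi_comm_world, err)

   if (rank == 0) then
      do i = 1, 2*np
         print '(5(i3))', array(i,:)
      end do
   end if

   call mpi_finalize(err)
end program gather_pattern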