
The present question is related to another one I posted some months ago. As suggested by @Vladimir F, I'm posting a new question now (trying to simplify it as much as I can).

There are np == 4 processes, and one of them (rank 0) is the master, which has to build the following 2D array

-1  0  10 -1  0
 0 -1  10  0 -1
-1  1  10 -1  1
 1 -1  10  1 -1
-1  2  10 -1  2
 2 -1  10  2 -1
-1  3  10 -1  3
 3 -1  10  3 -1

The master can do

allocate(array(2*np,-2:2))   ! 2*np rows, 5 columns indexed from -2 to 2
array = -1                   ! fill everything with -1
array(:,0) = 10              ! the middle column is all 10s

and then receive the ranks of the other processes in the proper positions.

I can define the following types

call mpi_type_vector(2, 1, 2*np-1, mpi_integer, mytype1, err) ! 2 integers, 2*np-1 integers apart
call mpi_type_vector(2, 1, 3,      mytype1,     mytype2, err) ! 2 such pairs, 3 extents of mytype1 (i.e. 3*2*np integers) apart

and commit the second one

call mpi_type_commit(mytype2, err)

which can be used by the root process to receive the data in the following positions,

 .  P  .  .  P
 P  .  .  P  .
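As a sanity check (a sketch; the expected values in the comments assume np == 4 and 4-byte integers), one can query how many bytes the type actually selects and how far it stretches:

integer :: tsize
integer(kind=mpi_address_kind) :: tlb, textent

call mpi_type_size(mytype2, tsize, err)                   ! 4 integers selected -> tsize == 16 bytes
call mpi_type_get_true_extent(mytype2, tlb, textent, err) ! pattern spans 32 integers -> textent == 128 bytes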

In fact, I already did this communication using point-to-point communications, mpi_send and mpi_recv (every process, 0 included, sends to 0, which in turn receives from every process, itself included), but now I want to do it with collective communications and derived types (maybe those I defined).

I think mpi_gather is not enough, since it makes the root process receive the data in contiguous locations, which is not the case here.

I'm actually trying with mpi_gatherv, but there the displacements have to be specified as integers counted in units of the extent of the receive type, i.e. how many mytype2 extents past the receive address each process's data should land; and the extent of mytype2 is far larger than the 2-integer shift between the patterns of consecutive processes, so the data cannot land where I want it.

EDIT

This answer analyzes the topic (sending/receiving portions of 2D arrays with collective MPI subroutines) so deeply and precisely that I found the answer to my question in it, which essentially is that an MPI derived type can be resized by means of MPI_Type_create_resized.

On the other hand, that answer deals with sub-matrices, so the MPI_Type_create_subarray subroutine can be used there. That is not the case when the matrix has to be filled following a more "problematic" pattern, as I specified in the title. In fact, to accomplish the task I had to use MPI_Type_vector twice (as I wrote in my question) and then use MPI_Type_create_resized to obtain a third type.
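For the record, here is a minimal self-contained sketch of the collective version (the name mytype3, the helper variables, and the choice of array(2,-2) as receive address are mine; the arithmetic assumes the column-major layout discussed above):

program gather_pattern
  use mpi
  implicit none

  integer :: np, rank, err, i
  integer :: mytype1, mytype2, mytype3
  integer(kind=mpi_address_kind) :: lb, iext
  integer, allocatable :: array(:,:), recvcounts(:), displs(:)
  integer :: sendbuf(4)

  call mpi_init(err)
  call mpi_comm_size(mpi_comm_world, np, err)
  call mpi_comm_rank(mpi_comm_world, rank, err)

  ! the two vectors from the question
  call mpi_type_vector(2, 1, 2*np-1, mpi_integer, mytype1, err)
  call mpi_type_vector(2, 1, 3, mytype1, mytype2, err)

  ! shrink the extent to 2 integers: the patterns of consecutive ranks
  ! start 2 array elements (2 rows) apart, so displs can be 0, 1, 2, ...
  call mpi_type_get_extent(mpi_integer, lb, iext, err)
  call mpi_type_create_resized(mytype2, 0_mpi_address_kind, 2*iext, mytype3, err)
  call mpi_type_commit(mytype3, err)

  ! allocated on every rank only so that array(2,-2) is a legal
  ! argument; the contents matter on rank 0 alone
  allocate(array(2*np,-2:2), recvcounts(np), displs(np))
  array = -1
  array(:,0) = 10

  sendbuf = rank               ! each process contributes its rank 4 times
  recvcounts = 1               ! one mytype3 (4 integers) per process
  displs = [(i, i = 0, np-1)]  ! in units of the resized extent (2 integers)

  ! the receive buffer starts at array(2,-2), the first element of
  ! rank 0's pattern; rank p's data lands 2*p integers further on
  call mpi_gatherv(sendbuf, 4, mpi_integer, array(2,-2), recvcounts, displs, &
                   mytype3, 0, mpi_comm_world, err)

  if (rank == 0) then
    do i = 1, 2*np
      print '(5i3)', array(i,:)
    end do
  end if

  call mpi_type_free(mytype3, err)
  call mpi_finalize(err)
end program gather_pattern

With np == 4 this should reproduce the matrix at the top of the question; other values of np just lengthen the pattern.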

  • Can you be more precise about your question? I see a few thoughts where you could be seeking confirmation of something, but I don't want to guess. – francescalus Jul 28 '16 at 16:37
  • Well, the first process should receive 4 scalars from each process (itself included). These 4 scalars must be arranged in the 2D array in the way you can see in the question. Which collective MPI communication subroutine should I use, and how? I think the MPI derived type I defined is the best way to perform this communication with one call, but if that's not the case, please suggest a more efficient way to do it. – Enlico Jul 28 '16 at 17:50
  • 1
    I found a related question [here](http://stackoverflow.com/q/17508647/5825294). – Enlico Jul 29 '16 at 07:32
  • 1
    If I understand correctly, you have a working solution based on resizing a datatype that is constructed from two vectors and you're asking if that's the most elegant method? For datatypes made up of the same basic type, the most general datatype constructors are MPI_Type_indexed and MPI_Type_create_indexed_block where you simply list the displacements of all the entries verbatim (you can use the "block" version as all your block sizes are equal to 1). This might be simpler than using two vectors, at least in the simple test case you presented here, but could get unwieldy for large problems. – David Henty Jul 29 '16 at 09:53
  • From the documentation of the subroutines you mentioned, I guess that your answer could be of interest, but the question is still closed. – Enlico Jul 29 '16 at 10:05
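For completeness, a sketch of the indexed-block alternative David Henty suggests in the comments above (the displacement list is mine, derived from the same column-major layout; it replaces the two mpi_type_vector calls in the sketch under EDIT, and the resize is still needed):

integer :: blockdispls(4)

! the four offsets of one rank's entries, in integers, relative to the
! first of them (column-major, 2*np integers per column)
blockdispls = [0, 2*np-1, 6*np, 8*np-1]
call mpi_type_create_indexed_block(4, 1, blockdispls, mpi_integer, mytype2, err)

! resize and commit exactly as before
call mpi_type_get_extent(mpi_integer, lb, iext, err)
call mpi_type_create_resized(mytype2, 0_mpi_address_kind, 2*iext, mytype3, err)
call mpi_type_commit(mytype3, err)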

0 Answers