
I have a 10x10 matrix c (M = 10) that I divide by rows among 5 different processes (slaves = 5), so that each process gets 2 rows of the matrix:

offset = 0;
rows = M / slaves;
MPI_Send(&c[offset][0], rows*M, MPI_DOUBLE, id_slave, 0, MPI_COMM_WORLD);
offset = offset + rows;

Now I want to divide the matrix by columns instead. I tried changing the array indices as follows, but it does not work:

MPI_Send(&c[0][offset], rows*M, MPI_DOUBLE, id_slave, 0, MPI_COMM_WORLD);

Do you know how to do it? Thank you.

Nico Rossello
    [This answer](http://stackoverflow.com/a/10788351/463827) concerns using Gather rather than Send, but the idea is the same - you need to create an mpi type which describes the data layout you need here - in particular, using a vector or a subarray would work. – Jonathan Dursi Jan 05 '17 at 19:56

2 Answers


You are using the wrong datatype. As noted by Jonathan Dursi, you need to create a strided datatype that tells MPI how to access the memory in such a way that it matches the data layout of a column or a set of consecutive columns.

In your case, instead of

MPI_Send(&c[0][offset], rows*M, MPI_DOUBLE, id_slave, 0, MPI_COMM_WORLD);

you have to do:

MPI_Datatype dt_columns;
MPI_Type_vector(M, rows, M, MPI_DOUBLE, &dt_columns);
MPI_Type_commit(&dt_columns);
MPI_Send(&c[0][offset], 1, dt_columns, id_slave, 0, MPI_COMM_WORLD);

MPI_Type_vector(M, rows, M, MPI_DOUBLE, &dt_columns) creates a new MPI datatype that consists of M blocks of rows elements of MPI_DOUBLE each, with the heads of consecutive blocks M elements apart (a stride of M). Something like this:

|<------------ stride = M ------------->|
|<---- rows --->|                       |
+---+---+---+---+---+---+---+---+---+---+--
| x | x | x | x |   |   |   |   |   |   | ^
+---+---+---+---+---+---+---+---+---+---+ |
| x | x | x | x |   |   |   |   |   |   | |
+---+---+---+---+---+---+---+---+---+---+  
.   .   .   .   .   .   .   .   .   .   . M blocks
+---+---+---+---+---+---+---+---+---+---+  
| x | x | x | x |   |   |   |   |   |   | |
+---+---+---+---+---+---+---+---+---+---+ |
| x | x | x | x |   |   |   |   |   |   | v
+---+---+---+---+---+---+---+---+---+---+--

>> ------ C stores such arrays row-wise ------ >>

If you set rows equal to 1, you create a type that corresponds to a single column. This type cannot be used to send multiple columns though, e.g., two columns, as MPI will look for the second column where the first one ends, which is at the bottom of the matrix. You have to tell MPI to pretend that a column is just one element wide, i.e., to resize the datatype. This can be done using MPI_Type_create_resized:

MPI_Datatype dt_temp, dt_column;
MPI_Type_vector(M, 1, M, MPI_DOUBLE, &dt_temp);
MPI_Type_create_resized(dt_temp, 0, sizeof(double), &dt_column);
MPI_Type_commit(&dt_column);

You can use this type to send as many columns as you like:

// Send one column
MPI_Send(&c[0][offset], 1, dt_column, id_slave, 0, MPI_COMM_WORLD);
// Send five columns
MPI_Send(&c[0][offset], 5, dt_column, id_slave, 0, MPI_COMM_WORLD);

You can also use dt_column in MPI_Scatter[v] and/or MPI_Gather[v] to scatter and/or gather entire columns.
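
For illustration, here is a minimal, self-contained sketch of scattering column blocks with MPI_Scatter and the resized column type; the example data and the assumption that M divides evenly among the processes are only for the sketch:

#include <mpi.h>

#define M 10

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int cols_per_rank = M / nprocs;        // assumes M % nprocs == 0

    double c[M][M];
    if (rank == 0)
        for (int i = 0; i < M; i++)
            for (int j = 0; j < M; j++)
                c[i][j] = i * M + j;       // example data on the root only

    // Column type: M doubles with stride M, resized so that consecutive
    // instances start one double (i.e. one column) apart.
    MPI_Datatype dt_temp, dt_column;
    MPI_Type_vector(M, 1, M, MPI_DOUBLE, &dt_temp);
    MPI_Type_create_resized(dt_temp, 0, sizeof(double), &dt_column);
    MPI_Type_commit(&dt_column);

    // Each rank receives its columns as a contiguous block: row k of the
    // local buffer holds original column rank*cols_per_rank + k.
    double local[cols_per_rank][M];
    MPI_Scatter(&c[0][0], cols_per_rank, dt_column,
                &local[0][0], cols_per_rank * M, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    MPI_Type_free(&dt_column);
    MPI_Type_free(&dt_temp);
    MPI_Finalize();
    return 0;
}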

Hristo Iliev
  • Thank you. Could you tell me the structure of MPI_Recv to manipulate that data? – Nico Rossello Jan 09 '17 at 16:43
  • That would depend on what kind of data structure you are receiving in. If you are filling just some columns of a larger matrix, then you need a strided datatype similar to `dt_column`. If you are filling the columns of a matrix that has the same width as the block you are sending, then a simple linear `MPI_Recv(&c[0][0], width*M, MPI_DOUBLE, ...)` would do. – Hristo Iliev Jan 09 '17 at 18:03
  • @HristoIliev: Hi and thanks for the answer (upvoted). I wanted to ask you: if I had an nxn array stored in column-major format and wanted to split its rows into equal parts, how should I proceed? I mean, I am creating the data types in order to have the stride, but then, using scatterv and gatherv, I am not receiving the right results. – George Feb 04 '19 at 11:45
  • @George, resizing the vector type as shown in my answer is usually all you have to do to have it working with scatterv/gatherv. See the comment from Jonathan Dursi under the question itself. It contains links to his answers with lots of code samples. – Hristo Iliev Feb 04 '19 at 13:21
  • @HristoIliev: Thanks for the answer. I have already seen some implementations; I can't understand why it doesn't work. If you want, you can check a small running code: https://pastebin.com/7Tb9xxse (if not, no problem). For example, the A[3] element is 1 and B[3] should be 3. – George Feb 05 '19 at 13:24
  • @Georgy, I can't debug your code now, but a quick scan through it shows that you are using `&A` and `&B` in the MPI calls. That's wrong since both `A` and `B` are pointers and you are actually passing their memory addresses and not the addresses they are pointing to. – Hristo Iliev Feb 05 '19 at 15:50

The problem with your code is the following:

Your c array is contiguous in memory, and in C it is stored in row-major order, so dividing it by rows as you do just adds a constant offset from the beginning.

The way you are trying to divide it by columns, however, just gives you the wrong offset. You can picture it with a 3x3 matrix and 3 slave processes:

a[3][3] = {{a00, a01, a02},
           {a10, a11, a12},
           {a20, a21, a22}}

which in memory actually looks like this:

A = {a00,a01,a02,a10,a11,a12,a20,a21,a22}

For example, say we want to send data to the CPU with id = 1. In this case a[1][0] points to the fourth element of A, while a[0][1] points to the second element of A. In both cases you then send rows*M elements starting from that point in A.

In the first case that will be:

a10,a11,a12

And in the second case:

a01,a02,a10
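
A tiny stand-alone sketch of that layout (the numeric values here are just placeholders) shows the flat indices directly:

#include <stdio.h>

int main(void)
{
    double a[3][3] = {{0, 1, 2},
                      {3, 4, 5},
                      {6, 7, 8}};
    double *A = &a[0][0];                 /* flattened, row-major view of a */

    /* a[i][j] lives at flat index i*3 + j */
    printf("a[1][0] is A[%d] = %g\n", 1 * 3 + 0, A[1 * 3 + 0]);  /* A[3] = 3 */
    printf("a[0][1] is A[%d] = %g\n", 0 * 3 + 1, A[0 * 3 + 1]);  /* A[1] = 1 */
    return 0;
}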

One way to achieve what you want is to transpose your matrix and then send it.

It is also much more natural to use MPI_Scatter than MPI_Send for this problem, something like what is explained here: scatter
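
A rough sketch of that approach (the example data and buffer names are assumptions): transpose c on the root and scatter contiguous row blocks of the transpose, which are column blocks of the original matrix:

#include <mpi.h>

#define M 10

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int cols_per_rank = M / nprocs;       /* assumes M % nprocs == 0 */

    double c[M][M], ct[M][M];             /* ct will hold the transpose of c */
    if (rank == 0) {
        for (int i = 0; i < M; i++)
            for (int j = 0; j < M; j++)
                c[i][j] = i * M + j;      /* example data */
        for (int i = 0; i < M; i++)
            for (int j = 0; j < M; j++)
                ct[j][i] = c[i][j];       /* row j of ct is column j of c */
    }

    /* each rank gets cols_per_rank rows of ct, i.e. cols_per_rank columns of c */
    double local[cols_per_rank][M];
    MPI_Scatter(&ct[0][0], cols_per_rank * M, MPI_DOUBLE,
                &local[0][0], cols_per_rank * M, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}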

iskakoff