
I'm trying to recombine sub-arrays with MPI_Gatherv, without the dark-grey rows. A picture is worth a thousand words:

Graphical overview of the ghost/halo dark-grey cells: http://img535.imageshack.us/img535/9118/ghostcells.jpg

How would you send only parts of *sendbuf (the first parameter of MPI_Gatherv) to the root process, without wastefully rewriting the data into another structure that has no dark-grey rows? The displacements array (displs) is only relevant to *recvbuf on the root process.

Thank you.

Update (or, to be more precise)

I wanted to also send the "boundary" (light-grey) cells, not just the "interior" (white) cells. As osgx correctly pointed out, in this case plain MPI_Gatherv suffices; some conditional array indexing will do it.

Blaz
  • There is also a recvcounts array if you want to send different amounts of data from each process. Good examples are here (in the middle and below): http://www.mpi-forum.org/docs/mpi-11-html/node70.html – osgx Jul 06 '11 at 12:48
  • Do you need to send only "Interior cells" or "Interior cells and Boundary cells"? – osgx Jul 06 '11 at 12:49
  • @osgx "Boundary cells" included. That is, every white _and_ light-grey cell (or, inversely, everything _not_ dark-grey). – Blaz Jul 06 '11 at 13:41
  • 1
    I tryed to code this in my answer using a datatype of array row (array line). Using of datatype is not necessary here, you can do the same using plain Gatherv with `recvcounts` and `displacements`. Non-root processes should just point to beginning of data to send; not to the start of ARRAY, – osgx Jul 06 '11 at 13:45
  • 1
    @osgx That is indeed correct. :) I've thought that **all** processes should call the _same_ `MPI_Gatherv` ... anyway, got it fixed now with an `if` and `else if` statement; MPI takes care of the rest (it seems). Thank you osgx. – Blaz Jul 06 '11 at 14:46
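One way that "conditional array indexing" might look, as a minimal sketch that is not from the original thread: it assumes equally sized, row-major stripes with a ghost row at the top and/or bottom, and the function and variable names (gather_without_ghosts, local, recvcounts, displs) are illustrative only. Note that recvcounts and displs here are plain element counts/offsets, significant only at the root.

#include <mpi.h>

/* Each rank points MPI_Gatherv at the start of its non-ghost rows.
   `local` is the local stripe of local_rows x cols doubles; the first
   and last stripes carry only one ghost row, all others carry two. */
void gather_without_ghosts(double *local, int local_rows, int cols,
                           double *recvbuf, int *recvcounts, int *displs,
                           MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    double *send_start;
    int send_rows;
    if (rank == 0) {                     /* no ghost row above the first stripe */
        send_start = local;
        send_rows  = local_rows - 1;
    } else if (rank == nprocs - 1) {     /* no ghost row below the last stripe  */
        send_start = local + cols;
        send_rows  = local_rows - 1;
    } else {                             /* middle stripes: skip both ghosts    */
        send_start = local + cols;
        send_rows  = local_rows - 2;
    }

    MPI_Gatherv(send_start, send_rows * cols, MPI_DOUBLE,
                recvbuf, recvcounts, displs, MPI_DOUBLE, 0, comm);
}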

1 Answer


What about constructing a datatype which allows you to send only the white (interior) cells?

The combined (derived) datatype can be built with MPI_Type_indexed.

The only problem will be the very first and very last rows, which belong to processes P0 and PN, because P0 and PN have to send one row more than P1...PN-1.
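As a rough sketch of how such an indexed type could be built (none of this is from the original answer; make_owned_rows_type, skip_first and skip_last are made-up names), assuming the local stripe is stored row-major as rows x cols doubles. For strictly interior (white) cells, the block lengths and offsets could additionally be shrunk to drop the boundary columns.

#include <mpi.h>
#include <stdlib.h>

/* Build an MPI_Type_indexed that selects only the non-ghost rows of a
   row-major rows x cols stripe of doubles. skip_first/skip_last are the
   number of ghost rows at the top/bottom (0 or 1 here). */
MPI_Datatype make_owned_rows_type(int rows, int cols, int skip_first, int skip_last)
{
    int nblocks = rows - skip_first - skip_last;
    int *blocklens = malloc(nblocks * sizeof(int));
    int *offsets   = malloc(nblocks * sizeof(int));

    for (int b = 0; b < nblocks; ++b) {
        blocklens[b] = cols;                    /* one full row per block     */
        offsets[b]   = (b + skip_first) * cols; /* element offset of that row */
    }

    MPI_Datatype t;
    MPI_Type_indexed(nblocks, blocklens, offsets, MPI_DOUBLE, &t);
    MPI_Type_commit(&t);
    free(blocklens);
    free(offsets);
    return t;
}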

For interior + boundary cells you can construct a datatype for a single "line" (row) with:

MPI_Datatype LineType;
MPI_Type_vector(1, row_number, 0, MPI_DOUBLE, &LineType);  // one "line" of row_number doubles
MPI_Type_commit(&LineType);

Then (for ARRAY sized [I][J], split into stripecount stripes):

for (i = 0; i < processes_number; ++i) {
    displs[i]  = i * (I / stripecount) + 1;  // point to the second line of each stripe
    rcounts[i] = (I / stripecount) - 2;
}
rcounts[0]++;                       // the first and last processes send one line more
rcounts[processes_number - 1]++;
displs[0] -= 1;                     // the first process also sends the first line of its stripe
// the last process's displacement is fine as is: the extra line it sends is the last one of its stripe

source_ptr = ARRAY[displs[rank]];   // each rank starts sending from its own displacement
lines_to_send = rcounts[rank];

MPI_Gatherv(source_ptr, lines_to_send, LineType,
            recv_buf, rcounts, displs, LineType, root, comm);
osgx
  • The code is written on the assumption that `rank` is the zero-based number of the process and that the array has equal size on each process. – osgx Jul 06 '11 at 13:06
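A self-contained sketch of the same approach, under those same assumptions plus a few illustrative ones of my own: ROWS is divisible by the number of processes, every stripe has at least two rows, and MPI_Type_contiguous is used for the row type (equivalent to the MPI_Type_vector call above with count 1). The sizes and names are placeholders, not part of the original answer.

#include <mpi.h>
#include <stdlib.h>

enum { ROWS = 16, COLS = 8 };   /* illustrative sizes; ROWS / nprocs must be >= 2 */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Every process holds a full-size copy of the array, but only its own
       stripe is meaningful ("equal sizes at each process"). */
    double (*array)[COLS]  = malloc(sizeof(double[ROWS][COLS]));
    double (*result)[COLS] = (rank == 0) ? malloc(sizeof(double[ROWS][COLS])) : NULL;
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            array[r][c] = rank;          /* dummy data so the gather moves something */

    /* One row of COLS doubles as a single datatype element. */
    MPI_Datatype LineType;
    MPI_Type_contiguous(COLS, MPI_DOUBLE, &LineType);
    MPI_Type_commit(&LineType);

    int stripe = ROWS / nprocs;
    int *rcounts = malloc(nprocs * sizeof(int));
    int *displs  = malloc(nprocs * sizeof(int));
    for (int i = 0; i < nprocs; ++i) {
        displs[i]  = i * stripe + 1;     /* second row of each stripe          */
        rcounts[i] = stripe - 2;         /* drop the top and bottom ghost rows */
    }
    rcounts[0]++;                        /* first stripe has no top ghost row  */
    displs[0]--;
    rcounts[nprocs - 1]++;               /* last stripe has no bottom ghost row */

    /* Each rank sends rcounts[rank] rows starting at its own displacement;
       the root places them at the matching row positions of `result`. */
    MPI_Gatherv(array[displs[rank]], rcounts[rank], LineType,
                result, rcounts, displs, LineType, 0, MPI_COMM_WORLD);

    MPI_Type_free(&LineType);
    free(array); free(rcounts); free(displs);
    if (rank == 0) free(result);
    MPI_Finalize();
    return 0;
}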