
I'm trying to write a simple multiprocess program to find a value in an array.

#include <mpi.h>
#include <stdio.h>

int* create_array(int num_items) {
    int* tmp = new int[num_items];
    for(int i = 0; i < num_items; i++)
        tmp[i] = i;

    return tmp;
}

int main() {

    int num_items = 1000;
    int item = 999;

    MPI_Init(NULL, NULL);
    int world_rank, world_size, num_items_per_proc;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    MPI_Request* inReq;

    int* array;
    if(world_rank == 0) {
        array = create_array(num_items);
        num_items_per_proc = (num_items / world_size) + 1;
    }

    int* sub_array = new int[num_items_per_proc];
    MPI_Scatter(array, num_items_per_proc, MPI_INT, sub_array,
                num_items_per_proc, MPI_INT, 0, MPI_COMM_WORLD);

    bool found = false;
    MPI_Irecv(&found, 1, MPI::BOOL, MPI_ANY_SOURCE, MPI_ANY_TAG,
              MPI_COMM_WORLD, inReq);

    for(int i = 0; i < num_items_per_proc && !found; i++) {
        if (sub_array[i] == item) {
            found = true;
            printf("Element %d found at position: %d\n", item, i);
            for(int j = 0; j < world_size; j++)
                if(j != world_rank)
                    MPI_Send(&found, 1, MPI::BOOL, j, j, MPI_COMM_WORLD);
        }
    }

    if(world_rank == 0) delete[] array;
    delete[] sub_array;

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();

    return 0;
}

I'm trying to stop all the processes when one of them finds the value in its portion of the array, but I get a segmentation fault from Irecv. How can I solve this?

  • `num_items_per_proc` looks incorrect. A task that finds the value does not stop the others, it simply notifies them, and you can achieve that with `MPI_Allreduce()` – Gilles Gouaillardet May 27 '18 at 12:21

1 Answer


The reason your code doesn't work is that you must supply an actual MPI_Request to MPI_Irecv - not just an uninitialized pointer!

MPI_Request inReq;
MPI_Irecv(&found, 1, MPI_CXX_BOOL, MPI_ANY_SOURCE, MPI_ANY_TAG,
          MPI_COMM_WORLD, &inReq);

The way you handle found is also wrong. You must not modify a variable that is the buffer of an outstanding asynchronous request, and you cannot assume it is updated in the background: non-blocking messages are not one-sided remote memory operations. Instead, call MPI_Test on the request, and if the flag indicates a completed receive you can abort the loop. Make sure that each request is eventually completed on every rank, including the one that found the result.

Further, num_items_per_proc must be valid on all ranks (for allocating the memory, and for specifying the recvcount in MPI_Scatter). In your code it is only set on rank 0, so every other rank uses it uninitialized.

The barrier before MPI_Finalize is redundant, and finally, the C++ bindings of MPI were deprecated and later removed from the standard, so use MPI_CXX_BOOL instead of MPI::BOOL.

You can find more sophisticated approaches to your problem in the answers of this question.

Zulan