Is there a way to force MPI to always block on send? This would be useful for finding deadlocks in a distributed algorithm that would otherwise (incorrectly) depend on whatever buffering MPI chooses to do on send.

For example, the following program (run with 2 processes) works without problems on my machine:

// C++
#include <iostream>
#include <thread>

// Boost
#include <boost/mpi.hpp>
namespace mpi = boost::mpi;

int main() {
    using namespace std::chrono_literals;

    mpi::environment env;
    mpi::communicator world;
    auto me = world.rank();
    auto other = 1 - me;

    char buffer[10] = {0};

    while (true) {
        world.send(other, 0, buffer);
        world.recv(other, 0, buffer);
        std::cout << "Node " << me << " received" << std::endl;

        std::this_thread::sleep_for(200ms);
    }
}

But if I change the size of the buffer to 10000, it blocks indefinitely.

Călin
    I guess that what you want is `MPI_Ssend()` but according to [boost::mpi's documentation](http://www.boost.org/doc/libs/1_60_0/doc/html/mpi/tutorial.html#mpi.c_mapping), it isn't supported. Maybe there is another way, but I don't know boost::mpi enough to help you. – Gilles Jan 10 '16 at 14:22
  • That's right. Post it as an answer, please. – Călin Jan 10 '16 at 14:23

3 Answers

For pure MPI codes, what you describe is exactly what MPI_Ssend() gives you. However, here, you are not using pure MPI, you are using boost::mpi. And unfortunately, according to boost::mpi's documentation, MPI_Ssend() isn't supported.

That said, maybe boost::mpi offers another way, but I doubt it.
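
That said, `boost::mpi::communicator` converts implicitly to the underlying `MPI_Comm`, so one possible workaround (a sketch, untested) is to drop to the C API for the send. This bypasses Boost's serialization, so it only works for plain buffers of a matching MPI datatype:

```cpp
// Sketch only: mixing raw MPI calls with boost::mpi, relying on
// boost::mpi::communicator's implicit conversion to MPI_Comm.
#include <boost/mpi.hpp>
#include <mpi.h>

namespace mpi = boost::mpi;

int main() {
    mpi::environment env;
    mpi::communicator world;
    int me = world.rank();
    int other = 1 - me;

    char buffer[10000] = {0};

    // MPI_Ssend completes only after the matching receive has started,
    // so with this symmetric send/recv pattern both ranks block in the
    // send and the deadlock shows up for ANY buffer size -- which is
    // exactly what you want when hunting for buffering-dependent bugs.
    MPI_Ssend(buffer, sizeof buffer, MPI_CHAR, other, 0, world);
    MPI_Recv(buffer, sizeof buffer, MPI_CHAR, other, 0, world,
             MPI_STATUS_IGNORE);
}
```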

Gilles

If you want blocking behavior, use MPI_Ssend. It will block until a matching receive has been posted, without buffering the request. The amount of buffering provided by MPI_Send is (intentionally) implementation specific. The behavior you get for a buffer of 10000 may differ when trying a different implementation.

I don't know if you can actually tweak the buffering configuration, and I wouldn't try, because it would not be portable. Instead, I'd use the MPI_Ssend variant in some debug configuration, and the default MPI_Send when best performance is needed.
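
That debug/release split could be sketched as a thin wrapper (hypothetical helper name, plain C MPI; adapt as needed for boost::mpi):

```cpp
// Sketch: a send wrapper that is synchronous in debug builds only.
#include <mpi.h>

// In debug builds every send behaves like MPI_Ssend, so any code path
// that relies on implementation buffering deadlocks immediately and is
// easy to spot. Release builds keep the usual MPI_Send performance.
static int checked_send(const void* buf, int count, MPI_Datatype type,
                        int dest, int tag, MPI_Comm comm) {
#ifndef NDEBUG
    return MPI_Ssend(buf, count, type, dest, tag, comm);
#else
    return MPI_Send(buf, count, type, dest, tag, comm);
#endif
}
```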

(disclaimer: I'm not familiar with boost's implementation, but MPI is a standard. Also, I saw Gilles' comment after posting this answer...)

Eran

You can consider tuning the eager limit value (http://blogs.cisco.com/performance/what-is-an-mpi-eager-limit) to force the send operation to block for any message size. The way to set the eager limit depends on the MPI implementation. On Intel MPI, for instance, you can use the I_MPI_EAGER_THRESHOLD environment variable (see https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Communication_Fabrics_Control.htm).
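
For Intel MPI that could look like the following (illustrative launch line; the program name is a placeholder, and other implementations use different knobs, e.g. MCA parameters in Open MPI):

```shell
# Setting the eager threshold to 0 bytes forces every message onto the
# rendezvous path, so sends block until the receiver is ready.
export I_MPI_EAGER_THRESHOLD=0
mpirun -n 2 ./my_mpi_program
```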

Harald
  • The "Intel" link does not work for me. Made an edit. – John_West Jan 10 '16 at 14:32
  • Sorry, the correct link is https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Communication_Fabrics_Control.htm – Harald Jan 10 '16 at 14:49