Shared memory and distributed memory are two distinct paradigms in parallel computing, and they usually call for different ways of thinking about a problem. Some parallel programming frameworks, like UPC or MPI, can be made to run on either kind of machine, but that usually isn't the best fit: UPC presents a shared (global) address space and is most natural on shared-memory machines, while MPI is built around explicit message passing between processes, which is what you want on distributed-memory machines. I'm not sure about OpenMP.
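To make the difference concrete, here is a rough sketch of the shared-memory way of thinking, written in C with OpenMP purely as an illustration (the array size and names are just placeholders): every thread sees the same array, and the runtime splits the loop iterations among cores on a single machine.

```c
#include <stdio.h>

#define N 1000000  /* placeholder problem size */

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* All threads see the same array `a`; nothing has to be sent anywhere. */
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* The runtime divides the iterations among threads on one machine
       and combines the per-thread partial sums for us. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}
```

Compile with something like `gcc -fopenmp sum_omp.c` and it runs as one process with several threads sharing the same memory.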
In either case, my advice is to first think about how you could get parallelism in your code on a distributed architecture, and then go with MPI. If you happen to be in the computational science business, there are already very well written packages, such as PETSc from Argonne National Lab and Trilinos from Sandia National Lab, that may help you develop much faster.
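For contrast, here is a rough MPI sketch of the same sum in the distributed-memory style (again just an illustration, with placeholder sizes): each process owns only its own slice of the data, and the partial results have to be combined through explicit communication.

```c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 1000000  /* placeholder global size, assumed divisible by the number of ranks */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank owns only its slice; there is no shared array at all. */
    int local_n = N / size;
    double *local = malloc(local_n * sizeof(double));
    for (int i = 0; i < local_n; i++)
        local[i] = 1.0;

    double local_sum = 0.0;
    for (int i = 0; i < local_n; i++)
        local_sum += local[i];

    /* The partial sums live on different processes (possibly different
       machines), so they must be combined with explicit communication. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f\n", global_sum);

    free(local);
    MPI_Finalize();
    return 0;
}
```

Build with `mpicc sum_mpi.c` and run with something like `mpirun -np 4 ./a.out` (assuming an MPI implementation such as MPICH or Open MPI is installed). The key point is that you had to decide up front how the data is split across processes, which is exactly the distributed-memory mindset, and it is also the level at which libraries like PETSc and Trilinos operate.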