5

I am a total novice in multi-core programming, but I do know how to program in C++.

Now I am looking around for a multi-core programming library. I just want to give it a try, just for fun, and right now I have found 3 APIs, but I am not sure which one I should stick with. Right now, I see Boost's MPI, OpenMP and TBB.

For anyone who has experience with any of these 3 APIs (or any other), could you please tell me the differences between them? Are there any factors to consider, like AMD or Intel architecture?

Karl
  • Read the boost::thread documentation and try some examples. This is probably the most useful way to learn. Look at the Dr. Dobb's website: http://www.drdobbs.com/cpp/184401518 – Anycorn May 24 '10 at 02:44

5 Answers

9

As a starting point I'd suggest OpenMP. With this you can very simply do three basic types of parallelism: loops, sections, and tasks.

Parallel loops

These allow you to split loop iterations over multiple threads. For instance:

#pragma omp parallel for
for (int i=0; i<N; i++) {...}

If you were using two threads, the first thread would perform the first half of the iterations and the second thread the second half.

Sections

These allow you to statically partition the work over multiple threads. This is useful when there is obvious work that can be performed in parallel. However, it's not a very flexible approach.

#pragma omp parallel sections
{
  #pragma omp section
  {...}
  #pragma omp section
  {...}
}

Tasks

Tasks are the most flexible approach. They are created dynamically and executed asynchronously, either by the thread that created them or by another thread.

#pragma omp task
{...}

Advantages

OpenMP has several things going for it.

  • Directive-based: the compiler does the work of creating and synchronizing the threads.

  • Incremental parallelism: you can focus on just the region of code that you need to parallelise.

  • One source base for serial and parallel code: the OpenMP directives are only recognized by the compiler when you compile with a flag (-fopenmp for gcc). So you can use the same source base to generate both serial and parallel code. This means you can turn off the flag and check whether the serial version of the code gives the same result. That way you can isolate parallelism errors from errors in the algorithm.

You can find the entire OpenMP spec at http://www.openmp.org/

8

Under the hood, OpenMP is multi-threaded programming, but at a higher level of abstraction than TBB and its ilk. The choice between the two, for parallel programming on a multi-core computer, is approximately the same as the choice between any higher- and lower-level software within the same domain: there is a trade-off between expressiveness and control.

Intel vs AMD is irrelevant I think.

And your choice ought to depend on what you are trying to achieve. For example, if you want to learn TBB, then TBB is definitely the way to go. But if you want to parallelise an existing C++ program in easy steps, then OpenMP is probably a better first choice; TBB will still be around later for you to tackle. I'd steer clear of MPI at first unless I was certain that I would be moving from shared-memory programming (which is mostly what you do on a multi-core machine) to distributed-memory programming (on clusters or networks). As ever, the technology you choose ought to depend on your requirements.

High Performance Mark
2

I'd suggest you play with MapReduce for some time. You can install several virtual machine instances on the same physical machine, each running a Hadoop instance (Hadoop is Yahoo!'s open-source implementation of MapReduce). There are a lot of tutorials online for setting up Hadoop.

By the way, MPI and OpenMP are not the same thing. OpenMP is for shared-memory programming, which generally means multi-core programming, not parallel programming across several machines.

Yin Zhu
  • ...wait, let me get this right. When we say "multi-core programming", we refer to utilizing a computer that has more than 1 core, right? Oops, sorry, my bad, I made a definition mistake in my question. – Karl May 23 '10 at 11:03
  • Sorry, I am referring to utilizing the multi-core. – Karl May 23 '10 at 11:05
  • You don't know C++? Then what do you know? – Yin Zhu May 23 '10 at 11:20
  • 1
    @unknownthreat: There are two types of parallel programming: "multi-core" (usually when people talk about parallel programming this is what they mean) and distributed. The latter means you run the program on multiple *machines*, not just multiple CPU cores on a single machine. @Yin Zhu: The OP was referring to multi-core programming, as the new title suggests, not distributed. – Sasha Chedygov May 24 '10 at 06:52
2

It depends on your focus. If you are mainly interested in multi-threaded programming, go with TBB. If you are more interested in process-level concurrency, then MPI is the way to go.

stonemetal
1

Another interesting library is OpenCL. It basically allows you to use all your hardware (CPU, GPU, DSP, ...) in the best way.

It has some interesting features, such as the ability to launch hundreds of threads (work-items, in OpenCL terms) with very little overhead.

Pietro