I need to implement concurrent matrix multiplication in C using multiple processes. I understand that because each process has its own private address space, I will have to use some form of interprocess communication (IPC). I did some looking around and couldn't find many implementations that don't use threads. I was wondering about the best way to go about this: shared memory, message passing, or pipes? I am not asking for a solution, but rather which of these methods would be more efficient for matrix multiplication, or whether there is a common, standard way to do this with multiple processes.

asked by Cory Gross, edited by Jonathan Leffler
- Shared Memory has the advantage of avoiding additional copy operations, whereas pipes will need copies. – Nobody moving away from SE Sep 24 '12 at 17:20
- You could use MPI: http://en.wikipedia.org/wiki/Message_Passing_Interface – szx Sep 24 '12 at 17:34
- Why are you avoiding threads? – David Grayson Sep 25 '12 at 02:32
- I am not really avoiding them. I am writing two programs that do the same thing. One that forks child processes, the other uses pthreads, in order to see the difference in efficiency. – Cory Gross Sep 26 '12 at 01:06
2 Answers
Shared memory would be a good solution for this problem, I think. Each process can compute its part of the result and write it into a single shared memory segment, so the partial results come together in one place.
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget(key_t key, size_t size, int shmflg);
This is one of the C functions for shared memory (you'll need shmat(), and maybe shmdt() and shmctl(), too).
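For reference, a minimal sketch of that create/attach/detach/remove lifecycle (the matrix size N and the 0600 permission bits are placeholders I'm assuming, not taken from the question):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define N 4   /* assumed matrix dimension, for illustration only */

int main(void)
{
    /* Create a segment big enough for an N x N matrix of doubles. */
    int shmid = shmget(IPC_PRIVATE, N * N * sizeof(double), IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* Attach it into this process's address space. */
    double *c = shmat(shmid, NULL, 0);
    if (c == (void *) -1) { perror("shmat"); return 1; }

    c[0] = 42.0;                        /* ... use the memory ... */
    printf("%f\n", c[0]);

    shmdt(c);                           /* detach from this process */
    shmctl(shmid, IPC_RMID, NULL);      /* mark the segment for removal */
    return 0;
}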
You also have to take care of synchronization, so that the processes do not interfere with each other's computation.
I would use semaphores for that: see "Semaphores in C" and the Wikipedia article on semaphores.
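Putting those pieces together, here is a hedged sketch of one possible layout: a single segment holding A, B and the result C, split across NPROC forked workers (N, NPROC and the toy input data are illustrative assumptions, not from the original post). Because each child writes a disjoint range of rows of C, this particular partitioning needs no semaphore; the parent only has to wait() for the children. If the workers shared output cells, you would add the semaphore protection described above.

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

#define N     4   /* matrix dimension (assumed) */
#define NPROC 2   /* number of worker processes (assumed) */

int main(void)
{
    /* One shared segment holds all three matrices: A, B and the result C. */
    int shmid = shmget(IPC_PRIVATE, 3 * N * N * sizeof(double),
                       IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    double *mem = shmat(shmid, NULL, 0);
    if (mem == (void *) -1) { perror("shmat"); return 1; }
    double *a = mem, *b = mem + N * N, *c = mem + 2 * N * N;

    for (int i = 0; i < N * N; i++) {   /* toy input: B is the identity */
        a[i] = i;
        b[i] = (i % N == i / N);
    }

    int rows_per_child = (N + NPROC - 1) / NPROC;
    for (int p = 0; p < NPROC; p++) {
        if (fork() == 0) {              /* child: multiply its own rows */
            int lo = p * rows_per_child;
            int hi = lo + rows_per_child > N ? N : lo + rows_per_child;
            for (int i = lo; i < hi; i++)
                for (int j = 0; j < N; j++) {
                    double sum = 0.0;
                    for (int k = 0; k < N; k++)
                        sum += a[i * N + k] * b[k * N + j];
                    c[i * N + j] = sum; /* disjoint cells, so no race */
                }
            _exit(0);
        }
    }

    while (wait(NULL) > 0)              /* reap all children */
        ;

    printf("c[0][0] = %f\n", c[0]);     /* expect 0.0: C equals A here */
    shmdt(mem);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}

The attachment made before fork() is inherited by the children, so the parent and the workers all see the same pages without any copying.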

answered by Jan Koester, edited by Jonathan Leffler
- I would say that plain simple matrix multiplication doesn't suffer from race conditions – Evert Sep 24 '12 at 17:25
- From what I can tell, it seems like I can do what I need to just by using fork() and join() without semaphores? – Cory Gross Sep 24 '12 at 22:31
The most efficient way to do matrix multiplication concurrently across processes would be shared memory. That way you don't have to serialize the matrices through a pipe or a message queue, and each process can apply its part of the multiplication directly to the shared memory region.
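As a hedged illustration of that point (not code from the original answer), an anonymous MAP_SHARED mapping created before fork() gives the same no-copy, no-serialization property as a System V segment; the size N is an assumed placeholder:

#define _DEFAULT_SOURCE           /* for MAP_ANONYMOUS on glibc */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define N 4   /* assumed matrix dimension */

int main(void)
{
    /* The result matrix lives in memory shared by parent and child. */
    double *c = mmap(NULL, N * N * sizeof(double),
                     PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (c == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {        /* child writes straight into the mapping */
        c[0] = 3.14;
        _exit(0);
    }
    wait(NULL);               /* parent sees the write without any copying */
    printf("c[0] = %f\n", c[0]);

    munmap(c, N * N * sizeof(double));
    return 0;
}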

answered by Evert