
In the lab we have a piece of software that is used for MRI analysis, which involves a lot of data crunching. Is there a way to redistribute the load generated by the program across multiple computers/GPUs without editing the program itself?

Daniel Kislyuk
  • It depends on the problem, but with volumetric data a straightforward solution might be to run the analysis for each slice on a separate computer. – smocking Oct 25 '12 at 00:32
  • That's definitely one way, but it's still quite a manual solution. The question is whether there's an automatic way to redistribute the workload, or whether the program has to use several processing threads explicitly. – Daniel Kislyuk Oct 25 '12 at 00:41
  • Do you have the ability to edit the source at all? If so, something like [Cilk](http://en.wikipedia.org/wiki/Cilk) could let you achieve that without refactoring all your code. – William W Oct 25 '12 at 00:45
  • Thanks for pointing to Cilk, will try it out. – Daniel Kislyuk Oct 25 '12 at 01:09
  • Still the question remains: what can be done when editing the source is altogether impossible? – Daniel Kislyuk Oct 25 '12 at 03:43
  • @DanielKislyuk, running it on different machines does not have to be manual -- you can write a short wrapper script to do the parallelization for you, e.g. in bash or Perl. I do it all the time for code that I can't be bothered to rewrite with multithreading: I just run each patient in one thread. – smocking Oct 25 '12 at 15:26
  • Peter Molfese wrote a great guide to combining AFNI and Xgrid here: http://afni.nimh.nih.gov/afni/community/board/read.php?1,119535,119535#msg-119535 @smocking – Daniel Kislyuk Oct 26 '12 at 17:00
  • Does the application use any libraries? If so, you could look for parallel implementations of these libs. – Tim Child Jul 03 '13 at 14:49

0 Answers