
I have two for loops running in my Matlab code. The inner loop is parallelized using matlabpool on 12 processors (the maximum Matlab allows on a single machine).

I don't have a Distributed Computing license. Please help me do this using Octave or Scilab. I just want to parallelize the 'for' loop ONLY.

The links I found when I searched for this on Google are broken.

Wesley Bland
han17
  • When I used Octave a couple of years ago, parallel processing did not work. But I could easily start new Octave processes from Python multiprocessing code. At least on Linux, Octave has a much lower startup overhead than Matlab. – hpaulj Jul 28 '14 at 03:06

4 Answers


parfor is not really implemented in Octave yet. The keyword is accepted, but it is a mere synonym of for (http://octave.1599824.n4.nabble.com/Parfor-td4630575.html).

The pararrayfun and parcellfun functions of the parallel package are handy on multicore machines. They are often a good replacement for a parfor loop.

For examples, see http://wiki.octave.org/Parallel_package. To install, issue (just once)

pkg install -forge parallel

And then, once in each session,

pkg load parallel

before using the functions.
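As a minimal sketch of the replacement the answer describes (the function f below is a hypothetical stand-in for the work done in the asker's inner loop), a serial for loop can be rewritten with pararrayfun:

```octave
pkg load parallel          % assumes pkg install -forge parallel was run once

% Hypothetical stand-in for the body of the inner loop:
f = @(i) i^2;

% Serial version:
%   for i = 1:100, res(i) = f(i); end

% Parallel version: apply f to each element of 1:100,
% distributing the work over all available cores (nproc).
res = pararrayfun(nproc, f, 1:100);
```

Here nproc is Octave's built-in returning the number of available CPU cores; any positive integer up to that count can be passed instead.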

ederag

In Scilab you can use parallel_run:

function a=g(arg1)
  a=arg1*arg1
endfunction

res=parallel_run(1:10, g);

Limitations

  • Uses only one core on Windows platforms.
  • For now, parallel_run only handles arguments and results that are scalar matrices of real values, and the types argument is not used.
  • One should not rely on side effects such as modifying variables from an outer scope: only the data stored in the result variables is copied back into the calling environment.
  • Macros called by parallel_run are not allowed to use the JVM.
  • No stack resizing (via gstacksize() or stacksize()) should take place during a call to parallel_run.
spoorcc

In GNU Octave you can use the parfor construct:

parfor i=1:10
    # do stuff that may run in parallel
endparfor

For more info: help parfor

juliohm
  • Is there actually a parallel processing mechanism in Octave, or does it just recognize the `parfor` keyword for compatibility with MATLAB? In `3.4`, Octave would object to `parfor` on syntax grounds. – hpaulj Jul 28 '14 at 02:59
  • 4
    @hpaulj As of octave 3.8.1 the parfor keyword is just recognized, without any actual parallelization. More details in my answer – ederag Sep 19 '14 at 10:56
  1. To see a list of Free and Open Source alternatives to MATLAB/Simulink, please check its AlternativeTo page or my answer here. Specifically for Simulink alternatives, see this post.

  2. Something you should consider is the difference between vectorized, parallel, concurrent, asynchronous, and multithreaded computing. Without going much into the details, vectorized programming is a way to avoid ugly for-loops. For example, the map function and list comprehensions in Python are vectorized computation. It is the way you write the code, not necessarily how it is handled by the computer. Parallel computation, mostly used for GPU computing (data parallelism), is when you run massive amounts of arithmetic on big arrays using GPU computational units. There is also task parallelism, which mostly refers to running a task on multiple threads, each processed by a separate CPU core. Concurrent or asynchronous is when you have just one computational unit, but it does multiple jobs at the same time without blocking the processor unconditionally. Basically like a mom cooking, cleaning, and taking care of her kid at the same time, but doing only one job at a time :)

  3. Given the above description, there is a lot in the FOSS world for each of these. For Scilab specifically, check this page. There is an MPI interface for distributed computation (multithreading/parallelism across multiple computers), OpenCL interfaces for GPU/data-parallel computation, and an OpenMP interface for multithreading/task parallelism. The feval function is not parallelism but a way to vectorize a conventional function. Scilab matrix arithmetic and parallel_run are vectorized or parallel depending on the platform, hardware, and version of Scilab.
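To make the vectorized-versus-loop distinction above concrete, here is a small Octave sketch (the same idea carries over to Scilab and MATLAB): both forms compute identical results, but the vectorized form hands the whole array to optimized built-ins, which may themselves use SIMD or multiple threads.

```octave
x = 1:1e6;

% Loop form: one scalar operation per iteration, driven by the interpreter.
y1 = zeros(size(x));
for i = 1:numel(x)
  y1(i) = x(i)^2 + 1;
end

% Vectorized form: a single array expression; the underlying
% numeric library decides how to execute it.
y2 = x.^2 + 1;

assert(isequal(y1, y2));
```

Note that vectorization here is a coding style; whether it actually runs in parallel depends on the interpreter and the linked numeric libraries (e.g. BLAS/MKL), as discussed in the comments below.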

Foad S. Farimani
  • Foad, you are wrong: in Scilab versions which support it (namely 5.5.2) and fully implement it in a stable way (namely, Linux) `parallel_run` does **true** parallel computations (not just vectorization) based on child processes (not threads). – Stéphane Mottelet Apr 03 '19 at 12:04
  • @StéphaneMottelet Thanks for the correction; I removed that part. Please help me understand how it works. Is `parallel_run` using OpenMP or MPI for parallelization? Are my above definitions correct? – Foad S. Farimani Apr 03 '19 at 12:23
  • Scilab 5.5.2 `parallel_run` uses OpenMP. – Stéphane Mottelet Apr 03 '19 at 13:58
  • @StéphaneMottelet So is this only for Scilab 5.5.2 and only on Linux? Any plans to port it to other platforms (Windows and macOS) and to the latest version? – Foad S. Farimani Apr 03 '19 at 14:02
  • Your sentence "matrix arithmetic on Scilab are all vectorized computation" is not completely exact. In OSX (Accelerate framework or Intel MKL in 6.0.2) and Windows (MKL), the level 3 BLAS uses all available cores of the processor, hence we can say that matrix arithmetic on Scilab is parallel **or** vectorized depending on the platform. – Stéphane Mottelet Apr 03 '19 at 14:05
  • @StéphaneMottelet Added. Feel free to edit my post if there are any flaws. Other points: 1. If it is MKL, then it is also hardware dependent? Any plans to replace MKL with FLOSS and hardware-agnostic libraries? 2. What is the plan for OpenCL or GPGPU in general? – Foad S. Farimani Apr 03 '19 at 14:19