14

I'm writing software for hyperbolic partial differential equations in C++. Almost all of the notation is vector- and matrix-based. On top of that, I need a linear algebra solver. And yes, the vector and matrix sizes can vary considerably (from, say, 1000 up to sizes that can only be handled by distributed-memory computing, e.g. clusters or similar architectures). If I lived in utopia, I'd have a linear solver that scales beautifully across clusters, GPUs, and multicores.

When thinking about the data structure that should represent the variables, I came across Boost.uBLAS and MTL4. Both libraries are BLAS level 3 compatible; MTL4 implements a sparse solver and is much faster than uBLAS. Neither has support for multicore processors, not to mention parallelization for distributed-memory computations. On the other hand, the development of MTL4 depends on the sole effort of two developers (at least as I understand it), and I'm sure there is a reason that uBLAS made it into Boost. Furthermore, Intel's MKL library includes an example of binding their structures to uBLAS. I'd like to bind my data and software to a data structure that will be rock solid, developed, and maintained for a long time.

Finally, the question: what is your experience with uBLAS and/or MTL4, and what would you recommend?

thanx, mightydodol

dodol
  • @mightydodol: you are welcome. I added a link to a paper that I had been looking for all day. You might find it interesting. Also corrected a factual error regarding ScaLAPACK. – stephan Jul 01 '09 at 17:09
  • thanx for pointing out Eigen. @stephan Yeah, the paper was really interesting. Almost the same problem as mine. – dodol Jul 02 '09 at 09:19
  • Posted as a comment, since it doesn't fulfill your requirements. I am a happy user of Armadillo (http://arma.sourceforge.net/ ), which makes linear algebra operations a piece of cake. It interfaces with BLAS and LAPACK (so you have speed), has clean syntax, and is the only linear algebra library that is still actively maintained (apart from `boost::ublas`, which I find terribly difficult to use). It does not have support (yet) for sparse matrices, so don't use it if you really need them. – Alexandre C. Mar 11 '11 at 18:44
  • @Alexandre Thx, I wonder how Armadillo compares with Eigen. At first glance, it seems they are very similar. – dodol Mar 31 '11 at 07:30

7 Answers

11

With your requirements, I would probably go for Boost.uBLAS. Indeed, a good deployment of uBLAS should be roughly on par with MTL4 in terms of speed.

The reason is that bindings exist for ATLAS (hence shared-memory parallelization that you can efficiently optimize for your machine), and also for vendor-tuned implementations like the Intel Math Kernel Library or HP's MLIB.

With these bindings, uBLAS with a well-tuned ATLAS / BLAS library doing the math should be fast enough. If you link against a given BLAS / ATLAS, you should be roughly on par with MTL4 linked against the same BLAS / ATLAS via the compiler flag -DMTL_HAS_BLAS, and most likely faster than MTL4 without BLAS, according to their own observations (see for example here, where GotoBLAS outperforms MTL4).
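
For reference, a minimal sketch of what the uBLAS interface looks like (the data values are made up; a plain `prod` call like this is evaluated by uBLAS's own expression templates unless you route it through the BLAS bindings):

```cpp
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <iostream>

namespace ublas = boost::numeric::ublas;

int main() {
    ublas::matrix<double> A(3, 3);
    ublas::vector<double> x(3);
    for (unsigned i = 0; i < A.size1(); ++i) {
        x(i) = 1.0;
        for (unsigned j = 0; j < A.size2(); ++j)
            A(i, j) = 3.0 * i + j;          // arbitrary test values
    }
    // Matrix-vector product, y = A x
    ublas::vector<double> y = ublas::prod(A, x);
    std::cout << y << std::endl;            // prints [3](3,12,21)
    return 0;
}
```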

To sum up, speed should not be your decisive factor as long as you are willing to use some BLAS library. Usability and support are more important. You have to decide whether MTL4 or uBLAS is better suited for you. I tend towards uBLAS, given that it is part of Boost, and MTL4 currently only supports BLAS selectively. You might also find this slightly dated comparison of scientific C++ packages interesting.

One big BUT: for your requirements (extremely big matrices), I would probably skip the "syntactic sugar" of uBLAS or MTL4 and call the "metal" C interface of BLAS / LAPACK directly. But that's just me... Another advantage is that switching to ScaLAPACK (distributed-memory LAPACK; I have never used it) should then be easier for the bigger problems. Just to be clear: for household-size problems, I would not suggest calling a BLAS library directly.
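
To illustrate, a minimal sketch of a direct call through the CBLAS C interface (the 3x3 matrices are made-up examples; link against whichever BLAS you have, e.g. ATLAS, GotoBLAS, or MKL):

```cpp
#include <cblas.h>    // C interface; shipped by ATLAS, GotoBLAS, MKL, ...
#include <cstdio>

int main() {
    const int n = 3;
    double A[] = {1, 0, 0,  0, 1, 0,  0, 0, 1};   // row-major identity
    double B[] = {1, 2, 3,  4, 5, 6,  7, 8, 9};
    double C[9] = {0};
    // C = 1.0 * A * B + 0.0 * C  (dgemm: double-precision GEneral Matrix Multiply)
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    std::printf("C[0] = %g\n", C[0]);             // prints 1
    return 0;
}
```

Compile with something like `g++ gemm.cpp -lcblas -latlas`; the exact link line depends on which BLAS you installed.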

stephan
8

If you're programming vectors, matrices, and linear algebra in C++, I'd look at Eigen:

http://eigen.tuxfamily.org/

It's faster than uBLAS (not sure about MTL4) and has much cleaner syntax.
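
To give a flavor of that syntax, here is a minimal sketch of solving a small dense system (this answer predates Eigen 3, whose API the sketch assumes; the values are arbitrary):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3d A;
    A <<  2, -1,  0,
         -1,  2, -1,
          0, -1,  2;
    Eigen::Vector3d b(1, 0, 1);
    // Solve A x = b via a partial-pivoting LU decomposition
    Eigen::Vector3d x = A.partialPivLu().solve(b);
    std::cout << x << std::endl;
    return 0;
}
```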

Geoff Hutchison
  • Eigen is the only other matrix library besides ATLAS / LAPACK that we use. It is substantially faster for matrix multiplication for smaller matrices (below 100 rows) and comparable in performance to a standard ATLAS / non-vendor-tuned BLAS for larger matrices (provided the processor supports SSE instructions). It is however substantially slower than ATLAS / LAPACK for advanced linear algebra (e.g. LU decomposition) for larger matrices and doesn't support multi-core processors. – stephan Jul 02 '09 at 10:17
  • @quant_dev: I believe that's correct, although I suspect the developers would be willing to help with that. I am not one of the developers. – Geoff Hutchison Oct 29 '09 at 14:54
  • @stephan: I think the development versions of Eigen are working on ATLAS and other "backends" for appropriate features. – Geoff Hutchison Oct 29 '09 at 14:55
  • @stephan: Latest version of Eigen can link to Intel MKL library, which supports multi-core. – zhanxw Nov 29 '12 at 14:26
  • @zhanxw: thanks for updating my comment. The integration of MKL (which I mentioned in http://stackoverflow.com/questions/2222549/best-c-matrix-library-for-sparse-unitary-matrices/2222983#2222983, but forgot to add here) indeed alleviates most of the performance issues that one might have had in the past in certain use cases. – stephan Nov 30 '12 at 06:33
5

For new projects, it's probably best to stay away from Boost's uBLAS. The uBLAS FAQ has even carried this warning since late 2012:

Q: Should I use uBLAS for new projects? ... the last major improvement of uBLAS was in 2008 and no significant change was committed since 2009. ... Performance? There are faster alternatives. Cutting edge? uBLAS is more than 10 years old and missed all new stuff from C++11.

user2678378
  • Since it's 2013, this is valuable information. Something completely irrelevant to the subject of this post: I think that the word "alternative" covers one or all possibilities, as of things, propositions, or courses of action (so there is no need for the plural form), i.e. alternative libraries are faster. – dodol Aug 30 '13 at 08:22
2

There is one C++ library missing in this list: FLENS

http://flens.sf.net

Disclaimer: Yes, this is my baby

  • It is header-only
  • Comes with a simple, non-performant, generic (i.e. templated) C++ reference implementation of BLAS.
  • If available, you can use an optimized BLAS implementation as a backend. In this case it's like using BLAS directly (some benchmarks I should update).
  • You can use overloaded operators instead of calling BLAS functions (see the sketch after this list).
  • It comes with its own, stand-alone, generic re-implementation of a bunch of LAPACK functions. We call this port FLENS-LAPACK.
  • FLENS-LAPACK has exactly the same accuracy and performance as Netlib's LAPACK. And in my experience, (FLENS-)LAPACK+ATLAS or (FLENS-)LAPACK+OpenBLAS gives you the same performance as ACML or MKL.
  • FLENS has a different policy regarding the creation of temporary vectors/matrices in the evaluation of linear algebra expressions. The FLENS policy is: never create them! However, in a special debug mode we allow the creation of temporaries "when necessary". This "when necessary" policy is the default in other libraries like Eigen, Armadillo, or Matlab.
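
A minimal sketch of the overloaded-operator style, based on the FLENS tutorial (details such as the typedefs and list initialization may differ between versions, so check the documentation):

```cpp
#include <flens/flens.cxx>
#include <iostream>

using namespace flens;

int main() {
    typedef GeMatrix<FullStorage<double> >  DMatrix;
    typedef DenseVector<Array<double> >     DVector;

    DMatrix A(3, 3);
    DVector x(3), y(3);
    A = 1, 2, 3,
        4, 5, 6,
        7, 8, 9;   // list initialization via the overloaded comma operator
    x = 1, 1, 1;
    // Evaluated without temporaries; forwards to BLAS gemv when an
    // optimized backend is available.
    y = A * x;
    std::cout << "y = " << y << std::endl;
    return 0;
}
```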
Michael Lehn
  • Your website is clean, but the documentation is hard... e.g. how to do eigen decomposition on a symmetric matrix? A short reference sheet (e.g. Eigen) will attract more users, IMHO. – zhanxw Nov 29 '12 at 14:31
  • An overview of linear algebra functions can be found under [FLENS-LAPACK](http://apfel.mathematik.uni-ulm.de/~lehn/FLENS/flens/lapack/lapack.html). However, you are right, the documentation here is not complete. It is focused on frequently needed functionality as well as on functions that I have already ported to FLENS/C++. But most/many of the other LAPACK routines can be used if a native LAPACK implementation is available. ... I will try to spend some extra time on writing and extending the documentation further. – Michael Lehn Jan 03 '13 at 00:18
1

You can see the performance differences directly here: http://www.osl.iu.edu/research/mtl/mtl4/doc/performance.php3

Both are reasonable libraries to use in terms of their interfaces, but I don't think that getting through the Boost review process necessarily makes uBLAS much more robust. I've had my share of nightmares with unobvious side effects and unintended consequences of uBLAS implementations.

That's not to say uBLAS is bad; it's really good. But given the dramatic performance differences in MTL4's favor these days, I think it's worth using instead of uBLAS, even though it's arguably a bit more risky because of its "only 2 developers" support group.

At the end of the day, it's about speed with a matrix library, so go with MTL4.
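
For reference, a minimal sketch of what MTL4 code looks like (based on my recollection of the MTL4 documentation, so verify details such as the header path and scalar-assignment semantics against your version):

```cpp
#include <boost/numeric/mtl/mtl.hpp>   // MTL4 ships under boost/numeric/mtl
#include <iostream>

int main() {
    const unsigned n = 3;
    mtl::dense2D<double>      A(n, n);
    mtl::dense_vector<double> x(n, 1.0), y(n);
    A = 2.0;   // in MTL4, assigning a scalar sets a multiple of the identity
    // Expression templates evaluate this without temporaries; with
    // -DMTL_HAS_BLAS the product can be forwarded to a tuned BLAS.
    y = A * x;
    std::cout << y << std::endl;
    return 0;
}
```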

Chris Harris
1

From my own experience, MTL4 is much faster than uBLAS and it is also faster than Eigen.

Chris Frederick
Tarek
0

There is a parallel version of MTL4. Just have a look at Simunova.