
I read here that it is important to "make sure that numpy uses optimized version of BLAS/LAPACK libraries on your system."

When I input:

import numpy as np
np.__config__.show()

I get the following results:

blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/home/anaconda3/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/home/anaconda3/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/home/anaconda3/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/home/anaconda3/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]

Does this mean my version of numpy is using optimized BLAS/LAPACK libraries, and if not, how can I set numpy so that it does use the optimized version?

ManUtdBloke

2 Answers


Kind of. OpenBLAS is quite alright. I just took the first link I could find on Google when searching for "OpenBLAS, ATLAS, MKL comparison":

http://markus-beuckelmann.de/blog/boosting-numpy-blas.html

Now, this is not the whole story. The differences may be negligible, small, or large depending on the algorithms you need. There is really no alternative to running your own code linked against the different implementations.
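As a starting point, here is a minimal timing sketch of that idea; the matrix size and the chosen operations are arbitrary, so swap in whatever your workload actually does and run the same script against each NumPy build you want to compare:

import time
import numpy as np

# Build two reasonably large matrices; 2000 x 2000 is an arbitrary size.
np.random.seed(0)
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

# Time a couple of BLAS/LAPACK-heavy operations.
for name, fn in [("matmul", lambda: a.dot(b)),
                 ("svd", lambda: np.linalg.svd(a, full_matrices=False))]:
    t0 = time.perf_counter()
    fn()
    print(name, round(time.perf_counter() - t0, 3), "s")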

My favourites, on average across all sorts of linear-algebra problems (SVDs, eigendecompositions, real and pseudo-inverses, factorisations, ...), single-core and multi-core, on the different OSes:

MacOS: Accelerate framework (ships with the OS)

Linux/Windows:

  1. MKL
  2. at a considerable distance, but still quite alright: ATLAS and OpenBLAS, roughly on par
  3. ACML has always been a disappointment to me, even on AMD processors

TLDR: Your setup is fine. But if you want to squeeze the last drop of blood out of your CPU / RAM / mainboard combination, you need MKL. It comes, of course, with quite a price tag, but if it lets you get away with hardware half as expensive, it may be worth it. And if you write an open-source package, you may use MKL free of charge for development purposes.
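If you do switch to an MKL-backed build (with Anaconda that usually means installing a NumPy variant linked against MKL; the exact command depends on your channels), you can confirm programmatically which BLAS you ended up with. A minimal sketch, assuming a numpy.distutils-era build like the one in the question, where the blocks printed by show() are also exposed as dicts on np.__config__ (an empty dict corresponding to NOT AVAILABLE):

import numpy as np

# The getattr defaults keep this safe on builds that lack a given entry.
mkl = getattr(np.__config__, "blas_mkl_info", {})
openblas = getattr(np.__config__, "openblas_info", {})

if mkl:
    print("NumPy is linked against MKL:", mkl.get("libraries"))
elif openblas:
    print("NumPy is linked against OpenBLAS:", openblas.get("libraries"))
else:
    print("No optimized BLAS reported by np.__config__")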

Kaveh Vahedipour
    Quite a price tag, indeed: MKL is [free](https://software.seek.intel.com/performance-libraries), for personal *and* commercial use. So is [IntelPython](https://software.intel.com/en-us/articles/end-user-license-agreement), which is an Anaconda build with Intel tweaks, including MKL. It's also possible to install Intel packages from PyPI (see for instance [Intel-numpy](https://pypi.org/project/intel-numpy/)). Now, I can't guarantee you can find hardware half as expensive... –  Apr 01 '19 at 16:37

To track what libraries get loaded on MacOS,

export DYLD_PRINT_LIBRARIES=1   # see man dyld

and to see what libs xx.dylib or xx.so would load in turn,

otool -L xx.dylib

(sorry, don't know about other platforms).
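For Linux (not covered above), a rough analogue is to look at which BLAS-related shared objects are actually mapped into the running process; a sketch assuming a system where /proc/self/maps is available:

import numpy as np

# Exercise a BLAS routine so the library is certainly loaded, then list
# mapped shared objects whose path mentions blas or mkl.
np.dot(np.ones((2, 2)), np.ones((2, 2)))
with open("/proc/self/maps") as maps:
    paths = {line.split()[-1] for line in maps
             if "blas" in line.lower() or "mkl" in line.lower()}
for p in sorted(paths):
    print(p)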


A different question is: does it matter? How different are MacOS Accelerate, OpenBLAS, MKL, ...? Measuring runtimes across different user problems, libraries, compilers, multicore setups, memory configurations, ... is a tall order. Does anyone know of a wide-range testbench on the web newer than benchmarking-python-vs-c-using-blas-and-numpy from 2014?

See also:
Googling "openblas benchmark macos python" gives about 31,000 hits.

numpy-site.cfg (used by pip -> setup.py) mentions several BLAS / LAPACK alternatives, which "haven't been benchmarked with NumPy or SciPy yet".

NumPy and SciPy dropped support for Accelerate in 2018.

denis