It seems numpy.dot is not equal to BLAS's gemv/gemm; here is the experiment:

>>> import numpy
>>> numpy.show_config()
lapack_opt_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/anaconda/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blas_opt_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/anaconda/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
openblas_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/anaconda/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
openblas_lapack_info:
    libraries = ['openblas']
    library_dirs = ['/usr/local/anaconda/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blas_mkl_info:
  NOT AVAILABLE
>>> A=numpy.random.randn(100,50)
>>> x=numpy.random.randn(50)
>>> from scipy import linalg
>>> gemv = linalg.get_blas_funcs("gemv")
>>> numpy.all(gemv(1,A,x)==numpy.dot(A,x))
False
>>> gemm = linalg.get_blas_funcs("gemm")
>>> numpy.all(gemm(1,A,x)==numpy.dot(A,x))
False

I don't know why. Could anyone show me how to construct a BLAS-based function that is equal to numpy.dot?

citihome
  • `numpy.allclose(gemv(1, A, x), numpy.dot(A, x))` yields `True` on my system. In contrast, `numpy.allclose(gemm(1, A, x), numpy.dot(A, x))` doesn't, but that's a dimensional problem: `numpy.allclose(numpy.squeeze(gemm(1, A, x)), numpy.dot(A, x))` is also `True`. –  Dec 26 '15 at 01:41
  • It is right when using the `numpy.allclose` test, but wrong with `numpy.all`. If `numpy.dot` is implemented on top of the underlying BLAS gemv/gemm (or even dot), the return values should be exactly equal, but the test case shows they are not. @Evert – citihome Dec 26 '15 at 02:40
  • Your question asks if they're *equal* (they are, see my first comment), but your comment asks if `dot` is *implemented* using gemv or gemm; those questions are not the same. For an answer to the latter question, you'd simply have to look at the source code. –  Dec 26 '15 at 04:10
  • Perhaps more interestingly: why do you want to know? As shown, precision is not the problem. Just beware of premature optimisation versus readability. –  Dec 26 '15 at 04:12
  • I tried to implement an SGD-based algorithm for multiclass classification (logistic regression), where the iteration can be formulated as $\theta_t \leftarrow \theta_t - \eta\,\nabla L(\theta_t; (x_t, y_t))$; we set the learning rate $\eta = 1$ (there we can give it a probabilistic interpretation). The experiment shows that the numpy-based implementation had nice accuracy, but its speed was very slow. When we used BLAS, the accuracy degraded a lot. The main difference lies in numpy.dot vs. BLAS gemm/gemv; that's why I would prefer numpy.dot == BLAS gemm/gemv. – citihome Dec 26 '15 at 04:37
  • This should answer the question: https://stackoverflow.com/a/19839985/1121352 – gaborous Jul 04 '17 at 18:08
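The checks described in the comments above can be sketched as follows: compare with `numpy.allclose` rather than exact equality (BLAS and `numpy.dot` may accumulate sums in different orders, so bit-for-bit equality is not guaranteed), and squeeze the gemm result, which comes back 2-D when the right-hand operand is a 1-D vector. This is a minimal sketch of the comparison, not the original poster's code:

```python
import numpy as np
from scipy import linalg

A = np.random.randn(100, 50)
x = np.random.randn(50)

# Ask SciPy for BLAS routines matching the dtypes of A and x (dgemv/dgemm here).
gemv = linalg.get_blas_funcs("gemv", (A, x))
gemm = linalg.get_blas_funcs("gemm", (A, x))

# gemv(alpha, A, x) computes alpha * A @ x; it agrees with numpy.dot
# up to floating-point rounding, so compare with allclose, not ==.
print(np.allclose(gemv(1.0, A, x), np.dot(A, x)))

# gemm promotes the 1-D x to a column, returning shape (100, 1);
# squeeze it before comparing against the 1-D numpy.dot result.
print(np.allclose(np.squeeze(gemm(1.0, A, x)), np.dot(A, x)))
```

Both comparisons print `True`, confirming the comment above: the discrepancy is rounding plus a shape mismatch, not a different result.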

0 Answers