I am computing a NumPy dot product of two matrices (call them a and b).
When a has shape (10000, 10000) and b has shape (1, 10000), numpy.dot(a, b.T) uses all CPU cores.
But when a has shape (10000, 10000) and b has shape (2, 10000), numpy.dot(a, b.T) uses only one CPU core.
This happens whenever b has between 2 and 15 rows, i.e. for shapes (2, 10000) through (15, 10000).
Example:
import numpy as np

a = np.random.rand(10**4, 10**4)

def dot(a, b_row_size):
    b = np.random.rand(b_row_size, 10**4)
    for i in range(10):
        # dot operation
        x = np.dot(a, b.T)

# Using all CPU cores
dot(a, 1)

# Using only one CPU core
dot(a, 2)

# Using only one CPU core
dot(a, 5)

# Using only one CPU core
dot(a, 15)

# Using all CPU cores
dot(a, 16)

# Using all CPU cores
dot(a, 50)
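To observe the effect without watching a CPU monitor, the calls above can also be timed. A rough sketch (it uses smaller matrices than the question so it runs quickly; the exact crossover row count may differ by machine and OpenBLAS build):

```python
import time
import numpy as np

# Smaller than in the question so the sketch finishes fast;
# the threading behavior at this size may differ.
n = 2000
a = np.random.rand(n, n)

for rows in (1, 2, 15, 16, 50):
    b = np.random.rand(rows, n)
    t0 = time.perf_counter()
    x = np.dot(a, b.T)
    elapsed = time.perf_counter() - t0
    print(f"b rows = {rows:3d}: {elapsed:.4f} s, result shape {x.shape}")
```

If the single-threaded regime is real, the per-call time for 2-15 rows should scale worse than the multi-threaded cases relative to the work done.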
Output of np.show_config():
openblas_lapack_info:
    define_macros = [('HAVE_CBLAS', None)]
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
lapack_opt_info:
    define_macros = [('HAVE_CBLAS', None)]
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
blas_mkl_info:
    NOT AVAILABLE
lapack_mkl_info:
    NOT AVAILABLE
blas_opt_info:
    define_macros = [('HAVE_CBLAS', None)]
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
blis_info:
    NOT AVAILABLE
openblas_info:
    define_macros = [('HAVE_CBLAS', None)]
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
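Since the config shows NumPy is linked against OpenBLAS, I have also tried pinning the BLAS thread count explicitly via the OPENBLAS_NUM_THREADS environment variable (it must be set before NumPy is first imported; MKL and BLIS use different variable names). A minimal sketch:

```python
import os

# Must be set before NumPy (and hence OpenBLAS) is first imported.
os.environ["OPENBLAS_NUM_THREADS"] = "4"

import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(2, 1000)
x = np.dot(a, b.T)  # OpenBLAS is now capped at 4 threads
print(x.shape)
```

This caps the thread pool, but it does not explain why OpenBLAS drops to a single core only for 2 to 15 rows.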