
I have a Python script that reads sensor data from MF4 (MDF4) files and plots it into an image. For this I use the libraries asammdf and matplotlib, both of which depend on numpy.
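
A minimal sketch of what the script does, roughly (file and channel names are made up):

    import matplotlib.pyplot as plt
    from asammdf import MDF

    # Load one measurement file and plot a single channel into an image
    mdf = MDF("measurement.mf4")
    signal = mdf.get("EngineSpeed")   # Signal with .timestamps and .samples

    plt.plot(signal.timestamps, signal.samples)
    plt.xlabel("time [s]")
    plt.ylabel(signal.name)
    plt.savefig("measurement.png")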

On my main Windows computer, where I developed the script, everything works fine: all cores are used and the script runs fast. [Windows CPU view: many threads across all cores] But on the other machines that are supposed to run the script as well, to handle more files, it only uses one core (with multiple threads) while running on Ubuntu 16.04 with a self-compiled Python 3.7.0:

[Linux htop: many threads, but only one core busy]

I searched a lot and tried everything that is suggested; some of the resources are a bit older (for Ubuntu 12.x):
https://shahhj.wordpress.com/2013/10/27/numpy-and-blas-no-problemo/
Importing scipy breaks multiprocessing support in Python
Why does multiprocessing use only a single core after I import numpy?

The Linux machines are freshly installed; only Python 3.7, pip3 and the Python libraries are installed. I even downloaded the newest Ubuntu 19.10 image, where Python 3.7.5 is preinstalled. What I have done:

  • tried os.sched_setaffinity (see the sketch after this list)
  • tried os.system("taskset -p 0xff %d" % os.getpid()), which reports:

pid 20534's current affinity mask: ff
pid 20534's new affinity mask: ff

  • installed ATLAS and set the alternatives, but those do not seem to get picked up
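
What the affinity attempts boil down to is something like this sketch (the count of 8 logical CPUs is just an example for my machines):

    import os

    # Show which logical CPUs this process is currently allowed to run on
    print(os.sched_getaffinity(0))

    # Explicitly allow the process to run on all 8 logical CPUs (0-7);
    # 8 is just an example for an 8-thread machine
    os.sched_setaffinity(0, range(8))
    print(os.sched_getaffinity(0))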

On my Windows machine I have Python 3.7.1, and the output of >>> import numpy; numpy.show_config() is:

blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    library_dirs = ['C:\\projects\\numpy-wheels\\numpy\\build\\openblas']
    libraries = ['openblas']
    language = f77
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    library_dirs = ['C:\\projects\\numpy-wheels\\numpy\\build\\openblas']
    libraries = ['openblas']
    language = f77
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    library_dirs = ['C:\\projects\\numpy-wheels\\numpy\\build\\openblas']
    libraries = ['openblas']
    language = f77
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    library_dirs = ['C:\\projects\\numpy-wheels\\numpy\\build\\openblas']
    libraries = ['openblas']
    language = f77
    define_macros = [('HAVE_CBLAS', None)]

On my Linux machines I have Python 3.7.0, and the numpy config is:

blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]

I do not have any multi-threading logic in my own code or anything similar; all the multi-threading is handled inside the libraries I use. I hope someone can help. I find it very strange that I still run into this kind of problem so many years after the questions linked above.
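
As a quick check that the threading really comes from numpy's BLAS and not from my own code, one can time a large matrix product while watching htop (the matrix size is arbitrary):

    import time
    import numpy as np

    # A large matrix product is multi-threaded by OpenBLAS/MKL itself,
    # so it should spread across all cores without any threading code of mine
    a = np.random.rand(4000, 4000)
    b = np.random.rand(4000, 4000)

    start = time.perf_counter()
    a @ b
    print("matmul took %.2f s" % (time.perf_counter() - start))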

EDIT (because the question about this got deleted): I checked in the BIOS that all cores and hyper-threading are enabled.

white91wolf

1 Answer


asammdf is completely single-threaded. On the Windows machine you might be using numpy with the Intel MKL libraries, which could be why the numpy operations run on multiple cores there.
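
One way to check which BLAS a numpy build is linked against, and to explicitly request a thread count from it, is via the standard OpenBLAS/MKL environment variables; they have to be set before numpy is imported, and the value 8 here is only an example:

    import os

    # OpenBLAS and MKL read these variables when numpy is imported,
    # so they must be set before the import
    os.environ["OPENBLAS_NUM_THREADS"] = "8"   # for OpenBLAS builds
    os.environ["MKL_NUM_THREADS"] = "8"        # for MKL builds

    import numpy
    numpy.show_config()   # shows which BLAS/LAPACK numpy was built against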

danielhrisca
  • Hi Daniel, thanks for the answer. I just found this page: https://software.intel.com/en-us/articles/installing-the-intel-distribution-for-python-and-intel-performance-libraries-with-pip-and - it seems there is even a special numpy build for Intel CPUs. I will check this out (and also this: https://medium.com/@black_swan/using-mkl-to-boost-numpy-performance-on-ubuntu-f62781e63c38) and give some feedback. Thanks so far. – white91wolf Jan 10 '20 at 07:26
  • I downloaded a Python distribution provided by Intel (https://software.intel.com/en-us/distribution-for-python/choose-download/linux). With this it works better and uses 4 CPU cores. Maybe there is still a problem with hyper-threading. – white91wolf Jan 20 '20 at 09:35