
My question is related to this one here, but I am using PyCharm, and I set up my virtual environment with a Python interpreter according to this guide, page 5.

When I run my TensorFlow code, I get the warning:

Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

I could ignore it, but since my model fitting is quite slow, I would like to take advantage of it. However, I do not know how to update my setup within this PyCharm virtual environment so that TensorFlow makes use of AVX2.

Stat Tistician

2 Answers

2

Anaconda/conda as package management tool:

This assumes you have Anaconda/conda installed on your machine; if not, follow this guide: https://docs.anaconda.com/anaconda/install/windows/

conda create --name tensorflow_optimized python=3.7
conda activate tensorflow_optimized

# you need intel's tensorflow version that's optimized to use SSE4.1 SSE4.2 AVX AVX2 FMA
conda install tensorflow-mkl -c anaconda

#run this to check whether the installed version is using MKL,
#which in turn uses all the optimizations that your system provides.
python -c "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"

# you should see something like this as the output.
2020-07-14 19:19:43.059486: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
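
If that log line does not show up on its own, here is a minimal sketch (my own addition, assuming TensorFlow 2.x) that simply runs a small op; the cpu_feature_guard message above is printed when the CPU backend initializes, as long as info-level logging is not suppressed:

# minimal sketch (assumes TensorFlow 2.x): run a small op so the CPU backend
# initializes and the cpu_feature_guard message above gets printed
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "0"  # must be set before importing TF so info logs are shown

import tensorflow as tf
print(tf.reduce_sum(tf.random.normal([1000, 1000])))  # any op will do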

pip3 as package management tool:

py -m venv tensorflow_optimized
.\tensorflow_optimized\Scripts\activate

#once the env is activated, you need intel's tensorflow version 
#that's optimized to use SSE4.1 SSE4.2 AVX AVX2 FMA
pip install intel-tensorflow

#run this to check whether the installed version is using MKL,
#which in turn uses all the optimizations that your system provides.
py -c "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"

# you should see something like this as the output.
2020-07-14 19:19:43.059486: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
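
As an extra sanity check (a sketch of my own, not part of the original instructions; the file name check_env.py is just an example), you can confirm that the activated env really resolves to the Intel build:

# check_env.py -- sanity-check sketch: prints which interpreter and which
# TensorFlow build the currently activated env resolves to
import sys

import tensorflow as tf

print(sys.executable)   # should point inside the tensorflow_optimized env
print(tf.__version__)   # version reported by the installed intel-tensorflow build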

Once you have this, you can use this env in PyCharm.

Before that, with the env activated, run where python on Windows (which python on Linux and macOS); it should give you the path of the interpreter. In PyCharm, go to Preferences -> Project: your project name -> Project Interpreter -> click on the settings symbol -> click on Add.


Select System interpreter -> click on ... -> this will open a popup window which asks for location of python interpreter.


In the location path, paste the path from where python, then click OK.

Now you should see all the packages installed in that env.

Next time you want to select that interpreter for your project, click in the lower right corner of the status bar where it shows python3/python2 (your interpreter name) and select the one you need.


I'd suggest installing Anaconda as your default package manager, as it makes your dev life easier with respect to Python on a Windows machine, but you can make do with pip as well.

Chandan Gm
  • Thanks for your post, I have some questions: python=3.5: I need Python 3.7.x, ideally 3.7.8. Does this also work with python=3.7.8 in the code? I need to be able to do the Google TensorFlow certification exam, which installs a PyCharm plugin. Now that I have conda installed, I am not sure if this still works? Where do I enter the code you have posted, i.e. "conda create --name tensorflow_optimized python=3.5", in PyCharm? Where do I enter "where python"? – Stat Tistician Jul 14 '20 at 15:55
  • Furthermore: I have already set up a venv with normal Python 3.7.8 and some additional packages installed. If I now follow your instructions, will these packages/settings be kept? Or do I have to reinstall all the packages in that venv? – Stat Tistician Jul 14 '20 at 15:57
  • Yeah, you can replace it with the version you need; Anaconda will set up the env with that Python version. The PyCharm plugin and package management don't depend on each other, so it will work with the new setup as well. If you'd like to go with the venv you have already created, open a terminal from the bottom status bar in PyCharm; you should see your venv name at the beginning, like (env_name). Start from pip install; it will retain the installed packages. – Chandan Gm Jul 15 '20 at 03:19
  • You can run those commands in the terminal provided by PyCharm. Just make sure you have activated your venv or conda env. If PyCharm set up your venv, it should already be activated in the terminal. – Chandan Gm Jul 15 '20 at 03:25
  • When I try to run pip install intel-tensorflow in PyCharm I get an error: SyntaxError: invalid syntax? – Stat Tistician Jul 16 '20 at 17:48
-1

If your CPU utilization during training stays under 100% most of the time, you should not even bother getting a different TF binary.

You might not see much, if any, benefit from using AVX2 (or AVX512, for that matter), depending on the workload you are running.

AVX2 is a set of 256-bit CPU vector instructions. Chances are you can get at most a 2x benefit compared to 128-bit streaming instructions. Deep learning models are very much memory-bandwidth bound and would not see much, if any, benefit from switching to larger register sizes. An easy way to check: see how long your CPU utilization stays at 100% during training. If it is under 100% most of the time, you are probably already memory-bound (or bound by something else). If your training runs on a GPU and the CPU is used only for data preprocessing and occasional operations, the benefit would be even less noticeable.
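
For example, here is a rough sketch of that utilization check, assuming the third-party psutil package is installed (pip install psutil); run it in a second terminal while your training is going on:

# rough sketch (assumes psutil is installed): sample overall CPU utilization
# once per second while training runs in another terminal
import psutil

def sample_cpu(samples=30, interval=1.0):
    for _ in range(samples):
        # cpu_percent blocks for `interval` seconds and returns utilization in percent
        print(f"CPU: {psutil.cpu_percent(interval=interval):5.1f}%")

if __name__ == "__main__":
    sample_cpu()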

Back to answering your question. The best way to update the TF binary to get the most out of the latest CPU architecture, CUDA version, Python version, etc. is to build TensorFlow from source, which might take a few hours of your time. That would be the official and most robust way of solving your issue.

If you would be satisfied with just using better CPU instructions, you can try installing different third-party binaries from wherever you can find them. Installing conda and pointing the PyCharm interpreter to the conda installation would be one of the options.

y.selivonchyk
  • Thanks for pointing out this. However, my question was basically how to build tensorflow from source INSIDE the venv in PyCharm. So how to update TF binary in the venv in PyCharm? – Stat Tistician Jul 15 '20 at 09:29