
I am trying to run the code:

from keras.datasets import imdb as im
from keras.preprocessing import sequence as seq
from keras.models import Sequential
from keras.layers import Embedding
from keras.layers import LSTM
from keras.layers import Dense

train_set, test_set = im.load_data(num_words = 10000)
X_train, y_train = train_set
X_test, y_test = test_set

X_train_padded = seq.pad_sequences(X_train, maxlen = 100)
X_test_padded = seq.pad_sequences(X_test, maxlen = 100)

model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=128))
model.add(LSTM(units=128))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
scores = model.fit(X_train_padded, y_train)

When I run the code, it gives me a message:

I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA

To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.

I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.

I don't understand what the issue is or what I am supposed to do next. I installed the "tensorflow" package (1.14.0), but that doesn't solve the issue.

I have looked at this reference but I don't know what I am looking for:

https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions

Can someone please help me? Thanks.

My config: macOS Mojave v10.14.6 (osx-64), Python 3.7 with Spyder under Anaconda, conda version 4.7.12.

Ahsan Khan
  • Probably you can ignore the message. If we can trust its content, it says: If you want non-performance-critical operations to be a little faster, please compile TensorFlow by yourself. However, compiling TensorFlow isn't that easy, so I guess it's fine to do nothing. – Richard Möhn Nov 22 '19 at 05:37

2 Answers


You can ignore the message and everything will work fine.

As far as I can gather from https://github.com/tensorflow/tensorflow/pull/24782/commits/7faefa4bb665e115cc744d7895a407338624993f, when TensorFlow is compiled with MKL-DNN support (which it is, according to your message), MKL-DNN will take care of using all available CPU performance features. So it doesn't matter that TensorFlow wasn't compiled to use them.
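If you simply don't want to see these informational lines, TensorFlow's native log output can be filtered with the `TF_CPP_MIN_LOG_LEVEL` environment variable. A minimal sketch (the variable is read when TensorFlow is imported, so it must be set first):

```python
import os

# TensorFlow reads this variable at import time, so set it *before*
# `import tensorflow`:
#   "0" = all messages, "1" = hide INFO, "2" = hide INFO and WARNING,
#   "3" = hide everything
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"

# import tensorflow as tf  # imported after setting the variable,
#                          # the cpu_feature_guard INFO lines no longer print
```

This only suppresses the messages; it doesn't change how TensorFlow runs, which, as explained above, is fine anyway.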

Richard Möhn

This might not be answering the exact question you have put, but I had a very similar error message when running a similar task.

In addition to the error message above, I also had the following error message:

OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized. OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.

The error was solved with:

conda install nomkl

This is as per this Stack Overflow post.
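For completeness, the OMP error text itself mentions an unsafe environment-variable workaround as an alternative. A sketch of both options (the `nomkl` route is the one that actually removes the duplicate OpenMP runtime):

```shell
# Option 1 (unsafe, per the OMP error message itself): tolerate duplicate
# OpenMP runtimes. May crash or silently produce incorrect results.
export KMP_DUPLICATE_LIB_OK=TRUE

# Option 2 (preferred): switch to non-MKL builds so only a single OpenMP
# runtime is linked into the process.
# conda install nomkl
```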

  • The error message and solution you posted are not related to the original question. They will silence the message, but for the wrong reason. – Richard Möhn Nov 22 '19 at 05:34