I have built the TensorFlow C++ library and tested it with a few projects successfully (the prediction results are correct), but the following warnings appear every time, and I would like to understand what causes them:
2019-07-16 10:33:52.057179: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-07-16 10:33:52.082548: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3407965000 Hz
2019-07-16 10:33:52.082883: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x44d56d0 executing computations on platform Host. Devices:
2019-07-16 10:33:52.082903: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
2019-07-16 10:33:52.557067: I tensorflow/core/common_runtime/optimization_registry.cc:35] Running all optimization passes in grouping 0. If you see this a lot, you might be extending the graph too many times (which means you modify the graph many times before execution). Try reducing graph modifications or using SavedModel to avoid any graph modification
2019-07-16 10:33:52.694202: I tensorflow/core/common_runtime/optimization_registry.cc:35] Running all optimization passes in grouping 1. If you see this a lot, you might be extending the graph too many times (which means you modify the graph many times before execution). Try reducing graph modifications or using SavedModel to avoid any graph modification
2019-07-16 10:33:53.157970: I tensorflow/core/common_runtime/optimization_registry.cc:35] Running all optimization passes in grouping 2. If you see this a lot, you might be extending the graph too many times (which means you modify the graph many times before execution). Try reducing graph modifications or using SavedModel to avoid any graph modification
2019-07-16 10:33:53.228415: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1337] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2019-07-16 10:33:53.237208: I tensorflow/core/common_runtime/optimization_registry.cc:35] Running all optimization passes in grouping 3. If you see this a lot, you might be extending the graph too many times (which means you modify the graph many times before execution). Try reducing graph modifications or using SavedModel to avoid any graph modification
TensorFlow C++ version: 1.13
Platform: Ubuntu 16.04
About the 1st warning --"
Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
" I know the solution is pip install --ignore-installed --upgrade "Download URL" from link here (enter link description here).Does this means i should download suitable URL and then upgrade it ? I have replaced my Tensorflow 1.13.1 with pip install --ignore-installed --upgrade /mypath/tensorflow-1.13.1-cp35-cp35m-linux_x86_64.whl --user (link: enter link description here) successfully ,but the warning --" Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA " is still the same.I really have no idea.
About the 2nd warning --
If you see this a lot, you might be extending the graph too many times (which means you modify the graph many times before execution). Try reducing graph modifications or using SavedModel to avoid any graph modification
I have no idea about this one at the moment.
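If I understand the message, it is complaining that I modify (extend) the graph many times before running it, and it suggests SavedModel as a way to avoid that. Below is a minimal sketch of what I think the SavedModel route looks like in the C++ API (the export directory and the helper function are only placeholders, not my actual code):

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"

// Sketch: load the graph exactly once, then reuse bundle->session for
// every Run() call so the graph is never modified between runs.
tensorflow::Status LoadModelOnce(const std::string& export_dir,
                                 tensorflow::SavedModelBundle* bundle) {
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;
  return tensorflow::LoadSavedModel(session_options, run_options, export_dir,
                                    {tensorflow::kSavedModelTagServe}, bundle);
}

Is this warning harmless as long as the predictions are correct, or should I restructure my code along these lines?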
About the 3rd warning --"
(One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
", someone in a linked answer told me that I should activate TensorFlow's XLA for the C API like the following:
#include "c_api_experimental.h"
TF_SessionOptions* options = TF_NewSessionOptions();
TF_EnableXLACompilation(options,true);
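For context, I then create my session from these options roughly like this (error handling trimmed; graph stands for the TF_Graph* I load from my model file, so treat it as a placeholder):

TF_Status* status = TF_NewStatus();
TF_Session* session = TF_NewSession(graph, options, status);  // graph: my loaded TF_Graph*
TF_DeleteSessionOptions(options);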
But this solution gives me a new error when I build my project: collect2: error: ld returned 1 exit status.
I think this means I need to link an additional library in order to use TF_EnableXLACompilation(). Is that right?
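For reference, my link line currently looks roughly like the following (the include and library paths are placeholders for my real ones). My guess is that TF_EnableXLACompilation() lives in the C API library (libtensorflow) rather than in libtensorflow_cc, so perhaps I need to add -ltensorflow as well, but I am not sure:

g++ -std=c++11 my_app.cc \
    -I/path/to/tensorflow/include \
    -L/path/to/tensorflow/lib \
    -ltensorflow_cc -ltensorflow_framework \
    -o my_app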
Any help would be much appreciated!