
Can somebody please explain the following TensorFlow terms

  1. inter_op_parallelism_threads

  2. intra_op_parallelism_threads

or, please, provide links to the right source of explanation.

I have run a few tests with different values of these parameters, but the results were not consistent enough for me to draw a conclusion.

asked by itsamineral; edited by nbro

4 Answers


The inter_op_parallelism_threads and intra_op_parallelism_threads options are documented in the source of the tf.ConfigProto protocol buffer. These options configure two thread pools used by TensorFlow to parallelize execution, as the comments describe:

// The execution of an individual op (for some op types) can be
// parallelized on a pool of intra_op_parallelism_threads.
// 0 means the system picks an appropriate number.
int32 intra_op_parallelism_threads = 2;

// Nodes that perform blocking operations are enqueued on a pool of
// inter_op_parallelism_threads available in each process.
//
// 0 means the system picks an appropriate number.
//
// Note that the first Session created in the process sets the
// number of threads for all future sessions unless use_per_session_threads is
// true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;

There are several possible forms of parallelism when running a TensorFlow graph, and these options provide some control over multi-core CPU parallelism:

  • If you have an operation that can be parallelized internally, such as matrix multiplication (tf.matmul()) or a reduction (e.g. tf.reduce_sum()), TensorFlow will execute it by scheduling tasks in a thread pool with intra_op_parallelism_threads threads. This configuration option, therefore, controls the maximum parallel speedup for a single operation. Note that if you run multiple operations in parallel, these operations will share this thread pool.

  • If you have many operations that are independent in your TensorFlow graph (because there is no directed path between them in the dataflow graph), TensorFlow will attempt to run them concurrently, using a thread pool with inter_op_parallelism_threads threads. If those operations have a multithreaded implementation, they will (in most cases) share the same thread pool for intra-op parallelism.

Finally, both configuration options take a default value of 0, which means "the system picks an appropriate number." Currently, this means that each thread pool will have one thread per CPU core in your machine.
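To make the two pools concrete, here is a toy sketch in plain Python (standard library only). This is not TensorFlow code: the pool sizes and the "op" are made up for illustration. An "inter-op" executor schedules independent ops concurrently, while each parallelizable op splits its own work across a shared "intra-op" executor:

```python
from concurrent.futures import ThreadPoolExecutor

INTER_OP_THREADS = 2   # how many independent ops may run at once
INTRA_OP_THREADS = 4   # worker threads shared by parallelizable ops

# Shared pool used *inside* ops, analogous to intra_op_parallelism_threads.
intra_pool = ThreadPoolExecutor(max_workers=INTRA_OP_THREADS)

def parallel_reduce_sum(values, chunks=INTRA_OP_THREADS):
    """One 'op' whose internal work is split across the intra-op pool."""
    step = max(1, len(values) // chunks)
    pieces = [values[i:i + step] for i in range(0, len(values), step)]
    partials = intra_pool.map(sum, pieces)  # intra-op parallelism
    return sum(partials)

# Two independent "ops" (no data dependency between them) are scheduled
# on the inter-op pool, analogous to inter_op_parallelism_threads.
with ThreadPoolExecutor(max_workers=INTER_OP_THREADS) as inter_pool:
    a = inter_pool.submit(parallel_reduce_sum, list(range(1000)))
    b = inter_pool.submit(parallel_reduce_sum, list(range(2000)))
    result = a.result() + b.result()

print(result)  # 499500 + 1999000 = 2498500
```

Note how both "ops" share the single intra-op pool, just as the answer describes: running more ops concurrently does not create more intra-op threads.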

answered by mrry; edited by Innat
  • Can this be used to parallelise my code over multiple CPUs? How can I use these functions to achieve fault tolerance in the event that one of the machines fails in the cluster? – itsamineral Dec 20 '16 at 09:51
    These options control the maximum amount of parallelism you can get from running your TensorFlow graph. However, they rely on the operations that you run having parallel implementations (like many of the standard kernels do) for intra-op parallelism; and the availability of independent ops to run in the graph for inter-op parallelism. However, if (for example) your graph is a linear chain of operations, and those operations have only serial implementations, then these options won't add parallelism. The options are not related to fault tolerance (or distributed execution). – mrry Dec 20 '16 at 15:31
    It seems the two options only work for CPUs but not GPUs? If I had tf.add_n operator of multiple parallel matrix multiplication based operations and run in GPUs, how is the parallelization done in default and can I control it? – chentingpc Apr 30 '17 at 03:55
    How much does setting both values to 1 affect the speed? Does setting both to one mean that tensorflow will use only one thread? (I just tried and I can see all my cores usage going up once I start training and I don't really see a difference in speed) – Martin Thoma Aug 07 '18 at 14:55
    @mrry So if I understand the answer correctly, `intra` controls the number of cores (within 1 node), and `inter` controls the number of nodes, right? Or loosely speaking, `intra` works like OpenMP, and `inter` works like OpenMPI? Please correct me if I am wrong. – Bs He Oct 19 '18 at 18:39
  • and if the two settings apply to both CPU and GPU? Thanks. – Bs He Oct 19 '18 at 20:28
  • What does 'blocking' mean? Normally, there's no IO, only Tensor calculation, so I don't expect 'blocking' to mean IO blocking. – Joshua Chia Apr 06 '19 at 04:30
  • @mrry When we leave it to the default of 0, the system picks the appropriate number for one session as a whole or varies the number for every op that can be parallelized? – Shibani Oct 31 '19 at 18:42
  • @mrry, could you tell me those two parameters relation with OMP_NUM_THREADS? thanks a lot. – lxy Feb 14 '20 at 01:31
  • These settings only control number of thread in one CPU? so If I have say 8CPU with 2 threads each setting to 16 will use all threads within all CPU? Also is this only multi-processing operations such as matrix multiplication and reduce_sum instead of training in distributed manner (ex: data parallelism)? – haneulkim Aug 03 '23 at 05:40

To get the best performance from a machine, change the parallelism threads and OpenMP settings as below for the TensorFlow backend:

import tensorflow as tf

# Set NUM_PARALLEL_EXEC_UNITS to the number of cores per socket in your
# machine (4 here is just an example value); a value of 0 lets the
# system choose appropriate settings.
NUM_PARALLEL_EXEC_UNITS = 4

config = tf.ConfigProto(intra_op_parallelism_threads=NUM_PARALLEL_EXEC_UNITS,
                        inter_op_parallelism_threads=2,
                        allow_soft_placement=True,
                        device_count={'CPU': NUM_PARALLEL_EXEC_UNITS})

session = tf.Session(config=config)

Answer to the comment below:

allow_soft_placement=True

If you would like TensorFlow to automatically choose an existing, supported device to run the operations on when the specified device doesn't exist, you can set allow_soft_placement to True in the configuration option when creating the session. In other words, it controls device-placement fallback; it does not affect GPU memory allocation (dynamic GPU memory growth is controlled separately, via gpu_options.allow_growth).

answered by mrk

Tensorflow 2.0 Compatible Answer: To configure inter_op_parallelism_threads and intra_op_parallelism_threads when executing in Graph Mode with TensorFlow 2.0, use tf.compat.v1.ConfigProto.
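As a minimal sketch (assuming a TensorFlow 2.x installation with the v1 compatibility module; the thread counts are example values), the configuration might look like:

```python
import tensorflow as tf

# Run in graph mode and configure both thread pools via the v1 API.
tf.compat.v1.disable_eager_execution()
config = tf.compat.v1.ConfigProto(
    intra_op_parallelism_threads=4,  # threads available inside one op
    inter_op_parallelism_threads=2)  # independent ops run concurrently
sess = tf.compat.v1.Session(config=config)
```

Note that these options must be set before the first Session is created, since (as the answer above quotes) the first Session fixes the thread counts for the whole process.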


This works for me:

import tensorflow as tf

# TF 2.x API: limit both thread pools to a single thread each,
# making CPU op execution effectively serial (useful for reproducibility
# or for limiting TensorFlow's CPU usage).
tf.config.threading.set_inter_op_parallelism_threads(1)
tf.config.threading.set_intra_op_parallelism_threads(1)
    Remember that Stack Overflow isn't just intended to solve the immediate problem, but also to help future readers find solutions to similar problems, which requires understanding the underlying code. This is especially important for members of our community who are beginners, and not familiar with the syntax. Given that, **can you [edit] your answer to include an explanation of what you're doing** and why you believe it is the best approach? – Jeremy Caney Jun 05 '23 at 20:20