8

I just learned the basics of parallel processing in Java. I read this question: Multiple threads and performance on a single CPU, and wondered whether there is another reason why multiple threads might be faster than a single thread on a single-core system. I was thinking about how every thread has its own piece of memory that it uses. Imagine in Java that FXML were part of the main thread. This would likely increase the size of the main thread's memory, and in turn this might slow the thread down, because it would have to load more values from swap or, worse, make more calls to main memory (I think the current thread's values are copied into the cache).

To sum it up, can making multiple threads on a single-core system increase performance due to the separated memory?

RabbitBones22
  • 302
  • 4
  • 16

1 Answer

23

Having multiple threads on a single CPU can improve performance in the majority of cases, because most of the time a thread is not busy doing computations; it is waiting for things to happen.

This includes I/O, such as waiting for a disk operation to complete, waiting for a packet to arrive from the network, waiting for user input, etc. and even some non-I/O situations, such as waiting for a different thread to signal that an event has occurred.

So, since threads spend the vast majority of their time doing nothing but waiting, they compete against each other for the CPU far less frequently than you might think.
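
To see this in code, here is a minimal, hypothetical Java sketch (the class name, task count, and timings are made up for illustration): `Thread.sleep` stands in for a blocking disk or network call, and ten such tasks are run first on one thread and then on a pool of ten threads. Because each task spends its time waiting rather than computing, the threaded run finishes in roughly the time of a single task, even with only one core available.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IoBoundDemo {

    // Stands in for a blocking I/O call (disk read, network request, ...).
    // While the thread sleeps it uses no CPU time at all.
    static void fakeBlockingIo() {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final int tasks = 10;

        // Sequential: one thread sits through all ten "I/O" waits back to back.
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            fakeBlockingIo();
        }
        System.out.printf("One thread:  %d ms%n",
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));

        // Concurrent: ten threads all wait at the same time, so even a single
        // core finishes in roughly the time of one call.
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(IoBoundDemo::fakeBlockingIo);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("Ten threads: %d ms%n",
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
    }
}
```

The same pattern applies to real workloads whenever the threads are blocked in I/O rather than sleeping.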

That's why if you look at the number of active threads in a modern desktop computer you are likely to see hundreds of threads, and if you look at a server, you are likely to see thousands of threads. That's clearly a lot more than the number of cores that the computer has, and obviously, it would not be done if there was no benefit from it.

The only situation where multiple threads on a single core will not improve performance is when the threads are busy doing non-stop calculations. This tends to only happen in specialized situations, like scientific computing, cryptocurrency mining, etc.
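
For contrast, here is a sketch of the CPU-bound case (again, the names and workload are illustrative, not taken from the question): splitting a pure arithmetic loop across four threads overlaps nothing on a single core, because there is no waiting to hide, so it finishes no faster there; on a multi-core machine the same code would, of course, speed up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CpuBoundDemo {

    // Pure computation: the thread never blocks, so on a single core there is
    // no waiting that other threads could overlap with.
    static long burnCpu(long iterations) {
        long acc = 0;
        for (long i = 0; i < iterations; i++) {
            acc += i * 31 + 7;
        }
        return acc;
    }

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        final long total = 400_000_000L;

        // One thread does all the work.
        long start = System.nanoTime();
        long single = burnCpu(total);
        System.out.printf("1 thread : %d ms (result %d)%n",
                (System.nanoTime() - start) / 1_000_000, single);

        // Four threads each do a quarter of the work. On a single core this is
        // no faster (often slightly slower, due to scheduling overhead);
        // on a multi-core machine it would, of course, get faster.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        start = System.nanoTime();
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            parts.add(pool.submit(() -> burnCpu(total / 4)));
        }
        long combined = 0;
        for (Future<Long> part : parts) {
            combined += part.get();
        }
        pool.shutdown();
        System.out.printf("4 threads: %d ms (result %d)%n",
                (System.nanoTime() - start) / 1_000_000, combined);
    }
}
```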

So, multiple threads on a single-core system do usually increase performance, but this has very little to do with memory, and to the extent that it does, it has nothing to do with "separated" memory.

As a matter of fact, running multiple threads (on the same core, or even on different cores of the same chip) that mostly access different areas of memory (and they mostly do) tends to hurt performance: each time the CPU switches from one thread to another, it begins to access a different set of memory locations, which are unlikely to be in the CPU's cache, so each context switch tends to be followed by a barrage of cache misses, which represent overhead. But usually it is still worth it.

Mike Nakis
  • 56,297
  • 11
  • 110
  • 142
  • Ah! I/O is all data movement, that really clears things up. I really understand it now! – RabbitBones22 May 15 '17 at 05:51
  • How about a simple code demo - to see cases when performance is improved. I would appreciate a link to github. – Yan Khonski May 16 '17 at 15:14
  • Thanks for an explanation. I have one doubt: Suppose my process does only CPU intensive calculations (for instance, incrementing a counter thread-safely). So won't my process have a higher percentage of `CPU share` with multiple threads, thereby increasing the total throughput? – bitbyter Aug 26 '19 at 08:26
  • 1
    Your question requires a very complex answer. First of all, incrementing a counter thread-safely is not necessarily CPU intensive; it involves locking and therefore it requires cross-thread synchronization, which means that while one thread has access to the counter, all other threads are waiting doing nothing. – Mike Nakis Aug 27 '19 at 19:43
  • 1
    But set that aside. Secondly, from a global system point of view, throughput is not increased; a single process is just trying to be greedy at the expense of others. But since the CPUs are limited, it is a zero-sum game. Furthermore, this process may or may not accomplish its goal, depending on the policy of the operating system's scheduler, which may and may not give this process a larger share of CPU time. – Mike Nakis Aug 27 '19 at 19:44
  • 1
    Finally, the total throughput will **not** be increased if all threads simply take turns thread-safely incrementing the same counter; on the contrary, throughput will severely suffer. Total throughput will only increase if each thread is incrementing its very own counter. – Mike Nakis Aug 27 '19 at 19:44
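
As a rough illustration of the point made in the last comments, the hypothetical sketch below contrasts four threads incrementing one shared thread-safe counter (an `AtomicLong` here, standing in for any safely shared counter) with four threads each incrementing a counter of their own and summing at the end; the shared version gains nothing from the extra threads, while the per-thread version can at least scale with the cores that are available.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterContentionDemo {

    public static void main(String[] args) throws InterruptedException {
        final int threads = 4;
        final long perThread = 20_000_000L;

        // Case 1: every thread hammers the same counter. The increments are
        // serialized, so adding threads adds contention, not throughput.
        AtomicLong shared = new AtomicLong();
        Thread[] workers = new Thread[threads];
        long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (long j = 0; j < perThread; j++) {
                    shared.incrementAndGet();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.printf("Shared counter:      %d ms (value %d)%n",
                (System.nanoTime() - start) / 1_000_000, shared.get());

        // Case 2: each thread increments its own counter and the results are
        // combined at the end. No contention between threads.
        AtomicLong[] locals = new AtomicLong[threads];
        start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            locals[i] = new AtomicLong();
            final AtomicLong own = locals[i];
            workers[i] = new Thread(() -> {
                for (long j = 0; j < perThread; j++) {
                    own.incrementAndGet();
                }
            });
            workers[i].start();
        }
        long total = 0;
        for (int i = 0; i < threads; i++) {
            workers[i].join();
            total += locals[i].get();
        }
        System.out.printf("Per-thread counters: %d ms (total %d)%n",
                (System.nanoTime() - start) / 1_000_000, total);
    }
}
```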