
From what I understand from the top answers of this post ( https://stackoverflow.com/questions/16116952/can-multithreading-be-implemented-on-a-single-processor-system#:~:text=Yes%2C%20you%20can%20have%20multiple,one%20thing%20at%20a%20time.),

If I am only running one multithreaded program that creates 4 threads on a multicore CPU system with 4 cores, there is no need for scheduling, as all 4 threads of my program will be running on individual cores (or microprocessors). But there may be a need for synchronization, since all 4 threads access the memory of the program (or process) that is stored in the same address space in main memory.

On the other hand, on a single-core CPU computer, if I run the same program that creates 4 threads, I will need both synchronization and scheduling, since all threads must use the same core (or microprocessor).

Please correct my understanding if it is wrong.


1 Answer


there is no need for scheduling as all 4 threads of my program will be running on individual cores

This is not true in practice. The OS scheduler operates in both cases. Unless you pin threads to cores, threads can migrate from one core to another. In fact, even if you pin them, there are generally a few other threads that can be ready on the machine (e.g. the ssh daemon, tty sessions, graphical programs, kernel threads, etc.), so the OS has to schedule them. There will be context switches, though far fewer than on a single processor.
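
To make the migration point concrete, below is a minimal, Linux-specific sketch (it assumes glibc, pthreads and the non-portable `sched_getcpu`/`pthread_setaffinity_np` calls; the `worker` function is just an illustration). Each thread periodically reports which core it is running on; unless it is pinned, the reported core can change between iterations because the scheduler may migrate it.

```c
/* Minimal Linux-specific sketch: each thread reports which core it is
 * currently running on. Without pinning, the reported core can change
 * between iterations because the OS scheduler may migrate the thread. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int i = 0; i < 5; i++) {
        printf("thread %ld running on core %d\n", id, sched_getcpu());
        usleep(100 * 1000);   /* sleep so the scheduler gets a chance to move us */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);

    /* Optional: pin thread 0 to core 0 so it can no longer migrate. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    pthread_setaffinity_np(t[0], sizeof(set), &set);

    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

Compile with `gcc -pthread`; the output typically shows the unpinned threads hopping between cores while thread 0 stays on core 0 after the pinning call.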

there may be a need for synchronization since all 4 threads access the memory of the program (or process) that is stored in the same address space in main memory.

This is true. Note that threads can also work on different memory areas (so that there is no need for synchronization except when they are joined). Note also that "main memory" includes CPU caches here.
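
As a rough illustration of the synchronization meant here, the sketch below (plain pthreads; the shared `counter` and the iteration count are arbitrary choices) has 4 threads updating a single variable that lives in the process's shared address space. The mutex is what keeps the increments from racing, whether the threads run on 4 cores or on 1.

```c
/* Minimal sketch: four threads increment a counter in the process's
 * shared address space. The mutex prevents the increments from racing. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* protect the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);  /* 400000 with the mutex, often less without */
    return 0;
}
```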

On a single-core CPU computer, if I run the same program that creates 4 threads, I will need both synchronization and scheduling since all threads must use the same core (or microprocessor).

Overall, yes. That being said, the term "scheduling" is unclear. There are multiple kinds of scheduling: preemptive vs. cooperative scheduling. Here, as a programmer, you do not need to do anything special since the scheduling is done by the OS. Thus, it is a bit unexpected to say that you "need" scheduling. The OS will schedule the threads on the same core using preemption (by allocating different time slices to each thread on the same core).
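
A small, Linux-specific sketch of that preemptive time slicing (it assumes `sched_setaffinity` and `sched_getcpu` are available; the amount of busy work is arbitrary): the whole process is restricted to core 0, yet all four CPU-bound threads still finish, because the OS preempts them and shares the single core between them without any cooperation from the program.

```c
/* Linux-specific sketch: restrict the process to core 0, then run four
 * CPU-bound threads. All four still finish because the OS preempts them
 * and time-slices the single core between them. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    volatile long sum = 0;
    for (long i = 0; i < 50000000; i++)
        sum += i;                              /* pure CPU work, no blocking */
    printf("thread %ld done on core %d\n", (long)arg, sched_getcpu());
    return NULL;
}

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof(set), &set);   /* pin the whole process to core 0 */

    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```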

  • Thank you so much. You explained it very well. Yeah, you are right: in any case threads still need to access the CPU cache, and therefore synchronisation is needed. – Sibulele Aug 02 '22 at 18:45
  • But what do you mean when you say, "Threads can migrate from one core to another"? I mean, isn't it the CPU's job to manage the communication between threads on different cores? Wouldn't this override that function of the CPU? Also, I think threads can only access memory that is allocated in their core (as its core is made up of a memory, CU and ALU). If they need to access some memory in main memory, this must be well coordinated by the CPU. – Sibulele Aug 02 '22 at 18:55
  • And still, for threads on different cores to pass information, they cannot pass it directly; instead they pass a pointer (or a reference, I am not sure) to the value located in main memory. So I am not so sure whether threads can swap cores at any time. By nature (i.e. in their hardware form), threads exist only on that particular core, and the CPU manages the rest. Please correct my understanding if it's bad. – Sibulele Aug 02 '22 at 18:57
  • Regarding thread migration, this is a bit complex. The OS and the CPU work together to do that. There is no special mechanism to communicate between threads at the CPU level except memory (and possibly interrupts, but let's put that aside for the sake of simplicity). What the CPU mainly does is provide atomic operations, memory barriers, virtual memory translation and **cache coherence** (as well as a few special features only used by the OS). The OS is responsible for providing higher-level synchronization/communication mechanisms on top of that (e.g. semaphores, locks, shared memory, etc.). – Jérôme Richard Aug 02 '22 at 20:32
  • When a thread communicates with another, it basically writes to memory (and uses a synchronization mechanism to avoid *race conditions*). The other thread can then read the data from memory (see the sketch after these comments). The binding to a core does not matter much in this case (except for performance). Cache coherence ensures that threads can read/write correct data from/to memory even when a thread migrates. Note that when a thread migrates to another core, the OS executes some instructions so the CPU can react properly (e.g. TLB/cache flush). – Jérôme Richard Aug 02 '22 at 20:39
  • This is a very broad and complex topic. I think books can help you understand how all of this works. For the basics, you can read the famous [What Every Programmer Should Know About Memory](https://people.freebsd.org/~lstewart/articles/cpumemory.pdf). Once you have read this, you can read more about scheduling (with another book or just OS documentation). – Jérôme Richard Aug 02 '22 at 20:51
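
To illustrate the "communicate through memory" point from the comments above, here is a minimal sketch (C11 atomics plus pthreads; the `payload`/`ready` names are purely illustrative): one thread writes a value and then publishes a flag, the other waits for the flag and reads the value. Cache coherence plus the release/acquire ordering makes this correct regardless of which cores the two threads run on, or whether they migrate in between.

```c
/* Minimal sketch: a producer thread writes a value and publishes it via an
 * atomic flag; a consumer thread waits for the flag and reads the value. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload = 0;
static atomic_int ready = 0;

static void *producer(void *arg)
{
    (void)arg;
    payload = 42;                                             /* ordinary write */
    atomic_store_explicit(&ready, 1, memory_order_release);   /* publish it */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                                     /* busy-wait for the flag */
    printf("received %d\n", payload);                         /* prints 42 */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```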