
Let's say there is a function `foo()` (a set of instructions) which manipulates some global data. A process spawns two threads, each executing `foo()`. How do we handle concurrency and data protection (race conditions) when these two threads execute at the same time on two different processor cores?

For concurrency and data protection, what are the main differences between a single-core and a multi-core processor in the above case?

Sunil Shahu
  • In a user-mode setting, multi-core multiprocessing is somewhat like the worst-case single-core multiprocessing. There are many ways to handle concurrency. One of them is using C11's [`<stdatomic.h>` atomics](https://en.cppreference.com/w/c/atomic). On POSIX, there are [pthread mutexes](https://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_mutex_lock.html) – Thomas Jager Aug 01 '19 at 13:31
  • The main difference between the two cases is the cache. However, if you take all the right memory-protection precautions, with mutexes in all the right places, one would expect the implementation to ensure that memory barriers and flushing of the caches are used. – Thomas Jager Aug 01 '19 at 13:42
  • 2
    Books have been written on this topic. The question is *far* too broad. – John Bollinger Aug 01 '19 at 13:44
  • I learned most of what I know about writing threaded programs in C from "Unix Network Programming, Volume 1: The Sockets Networking API (3rd Edition)". –  Aug 01 '19 at 13:45

1 Answer


This is a very broad question, and there are different techniques to address it. From the user's point of view, the basic idea is to acquire a lock on the global data (e.g. a mutex in POSIX), perform the update, and release the lock. Even with the writer holding a lock, other threads can observe inconsistent data if they read without synchronizing (see the readers–writers problem), so readers must take a read lock as well (POSIX provides `pthread_rwlock_rdlock` for this). This is the basic idea behind handling concurrency.

At present, concurrency and data protection for multi-core systems and single-core threads are handled in much the same way, because all cores see the same memory (each through its own local cache). If proper locking is used, the lock implementation emits the necessary memory barriers and the hardware's cache-coherence protocol propagates updates between cores. So, from the user's point of view, threading on a single core and on multiple cores looks the same.

j23