[CONCURRENT] programming refers to a system-design paradigm in which multiple actions may be scheduled for execution at any time and run independently of one another as system resources become available, without any additional ordering constraints (as opposed to a strictly [PARALLEL] type of system design). If more resources happen to be available, some of those actions may end up being executed simultaneously.
A concurrent run of code-execution happens simultaneously only by coincidence: more than one CPU or processor core, plus other shared resources, happens to be available to execute a program or several mutually independent computational units (tasks, threads, et al.).
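To make that concrete, here is a minimal Go sketch (the worker names and counts are illustrative, not taken from any particular library): several mutually independent units are handed to the scheduler, which may or may not run them simultaneously, depending on how many cores happen to be available.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Each worker is an independent computational unit: the scheduler is free
// to run them in any order, and they only execute simultaneously if more
// than one CPU core happens to be available.
func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	sum := 0
	for i := 0; i < 1_000_000; i++ { // purely local work, no shared state
		sum += i
	}
	fmt.Printf("worker %d finished (sum=%d)\n", id, sum)
}

func main() {
	fmt.Println("usable cores:", runtime.NumCPU())

	var wg sync.WaitGroup
	for id := 0; id < 4; id++ { // four mutually independent tasks
		wg.Add(1)
		go worker(id, &wg)
	}
	wg.Wait() // completion order is not deterministic
}
```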
That said, these units can interact with each other, but doing so requires additional steps and measures to avoid the inefficiency of one unit blocking while it waits on another's lock, plus the risk of mutual locking (deadlock).
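One common measure for safe interaction, sketched below, is to always acquire the shared locks in one fixed global order, so that two interacting units can never end up each holding one lock while waiting for the other's; the account/transfer names are purely hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// account is a hypothetical shared resource guarded by its own mutex.
type account struct {
	id      int
	mu      sync.Mutex
	balance int
}

// transfer interacts with two shared resources. Locking them in a fixed
// order (lower id first) prevents the mutual-locking (deadlock) scenario
// where two transfers each hold one lock and wait for the other.
func transfer(from, to *account, amount int) {
	first, second := from, to
	if second.id < first.id {
		first, second = second, first
	}
	first.mu.Lock()
	defer first.mu.Unlock()
	second.mu.Lock()
	defer second.mu.Unlock()

	from.balance -= amount
	to.balance += amount
}

func main() {
	a := &account{id: 1, balance: 100}
	b := &account{id: 2, balance: 100}

	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); transfer(a, b, 10) }() // a -> b
	go func() { defer wg.Done(); transfer(b, a, 5) }()  // b -> a, opposite direction
	wg.Wait()

	fmt.Println(a.balance, b.balance) // 95 105
}
```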
Usually, concurrent programming has to provide additional validation of correctness and robustness against failures of the uncoordinated processing units, deadlocking, et al. It may, more or less by coincidence, benefit from better resource usage (a higher occupancy rate than a plain [SERIAL] arrangement of execution), yet because the robustness goals above must be assured algorithmically, nothing guarantees that the end-to-end performance of the concurrent processing scales linearly with the amount of resources available (see below: resource-arbitration overheads grow non-linearly).
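The following rough sketch (not a rigorous benchmark; the timings it prints are machine-dependent and purely illustrative) hints at why: a fixed amount of work is split across more and more goroutines, but every step still has to pass through a single arbiter, so the run time does not shrink linearly as units are added.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// contendedWork splits a fixed total of increments across n goroutines,
// but every increment must pass through one arbiter (a single mutex), so
// adding goroutines adds arbitration overhead instead of linear speedup.
func contendedWork(n, total int) time.Duration {
	var mu sync.Mutex
	counter := 0

	start := time.Now()
	var wg sync.WaitGroup
	for g := 0; g < n; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < total/n; i++ {
				mu.Lock()
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	const total = 2_000_000
	for _, n := range []int{1, 2, 4, 8} {
		fmt.Printf("%d goroutines: %v\n", n, contendedWork(n, total))
	}
}
```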
Usually, concurrent programming involves control over shared resources, which is achieved by implementing "arbiters" that handle access to those resources and distribute them among all the processes/threads.
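One possible way to implement such an arbiter, assuming Go and purely hypothetical type names, is a dedicated goroutine that owns the resource and serializes all access requests arriving over a channel:

```go
package main

import "fmt"

// request is a hypothetical unit of work sent to the arbiter: a delta to
// apply to the shared value, plus a channel on which to return the result.
type request struct {
	delta int
	reply chan int
}

// arbiter owns the shared resource (here just an int) and is the only
// goroutine that touches it, so access is serialized by the channel and
// the clients need no explicit locking.
func arbiter(requests <-chan request) {
	shared := 0
	for req := range requests {
		shared += req.delta
		req.reply <- shared
	}
}

func main() {
	requests := make(chan request)
	go arbiter(requests)

	done := make(chan struct{})
	for i := 1; i <= 3; i++ { // three independent clients competing for the resource
		go func(i int) {
			reply := make(chan int)
			requests <- request{delta: i, reply: reply}
			fmt.Printf("client %d observed value %d\n", i, <-reply)
			done <- struct{}{}
		}(i)
	}
	for i := 0; i < 3; i++ {
		<-done
	}
}
```

Since the clients never touch the shared value directly, there is no lock to forget; the channel itself plays the role of the arbiter's queue.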
There is a good talk by Rob Pike on common misunderstandings of this subject.