Parallel processing, in sharp contrast to merely concurrent processing, guarantees that its thread-level and/or instruction-level tasks are started, executed, and finished side by side: the simultaneously executed code paths are guaranteed to run in parallel from start to finish.
Parallel processing is therefore a stricter mode of executing code units (tasks, threads, ...) than a concurrent run, in which code paths merely happen to overlap in time (possibly just by coincidence). It deliberately uses more than one CPU or processor core, together with other shared resources, to execute a program or several mutually independent computational units.
Parallel processing thus means more than just a wish or expectation to "make a program run faster". It concentrates, from the design phase down to the implementation, on orchestrating truly parallel execution on the available computing architecture (CPUs, cores, RAM, I/O, GPUs, MPPAs, &c), providing a warranty of parallelism at the start, during processing, and at the finish of the unit of code.
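As a minimal sketch of such orchestration (the function and input values are illustrative, not from any particular codebase): independent CPU-bound tasks handed to a pool of worker *processes*, so the runtime may place each one on its own core.

```python
# A minimal sketch: mutually independent CPU-bound tasks dispatched to
# separate worker processes, which the OS may schedule on separate cores
# at the same time -- true parallelism, hardware permitting.
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    """Sum of squares below n -- a stand-in for real CPU-bound work."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [100_000, 200_000, 300_000, 400_000]
    # Each task is independent of the others, which is exactly what
    # allows them to be executed in parallel without coordination.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(cpu_bound, inputs))
    assert results == [cpu_bound(n) for n in inputs]
```

Note that whether the tasks truly overlap depends on the number of free cores; the design (independent units, process-level isolation) is what makes parallel execution possible at all.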
A professional & principled disambiguation between [PARALLEL]
and [CONCURRENT]
is needed, because true parallel code execution requires much more than having a few cores and fanning out a horde of (uncoordinated) threads that hunt for time-shared access to a pool of system-reserved resources. Concurrent execution is simply, by far, not parallel processing. (Link)
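To make the contrast concrete, here is the same illustrative workload fanned out to *threads* instead of processes. In CPython, the GIL lets only one thread execute Python bytecode at a time, so these CPU-bound tasks merely interleave under time-sharing (concurrency); nothing guarantees they ever run on separate cores (parallelism).

```python
# A contrasting sketch: the same CPU-bound work fanned out to threads.
# In CPython the threads time-share one interpreter lock, so this is
# concurrent execution, not guaranteed-parallel execution.
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n):
    """Sum of squares below n -- a stand-in for real CPU-bound work."""
    return sum(i * i for i in range(n))

inputs = [100_000, 200_000, 300_000, 400_000]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_bound, inputs))

# The results are identical either way -- only the execution model
# (interleaved vs. side-by-side) differs.
assert results == [cpu_bound(n) for n in inputs]
```

This is why "it uses threads" says nothing about parallelism: the code above is perfectly concurrent yet may never execute two tasks at the same instant.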
Rob Pike gave a good talk, "Concurrency Is Not Parallelism", on common misunderstandings of this subject.