
I roughly understand the difference between parallel computing and concurrent computing. Please correct me if I am wrong.

Parallel Computing

A system is said to be parallel if it can support two or more actions executing simultaneously. In parallel programming, efficiency is the major concern.

Concurrent Computing

A system is said to be concurrent if it can support two or more actions in progress at the same time. However, those actions do not necessarily have to execute simultaneously. In concurrent programming, modularity, responsiveness, and maintainability are the important concerns.

I am wondering what happens if I run parallel code inside an already multi-threaded program, e.g. using Java's parallel Stream in a multi-threaded server program.
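For concreteness, a minimal sketch of the situation I have in mind (the pool size, data, and per-request work are just placeholders for illustration):

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class Server {
        // the "multi-threaded server" part: a pool handling incoming requests
        private static final ExecutorService requestPool = Executors.newFixedThreadPool(16);

        public static void main(String[] args) {
            List<Integer> data = List.of(1, 2, 3, 4, 5, 6, 7, 8);

            // each request runs on its own pool thread...
            requestPool.submit(() -> {
                // ...and inside the handler a parallel stream is used on top of that
                int sum = data.parallelStream()
                              .mapToInt(i -> i * i)   // stands in for real per-request work
                              .sum();
                System.out.println("sum = " + sum);
            });

            requestPool.shutdown();
        }
    }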

Would the program actually be more efficient?

My initial thought is that it might not be a good idea, since a reasonably optimized multi-threaded program should already keep its threads busy. Adding parallelism on top may just introduce extra overhead.

chakwok
  • What relevance does “server” have in your question? – Holger Jun 28 '19 at 10:58
  • @Holger server is designed to scale. Response time is critical. Traffic might fluctuate. Likely to be asynchronous. – chakwok Jun 28 '19 at 11:12
  • @BenR. I am not comparing the difference, but discussing the effect of using one on top of the other. – chakwok Jun 28 '19 at 11:14
  • Indeed, you are not comparing the differences. In fact, the first ⅔ of your question are entirely irrelevant to the question. – Holger Jun 28 '19 at 12:41

2 Answers


The crucial difference between concurrency and parallelism is that concurrency is about dealing with a lot of things at the same time (it gives the illusion of simultaneity), i.e. handling concurrent events and essentially hiding latency. Parallelism, by contrast, is about actually doing a lot of things at the same time in order to increase speed.

The two have different requirements and use cases.

Parallelism is used to achieve run-time performance and efficiency. Yes, it adds some overhead to the system (CPU, RAM, etc.) by its nature, but it is a heavily used concept on today's multi-core hardware.
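A rough Java illustration of the distinction (the names and numbers are made up for illustration): a thread pool juggling many mostly-waiting tasks is the concurrency case, while a parallel stream splitting one CPU-bound computation across cores is the parallelism case.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.LongStream;

    public class ConcurrencyVsParallelism {
        public static void main(String[] args) {
            // Concurrency: many tasks in progress at once, mostly waiting (hiding latency)
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 10; i++) {
                int id = i;
                pool.submit(() -> {
                    try {
                        Thread.sleep(100);          // stands in for an I/O call
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("request " + id + " done");
                });
            }
            pool.shutdown();

            // Parallelism: one CPU-bound computation split across cores to finish sooner
            long sumOfSquares = LongStream.rangeClosed(1, 1_000_000)
                                          .parallel()
                                          .map(n -> n * n)
                                          .sum();
            System.out.println("sum of squares = " + sumOfSquares);
        }
    }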

Mayur Jain

I am wondering what is going to happen if I execute parallel programming code inside a multi-threaded program? e.g. using Java's parallel Stream in a multi-threaded server program.

Based on my limited knowledge of the Java runtime, every program is already multi-threaded: the application entry point is the main thread, which runs alongside other runtime threads (e.g. the garbage collector).

Suppose your application spawns two threads, and in one of those threads a parallelStream is created. It looks like the parallel streams API uses ForkJoinPool.commonPool(), which by default starts NUM_PROCESSORS - 1 threads. At that point your application may have more threads than CPUs, so if your parallelStream computation is CPU bound, you are already oversubscribed on threads -> CPU.

https://stackoverflow.com/a/21172732/594589
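As a quick way to see those numbers on a given machine (assuming default JVM settings; the default can be overridden with the java.util.concurrent.ForkJoinPool.common.parallelism system property):

    import java.util.concurrent.ForkJoinPool;

    public class PoolSize {
        public static void main(String[] args) {
            // available CPUs vs. the common pool's default parallelism (usually CPUs - 1)
            System.out.println("processors: "
                    + Runtime.getRuntime().availableProcessors());
            System.out.println("common pool parallelism: "
                    + ForkJoinPool.commonPool().getParallelism());
        }
    }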

I'm not that familiar with Java, but it's interesting that parallelStream shares the same thread pool. So if your program spawned another thread and started another parallelStream, the second parallelStream would share the underlying thread pool's threads with the first!
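A small experiment that should make this visible on a typical JVM (the class and thread names here are made up): both streams print worker names from the same ForkJoinPool.commonPool, plus the calling threads themselves, since the caller also participates in the work.

    import java.util.stream.IntStream;

    public class SharedPoolDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> IntStream.range(0, 8)
                    .parallel()
                    .forEach(i -> System.out.println(
                            Thread.currentThread().getName() + " handled " + i));

            // two application threads, each starting its own parallel stream
            Thread t1 = new Thread(task, "app-thread-1");
            Thread t2 = new Thread(task, "app-thread-2");
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }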

In my experience it's important to consider:

  • The type of workload your application is performing (CPU vs. IO)
  • The type of concurrency primitives available (threads, processes, green threads, epoll, asyncio, etc.)
  • Your system resources (i.e. the # of CPUs available)
  • How your application's concurrency primitives map to the underlying OS resources
  • The # of concurrency primitives that your application has at any given time

Would the program actually be more efficient?

It completely depends, and the only sure answer is to benchmark both solutions on your target architecture/system.
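For example, a crude timing harness along these lines can give a first impression (the workload here is arbitrary); a serious benchmark should use a tool like JMH to cope with JIT warm-up and dead-code elimination:

    import java.util.stream.LongStream;

    public class CrudeBenchmark {
        public static void main(String[] args) {
            for (int run = 0; run < 5; run++) {      // repeat to see warm-up effects
                long t0 = System.nanoTime();
                long seq = LongStream.rangeClosed(1, 20_000_000).map(n -> n * 3).sum();
                long t1 = System.nanoTime();
                long par = LongStream.rangeClosed(1, 20_000_000).parallel().map(n -> n * 3).sum();
                long t2 = System.nanoTime();

                System.out.printf("run %d: sequential %.1f ms, parallel %.1f ms, results match: %b%n",
                        run, (t1 - t0) / 1e6, (t2 - t1) / 1e6, seq == par);
            }
        }
    }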


In my experience, reasoning about complex concurrency beyond the basic patterns becomes much of a shot in the dark. I believe that this is where the saying:

Make it work, make it right, make it fast.
-- Kent Beck

comes from. In this case, make sure that your program is concurrency safe (make it right) and free of deadlocks, and then begin testing, benchmarking, and running experiments.

In my limited personal experience, I have found that analysis largely falls apart beyond characterizing your application's workload (CPU vs. IO) and finding a way to model it so you can scale out to use your system's full resources in a configurable, benchmarkable way.

dm03514