While load testing our Cadence cluster, we are seeing an extremely high thread count (> 4,000) on the client, which stays that high and consumes significant CPU and memory even when there are no external signals or active workflows. I have disabled the sticky workflow option, so no workflow should be cached, yet the thread count does not go down. Is there any way to investigate this further?
- Is it in the Cadence server cluster or the client worker? – Long Quanzheng Jun 11 '22 at 00:57
- This is happening at the client worker. – Ezio Jun 11 '22 at 18:32
- Which SDK are you using? – Long Quanzheng Jun 11 '22 at 22:34
- 3.6.2, I think this is the latest one. – Ezio Jun 12 '22 at 07:38
- @LongQuanzheng, if I am using this SDK in a Spring Boot application, should I create a single instance of WorkflowClient to be used across the entire application? Right now I am creating a client whenever there is a new request. Can this cause the excessive thread count and the JVM heap space issue? – Ezio Jun 12 '22 at 07:39
- Like you said in your reply (repeating here in case people look for the answer in the comments): the client should be shared across the whole JVM process, and it is safe to share. – Long Quanzheng Jun 15 '22 at 06:20
1 Answer
I was able to identify the issue: I was creating a new WorkflowClient per request. Each WorkflowClient bootstraps its own task list and thread pool to dispatch tasks, so creating one per request steadily accumulates threads.

In Java-based applications, we should create a single WorkflowClient for each flow we want to orchestrate. For example, I was creating workflows to orchestrate order shipments in an e-commerce application, so I created two workflow clients, one for the forward journey and one for the reverse. It worked like a charm.
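For reference, here is a minimal sketch of what sharing the client looks like in Spring Boot with the Cadence Java client (3.x API): the WorkflowClient is exposed as a singleton bean and injected wherever workflows are started, instead of being constructed per request. The host, port, and domain values are placeholders for illustration.

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowClientOptions;
import com.uber.cadence.serviceclient.ClientOptions;
import com.uber.cadence.serviceclient.IWorkflowService;
import com.uber.cadence.serviceclient.WorkflowServiceTChannel;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CadenceClientConfig {

    // Created once at startup and shared by the whole JVM.
    // Spring beans are singletons by default, so every injection
    // point receives this same instance.
    @Bean
    public WorkflowClient workflowClient() {
        // One TChannel connection to the Cadence frontend;
        // host and port are placeholders for your environment.
        IWorkflowService service = new WorkflowServiceTChannel(
                ClientOptions.newBuilder()
                        .setHost("cadence-frontend")
                        .setPort(7933)
                        .build());
        return WorkflowClient.newInstance(
                service,
                WorkflowClientOptions.newBuilder()
                        .setDomain("shipping-domain") // placeholder domain
                        .build());
    }
}
```

Request handlers then inject this bean and call `newWorkflowStub(...)` on it per request; stubs are cheap to create, while the client (and its underlying connection and thread pools) exists exactly once per process.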

Ezio