
I am using the Spring Boot / Spring Cloud framework to develop a web application. The system will mainly process RESTful HTTP requests from the client side and then save them into a MySQL database.

But I plan to make it more scalable. It should be possible to start more instances of each service so the system can handle more incoming requests.

But I'm not sure that what I am doing is right. Could anyone check whether my current approach is reasonable, or point out any potential risks in it?

What I'm doing is:

  1. Service A receives requests in its controller, then asynchronously writes them into RocketMQ. RocketMQ is used for peak shaving.

  2. Service B subscribes to the RocketMQ topic that Service A writes to and caches the messages in Redis as a list.

  3. Service C starts a daemon thread that checks the number of messages in Redis. If the cached list reaches a certain size, it pulls all the messages, saves them into MySQL, and then flushes the Redis cache.
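Step 1's hand-off pattern (controller returns immediately, a worker pool does the broker send) can be sketched in plain Java. This is a minimal illustration, not the actual services: a `BlockingQueue` stands in for the RocketMQ topic, and the class and method names are made up.

```java
import java.util.concurrent.*;

// Sketch of step 1: the controller thread hands each request off to an
// executor and returns immediately; the executor "sends" to the broker.
// A LinkedBlockingQueue stands in for the RocketMQ topic, purely for illustration.
class AsyncIngest {
    private final BlockingQueue<String> topic = new LinkedBlockingQueue<>(); // stand-in for RocketMQ
    private final ExecutorService sender = Executors.newFixedThreadPool(4);

    // Called from the HTTP controller; returns without waiting for the broker.
    void accept(String request) {
        sender.submit(() -> topic.offer(request));
    }

    // Waits for all queued sends to finish, then reports what reached the "topic".
    int pendingCount() {
        sender.shutdown();
        try {
            sender.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return topic.size();
    }
}
```

The controller's latency is then decoupled from the broker's: under a burst, requests pile up in the executor and the broker instead of blocking HTTP threads.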

spencergibb
Xmagic
    As long as you're a) using Spring Boot, and b) want scalability ... then think "micro services". Partition your application into micro services you can deploy in a [container](https://developer.ibm.com/articles/why-should-we-use-microservices-and-containers/) (like Docker) and let Kubernetes (or equivalent) handle the "scalability". Just a thought... – paulsm4 Mar 11 '19 at 23:24

4 Answers


As always, there can be more than one solution to a single problem. The following suggestions are based on my daily work and experience as a software architect.

Facts

Your system consists of three (micro)services (A, B and C), a message broker (RocketMQ), a cache (Redis) and a database (MySQL). In the comments you also mention that you plan to run it on F5 hardware and Docker.

Suggestions

Service A is exposed at the front end to handle HTTP requests. Asynchronous processing is used to manage load; however, throughput is still limited by Service A's performance. Therefore Service A should be scalable to enable higher throughput. The performance of a single instance must be evaluated (take a look at performance testing, stress testing ...) to determine how to scale.

To enable automated scaling of Docker containers you will need an orchestration tool (such as Kubernetes) that scales your system based on configured metrics. Also think about the system resources that the scaled system can use.
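As a concrete example of such a configured metric, in Kubernetes this would be a HorizontalPodAutoscaler. The sketch below assumes a Deployment named `service-a`; the name, replica range and CPU threshold are illustrative, not taken from the question:

```
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-a
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```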

Services B and C can also be scaled easily. Evaluate whether the features of Service B and Service C could be joined into a single service: instead of B just putting new data into Redis, it could also store it in MySQL. It depends on how much fragmentation you need and how you will manage the extra complexity that comes with it. B already reacts to published content, while Service C seems to constantly poll the Redis cache for the number of entries (this could be solved with keyspace notifications).
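A plain-Java sketch of what the merged B+C service could look like: one consumer buffers messages and flushes them in batches once a threshold is reached, with the MySQL batch insert abstracted behind a sink. All names here are illustrative, and the broker/JDBC wiring is omitted.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of merging Services B and C: buffer messages as they arrive from the
// broker and flush a batch to MySQL once the threshold is reached. The write
// is abstracted as a sink (e.g. a JDBC batch insert in a real service).
class BatchingConsumer {
    private final int threshold;
    private final List<String> buffer = new ArrayList<>();
    private final Consumer<List<String>> sink; // e.g. batch INSERT into MySQL

    BatchingConsumer(int threshold, Consumer<List<String>> sink) {
        this.threshold = threshold;
        this.sink = sink;
    }

    // Invoked for every message received from the broker.
    synchronized void onMessage(String msg) {
        buffer.add(msg);
        if (buffer.size() >= threshold) {
            sink.accept(new ArrayList<>(buffer)); // hand a copy to the batch writer
            buffer.clear();
        }
    }
}
```

This removes the extra hop through Redis entirely: the buffer lives in the consumer, and adding more consumer instances gives each one its own independent buffer.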

Be careful when you read data from Redis, store it in MySQL and flush it. You can easily miss or flush data that was not yet stored in MySQL if you use one Redis key for all the service instances that write to it.
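The race is in "read everything, write to MySQL, then flush the key": anything appended between the read and the flush is lost. The safe pattern removes exactly the items it returns in one atomic step (in Redis that would be, for example, `LPOP` with a count rather than `LRANGE` followed by `DEL`). Here is the idea modeled with a synchronized in-memory deque; the class is illustrative, not real Redis client code.

```java
import java.util.*;

// Models "pop up to n atomically": items appended by other threads after
// drainUpTo returns stay in the list and are never silently flushed away,
// unlike a read-then-delete sequence on a shared key.
class AtomicDrain {
    private final Deque<String> list = new ArrayDeque<>(); // stand-in for the Redis list

    synchronized void push(String msg) { list.addLast(msg); }

    // Removes and returns up to n items in one atomic step.
    synchronized List<String> drainUpTo(int n) {
        List<String> out = new ArrayList<>();
        while (out.size() < n && !list.isEmpty()) out.add(list.pollFirst());
        return out;
    }

    synchronized int size() { return list.size(); }
}
```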

When dealing with asynchronous processing you often deal with eventual consistency, meaning that data handled by Service A will not be available right away to other services that might want to read it from MySQL (just a thought for the wider picture; its importance varies from case to case).

CAPS LOCK
  • Really appreciate your kind explanation. I use a distributed lock based on Redis for cache storing and flushing, so only one process can access the Redis cache at a time. I also combined micro-services B and C into a single service, as you suggested. – Xmagic Mar 20 '19 at 09:55

You said you want to

make the system handle more incoming requests.

Doesn't this depend on the machine?

I think in your case you should think about making your application, with all its services, scalable.

Either in the cloud or on infrastructure you build yourself.

Like Kubernetes. https://kubernetes.io/

Or Knative, which is built on Kubernetes: https://cloud.google.com/knative/

Also Amazon Web Services provides scalability.

DCO
  Thanks for your reply, but here I'd like to talk more about system-architecture-level problems. We have F5 hardware for load balancing, and the services will be deployed in Docker and can be auto-scaled. – Xmagic Mar 11 '19 at 09:23

Simply put, I can divide your question into two subtopics.

But I plan to make it more scalable. It should be possible to start more instances of each service so the system can handle more incoming requests.

In order to make your application more responsive to incoming requests, you need to:

  • Reduce request processing time
  • Scale your system vertically or horizontally

If you take the first approach, you can introduce more powerful hardware, optimize transport-level protocol usage, or simply remove unnecessary processing steps (e.g., rather than using steps B and C, you could introduce a Kafka-like message broker and reliably persist messages within it, which would remove the Redis dependency).

In order to optimize networking and protocol usage within your system, please refer to the book High Performance Browser Networking.

For scaling, simply use Docker Swarm or Kubernetes, depending on the load. Most importantly, you can simplify the dependencies within your application for better performance and easier handling.

WMG
  • Appreciate your detailed reply. Your thought about using Kafka and its message persistence really opened my mind. – Xmagic Mar 20 '19 at 10:07

Hi, scaling is very relative to the load you are going to handle.

But you can handle multiple requests using this event-bus pattern:

1) Service A publishes messages to the event bus (topic/exchange).
2) The broker (ActiveMQ/RabbitMQ/etc.) forwards these messages to the queues.
3) Service B listens on a queue and updates the records in MySQL.

Multiple instances of the downstream service (Service B) provide scalability on demand (if the load is high, deploy more instances; if the load is low, deploy fewer).
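The competing-consumers idea above can be sketched in plain Java: several "instances" of Service B pull from the same queue, so each message is processed by exactly one of them and throughput grows with the instance count. A `BlockingQueue` stands in for the broker queue and a map stands in for the MySQL update; all names are illustrative.

```java
import java.util.*;
import java.util.concurrent.*;

// Several consumer instances compete on one shared queue; each message is
// delivered to exactly one consumer. Returns the number of distinct
// messages processed across all instances.
class CompetingConsumers {
    static int process(List<String> messages, int instances) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(messages); // stand-in for the broker queue
        ConcurrentHashMap<String, Boolean> done = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(instances);
        for (int i = 0; i < instances; i++) {
            pool.submit(() -> {
                String msg;
                while ((msg = queue.poll()) != null) {
                    done.put(msg, Boolean.TRUE); // stand-in for "update record in MySQL"
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.size();
    }
}
```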


Mradul Pandey