
We have web service APIs supporting clients running on tens of millions of devices. Normally each client calls the server about once a day, which works out to roughly 116 clients seen per second. Each client (identified by a unique ID) may make several API calls concurrently, but the server can only process API calls from the same client one at a time, because those calls update the same document for that client in the backend MongoDB database (for example, updating the last-seen time and other embedded documents in that client's document).

One solution I have is to synchronize on an "interned" object representing the client's unique ID. That way only one request from a given client can obtain the lock and be processed at a time, while requests from other clients can still be processed concurrently. However, this solution requires turning on the load balancer's "stickiness", meaning the load balancer routes all requests from the same IP address to a specific server within a preset time interval (e.g. 15 minutes). I am not sure whether this affects the robustness of the whole system design. One concern is that some clients may make more requests than others and leave the load unbalanced (create hotspots).

Solution #1:

import com.google.common.collect.Interner;
import com.google.common.collect.Interners;

// Weak interner: equal Key instances are mapped to one canonical object to lock on,
// and keys that are no longer referenced can be garbage-collected.
// Note: Key must implement equals() and hashCode() for interning to work.
Interner<Key> myIdInterner = Interners.newWeakInterner();

public ResponseType1 processApi1(String clientUniqueId, RequestType1 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}

public ResponseType2 processApi2(String clientUniqueId, RequestType2 request) {
    synchronized (myIdInterner.intern(new Key(clientUniqueId))) {
        // code to process request
    }
}

You can see my other question, which discusses this solution in detail: Should I use Java String Pool for synchronization based on unique customer id?

The second solution I am thinking about is to somehow lock the client's document in MongoDB (I have not found a good example of how to do that yet). Then I would not need to touch the load balancer settings. But I have concerns about this approach, as I suspect the performance (extra round trips to the MongoDB server and busy waiting?) will be much worse than solution #1.

Solution #2:

public ResponseType1 processApi1(String clientUniqueId, RequestType1 request) {
    Key key = new Key(clientUniqueId);
    obtainDocumentLock(key);   // acquire before entering try, so finally only releases a lock we actually hold
    try {
        // code to process request
    } finally {
        releaseDocumentLock(key);
    }
}

public ResponseType2 processApi2(String clientUniqueId, RequestType2 request) {
    Key key = new Key(clientUniqueId);
    obtainDocumentLock(key);
    try {
        // code to process request
    } finally {
        releaseDocumentLock(key);
    }
}
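
For illustration, here is a rough sketch of how obtainDocumentLock/releaseDocumentLock might be implemented with the MongoDB Java driver. This is only my assumption, using a hypothetical client_locks collection keyed directly on the client ID; I have not verified this approach:

import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;
import java.util.Date;

// Sketch only: "client_locks" is a hypothetical collection dedicated to locks.
MongoCollection<Document> locks; // = database.getCollection("client_locks");

boolean tryObtainDocumentLock(String clientUniqueId) {
    try {
        // The unique index on _id makes this insert atomic: it fails with a
        // duplicate-key error if a lock document for this client already exists.
        locks.insertOne(new Document("_id", clientUniqueId).append("lockedAt", new Date()));
        return true;
    } catch (MongoWriteException e) {
        return false; // already locked by another request; caller must retry or back off
    }
}

void releaseDocumentLock(String clientUniqueId) {
    locks.deleteOne(Filters.eq("_id", clientUniqueId));
}

The caller would still need retry/back-off logic when tryObtainDocumentLock returns false, which is exactly the extra round trips and busy waiting I am worried about.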

I believe this is a very common issue in a scalable, highly concurrent system. How do you solve it? Is there any other option? What I want to achieve is to process only one request at a time among the requests coming from the same client. Please note that just controlling read/write access to the database does not work; the solution needs to ensure exclusive processing of the whole request.

For example, there are two requests: request #1 and request #2. Request #1 reads the client's document, updates one field of sub-document #5, and saves the whole document back. Request #2 reads the same document, updates one field of sub-document #8, and saves the whole document back. At that moment we get an OptimisticLockingFailureException, because we use the @Version annotation from spring-data-mongodb to detect version conflicts. So it is imperative to process only one request from the same client at any time.
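
For context, the versioned document looks roughly like this (a minimal sketch; the class and field names here are made up, only the @Version usage matches our actual setup):

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Version;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "clients")   // hypothetical collection name
public class ClientDocument {
    @Id
    private String clientUniqueId;

    @Version
    private Long version;            // incremented by spring-data-mongodb on each save;
                                     // saving with a stale version throws OptimisticLockingFailureException

    private java.util.Date lastSeen; // example fields updated by different API calls
    // ... other embedded sub-documents
}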

P.S. Any suggestions on choosing between solution #1 (locking within a single process/instance, with load balancer stickiness turned on) and solution #2 (distributed lock) for a scalable, highly concurrent system design would be appreciated. The goal is to support tens of millions of clients, with hundreds of clients accessing the system concurrently every second.

Raymond

3 Answers


Why not just create a processing queue in MongoDB to which you submit client request documents, and have another server process consume them and produce a result document that the client waits for? Synchronize the data by clientId and avoid that activity in the API submission step. The second part of the client submission activity (once finished) just polls MongoDB for consumed records, looking for its API / clientId and some job tag. That way you can scale out the API submission and, separately, the API consumption activities on separate servers, etc.
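
Something like this sketch of the queue with the MongoDB Java driver (the api_queue collection and the field names are just placeholders for illustration, not a tested design):

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;
import org.bson.types.ObjectId;

MongoCollection<Document> queue; // = database.getCollection("api_queue"); hypothetical collection

// API server: enqueue the request and return the job id to poll on.
ObjectId submit(String clientId, Document payload) {
    Document job = new Document("clientId", clientId)
            .append("payload", payload)
            .append("status", "PENDING");
    queue.insertOne(job);                 // driver fills in the generated _id
    return job.getObjectId("_id");
}

// Consumer process: atomically claim one pending job so no two workers take the same one.
Document claimNextJob() {
    return queue.findOneAndUpdate(
            Filters.eq("status", "PENDING"),
            Updates.set("status", "IN_PROGRESS"));
}

// API server: poll until the consumer has marked this job done and written a result.
Document pollResult(ObjectId jobId) {
    return queue.find(Filters.and(Filters.eq("_id", jobId), Filters.eq("status", "DONE"))).first();
}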

gslender
  • Thanks for the feedback. But, unfortunately, our web service is not a pure "producer/consumer" scenario. The server needs to return a response for every API call, at least to indicate whether the call was successful. For some APIs the server may return additional information for the client to consume. To make a processing queue possible, the API would need to change to include both a submission API and a status-check API. That complicates the code logic on the client side and is not possible at this time. We are looking for a server-side-only solution because we need to support old clients. – Raymond Jul 25 '17 at 04:21
  • I think you've misunderstood my suggestion. The web service API would handle a client request, and the queue is like a cache: the current read (if the cached record is empty) goes directly to MongoDB, but writes are cached and the queue is shared. So you're not tying up MongoDB for writes, only reads (which shouldn't really block). Then have another process consume the queue, persistently writing documents back to MongoDB and emptying it... as work arrives faster than it is consumed, the queue grows (memory needed) and the writes are optimised. – gslender Jul 25 '17 at 07:09
  • Hi gslender, if I understand correctly, your proposal is to create a "write operation" cache to synchronize write operations. But it won't work in our scenario, because we need exclusive processing of the whole request, not just avoidance of concurrent writes. I just updated the question to clarify my intention. By the way, MongoDB updates are already atomic. – Raymond Jul 25 '17 at 16:08

In your solution, you are splitting the lock based on customer ID, so requests from two different customers can be processed at the same time. The only problem is the sticky session. One alternative is to use a distributed lock, so you can dispatch any request to any server and the server acquires the lock before processing. The only consideration is that it involves remote calls. We are using Hazelcast/Ignite and it is working very well for an average number of nodes. Hazelcast
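
For example, with Hazelcast the per-client lock could look roughly like this (a sketch using the Hazelcast 3.x ILock API; in Hazelcast 4+ the CP subsystem's FencedLock plays the same role, and the lock name is just an assumed convention):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

HazelcastInstance hz = Hazelcast.newHazelcastInstance(); // joins the cluster; typically created once

void processClientRequest(String clientUniqueId, Runnable requestHandler) {
    // One distributed lock per client; every node in the cluster contends on the same lock.
    ILock lock = hz.getLock("client-lock:" + clientUniqueId);
    lock.lock();               // blocks until no other node/thread holds this client's lock
    try {
        requestHandler.run();  // exclusive processing of the whole request
    } finally {
        lock.unlock();
    }
}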

gati sahu
  • Thanks for your feedback. You pointed me in the right direction. The intention of my solution #2 was exactly to achieve a distributed lock, so now I can do some research on distributed locks. By the way, as you mentioned, a distributed lock involves remote calls, so the performance should not be as good as solution #1. In your opinion, is it worth using a distributed lock? Does load balancer stickiness cause any issue for scalability? I want to know the pros and cons of solution #1 and solution #2 (and other solutions if possible), so I can choose a suitable one for our use case. – Raymond Jul 25 '17 at 17:30
  • Please try Hazelcast or Ignite; they are open source, we are using them and they perform well. Still, I would suggest doing a PoC for your use case. – gati sahu Jul 25 '17 at 17:42
  • Just thought of one good benefit of distributed locking (i.e. solution #2): if we move to a microservice architecture, all API services may run in their own processes, so solution #1 won't work in that case. – Raymond Jul 25 '17 at 19:07

One obvious approach is simply to implement the full optimistic locking algorithm on your end.

That is, you will sometimes get an OptimisticLockingFailureException when there are concurrent modifications, but that's fine: just re-read the document and start the failed modification over again. You get the same effect as if you had used locking. Essentially you are leveraging the concurrency control already built into MongoDB. This also has the advantage of letting several transactions from the same client go through when they don't conflict (e.g., one is a read, or they write to different documents), potentially increasing the concurrency of your system. On the other hand, you have to implement the retry logic.
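
The retry logic can be as simple as a bounded loop (a sketch; the repository and the applyChanges callback are hypothetical placeholders, only the exception type and the re-read-then-retry idea are the point):

import org.springframework.dao.OptimisticLockingFailureException;

// Hypothetical repository and mutation; the point is the re-read + retry loop.
ClientDocument updateWithRetry(String clientId, java.util.function.Consumer<ClientDocument> applyChanges) {
    int maxAttempts = 5;  // arbitrary bound so a hot document cannot make us spin forever
    for (int attempt = 1; ; attempt++) {
        ClientDocument doc = clientRepository.findById(clientId)
                .orElseThrow(() -> new IllegalStateException("client not found"));
        applyChanges.accept(doc);               // modify sub-documents in memory
        try {
            return clientRepository.save(doc);  // bumps @Version; fails if someone else saved first
        } catch (OptimisticLockingFailureException e) {
            if (attempt >= maxAttempts) {
                throw e;                        // give up after too many conflicts
            }
            // otherwise fall through: re-read the latest document and try again
        }
    }
}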

If you do want to lock on a per-client basis (or per-document, or whatever else) and your server is a single process (which is implied by your suggested approach), you just need a lock manager that works on arbitrary String keys. There are several reasonable solutions, including the Interner one you mentioned.
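
For example, Guava's Striped gives you such a lock manager over String keys (a sketch; the stripe count of 1024 is arbitrary, and distinct keys that hash to the same stripe will share a lock):

import com.google.common.util.concurrent.Striped;
import java.util.concurrent.locks.Lock;

// A fixed pool of locks; each client ID maps deterministically to one of them.
Striped<Lock> clientLocks = Striped.lock(1024);

void processExclusively(String clientUniqueId, Runnable requestHandler) {
    Lock lock = clientLocks.get(clientUniqueId);
    lock.lock();
    try {
        requestHandler.run();   // at most one request per stripe (and hence per client) at a time
    } finally {
        lock.unlock();
    }
}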

BeeOnRope
  • Yes, that crossed my mind. My colleague actually implemented reconciliation logic for when OptimisticLockingFailureException happens. The reconciliation logic is complex because one request can modify multiple collections (currently only one collection has the version-conflict issue), and it will become harder to maintain as the business logic changes and more features are added. Unfortunately, MongoDB does not support rollback. That's why I am looking for a locking solution. – Raymond Jul 25 '17 at 17:56