
I am developing a very busy web service that is expected to get thousands of requests per second, and I want each request to update a counter field. How would I do that? Saving the counter in the database is very important so that I do not lose the information if the server dies. I have tried the following code, but it will be a huge bottleneck for thousands of requests per second. How would you do this?

private static readonly object UpdateLock = new object();

public void Upload(int organizationId)
{
  try
  {
    lock (UpdateLock)
    {
      using (var db = new DbContext())
      {
        Counter counter = db.Counters.Where(c => c.OrganizationId == organizationId).FirstOrDefault();
        counter.count++;
        db.SaveChanges();
      }
    }
  }
  catch (Exception ex)
  {
    // exception is swallowed here, so failed updates go unnoticed
  }
}
Haroon nasir
Dolev
    Take a look at the similar question: https://stackoverflow.com/questions/21586702/best-implementation-of-a-counter-table-in-sql-server – kristianp May 02 '19 at 06:44
    If you have several instances of the service, your solution doesn't work: you will lose counts. The most straightforward way is to run an `UPDATE ...` SQL statement on each request, but with `thousands of requests per second` that's not a good idea. Why is this counter so important? It would be easier to keep it in memory and write snapshots to the database every 5 seconds from a background thread. – mtkachenko May 02 '19 at 07:30
    @mtkachenko I have thought about your idea, but I may lose some information if the server dies. – Dolev May 02 '19 at 07:51
    @Dolev Can you queue the requests, or do they need to be real time? – Aman B May 02 '19 at 08:40
    @AmanB I can have a separate thread that drains a queue to update the db, and also a separate counter in the cache which is updated as well. Is that what you meant? – Dolev May 02 '19 at 13:04

1 Answer


If you absolutely cannot lose data when the server dies then you must write to a persistent store for each increment.

The question is just what store to use.

You can certainly do it with SQL. A single atomic `UPDATE` statement per request is not too expensive. Benchmark it to see if the overhead is acceptable: measure CPU usage on the web server and the SQL server, and also measure disk usage and transaction log growth.
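For example, a minimal sketch of the atomic-`UPDATE` approach, assuming Entity Framework as in the question (the `Counters` table and `Count` column names are taken from the posted code; `ExecuteSqlCommand` is the EF6 raw-SQL API):

```csharp
using System.Data.SqlClient;

public void Upload(int organizationId)
{
    using (var db = new DbContext())
    {
        // One atomic statement: no read-modify-write cycle and no
        // application-level lock. The database's own row lock makes
        // concurrent increments safe, even across multiple web servers.
        db.Database.ExecuteSqlCommand(
            "UPDATE Counters SET count = count + 1 WHERE OrganizationId = @id",
            new SqlParameter("@id", organizationId));
    }
}
```

Unlike the original code, this removes both the global `lock` (which serializes all requests in one process and does nothing across processes) and the extra `SELECT` round trip.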

Redis might turn out to be a really good database for this. It is known to be fast and it supports server-side atomic increments (`INCR`). Note that Redis persistence is periodic by default, so if you truly cannot lose a single increment you would need AOF persistence with `appendfsync always`.
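A sketch of the Redis approach, assuming the StackExchange.Redis client (the key naming scheme here is illustrative):

```csharp
using StackExchange.Redis;

public class CounterService
{
    // The multiplexer is designed to be shared and long-lived.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost");

    public void Upload(int organizationId)
    {
        IDatabase db = Redis.GetDatabase();
        // INCR is atomic on the Redis server; no locking needed.
        db.StringIncrement("counter:" + organizationId);
    }
}
```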

You can also scale out this workload by writing increments to multiple servers (randomly picking one for each request) and summing the per-server counters when you read the total.

You can use any other data store as well. If it does not support incrementing natively you can instead insert new rows. Then, have a background process that periodically aggregates those rows so that they don't keep accumulating.
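A hypothetical sketch of that insert-then-aggregate pattern (the `CounterEvents` table is an assumption, and the T-SQL is SQL Server flavored): each request does a cheap append-only insert, and a background job periodically folds the rows into the `Counters` table.

```csharp
using System.Data.SqlClient;

public void Upload(int organizationId)
{
    using (var db = new DbContext())
    {
        // Appending a row avoids contention on a single counter row.
        db.Database.ExecuteSqlCommand(
            "INSERT INTO CounterEvents (OrganizationId) VALUES (@id)",
            new SqlParameter("@id", organizationId));
    }
}

// Run every few seconds from a background worker.
public void Aggregate()
{
    using (var db = new DbContext())
    {
        // In production, delete only the rows you actually aggregated
        // (e.g. below an id watermark captured before the UPDATE), so
        // rows inserted mid-aggregation are not lost.
        db.Database.ExecuteSqlCommand(@"
            BEGIN TRANSACTION;
            UPDATE c SET c.count = c.count + e.N
            FROM Counters c
            JOIN (SELECT OrganizationId, COUNT(*) AS N
                  FROM CounterEvents
                  GROUP BY OrganizationId) e
              ON e.OrganizationId = c.OrganizationId;
            DELETE FROM CounterEvents;
            COMMIT;");
    }
}
```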

usr