From a systems design/scalability perspective, what are some industry-standard strategies for dealing with a system that requires heavy writes to a particular table in a DB?
For simplicity's sake, let's say the table is an inventory table for products, with a 'Product Name' column and a 'Count' column, and 'Count' simply increments by +1 each time that product is bought through the system. There are millions of users buying different products every second, and we have to keep track of the latest count of each product, but it doesn't have to be strictly realtime; a 5 min lag is acceptable.
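To make the write path concrete, here's a rough sketch of the naive approach, one row update per purchase (SQLite and the exact table/column names are just placeholders, not what we actually run):

```python
import sqlite3

# Toy stand-in for the real inventory table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE inventory (product_name TEXT PRIMARY KEY, count INTEGER NOT NULL DEFAULT 0)"
)

def record_purchase(product_name: str) -> None:
    # One write per purchase -- the part that becomes the bottleneck at
    # millions of purchases per second.
    conn.execute(
        "INSERT INTO inventory (product_name, count) VALUES (?, 1) "
        "ON CONFLICT(product_name) DO UPDATE SET count = count + 1",
        (product_name,),
    )
    conn.commit()

record_purchase("Apple")
record_purchase("Apple")
print(conn.execute("SELECT count FROM inventory WHERE product_name = ?", ("Apple",)).fetchone())
# (2,)
```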
My options are:
Master-slave replication, where the master DB handles all writes and the slaves handle reads. But this doesn't address the write-heavy problem.
Sharding the DB based on product name range, or its hashed value. But what if there's a specific product (e.g. Apple) that receives a large number of updates in a short time? It'll still hit the same shard (see the routing sketch after this list).
Batched updates? Use some kind of caching and write to the table every X seconds with the cumulative counts of whatever we've received in those X seconds (rough sketch after this list)? Is that a valid option, and what caching mechanism should I use? And what if there's a crash after counts have accumulated in the cache but before the next write to the table? How do I recover the lost counts?
Any other obvious choices I forgot about?
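For option 2, this is the kind of hash-based routing I mean (the shard count and hash function are arbitrary here); it also shows why a hot product like Apple keeps landing on the same shard no matter how many shards there are:

```python
import hashlib

NUM_SHARDS = 8  # arbitrary number, just for illustration

def shard_for(product_name: str) -> int:
    # A stable hash of the product name decides which shard owns the row.
    digest = hashlib.md5(product_name.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Every update for "Apple" maps to the same shard, so a hot product
# concentrates all its writes on one DB regardless of the shard count.
print(shard_for("Apple"), shard_for("Apple"))  # same shard id both times
print(shard_for("Banana"))                     # may be a different shard
```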
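And for option 3, a rough sketch of what I mean by batched updates, with an in-process Counter standing in for the cache and SQLite standing in for the real DB (in a multi-server setup the buffer would presumably be something shared like Redis counters; all names here are made up):

```python
import sqlite3
from collections import Counter

# In-memory buffer standing in for "some kind of caching".
buffer = Counter()

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE inventory (product_name TEXT PRIMARY KEY, count INTEGER NOT NULL DEFAULT 0)"
)

def record_purchase(product_name: str) -> None:
    # Hot path: no DB write per purchase, just bump a counter in the cache.
    buffer[product_name] += 1

def flush() -> None:
    # Run every X seconds (e.g. from a scheduler): apply the accumulated
    # deltas as one batched write per product, then reset the buffer.
    deltas = dict(buffer)
    buffer.clear()
    for name, delta in deltas.items():
        db.execute(
            "INSERT INTO inventory (product_name, count) VALUES (?, ?) "
            "ON CONFLICT(product_name) DO UPDATE SET count = count + excluded.count",
            (name, delta),
        )
    db.commit()
    # Counts still sitting in `buffer` when the process crashes (i.e. everything
    # since the last flush) are lost -- that's the recovery question I'm asking about.

record_purchase("Apple")
record_purchase("Apple")
record_purchase("Banana")
flush()
print(db.execute("SELECT * FROM inventory ORDER BY product_name").fetchall())
# [('Apple', 2), ('Banana', 1)]
```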
Any insight is appreciated!