
Let me explain the context first: I am building a visit tracker with PHP and MySQL. When a user visits a certain URL, his information is recorded, then he is redirected to a page. Later, when he clicks on a link, I record the information and then redirect him to his destination.

So I need to WRITE information to the database at the moment of the visit, and I need to READ and WRITE information at the moment of the click.
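To make it concrete, here is roughly what the two entry points look like, heavily simplified (the `visits` table layout, `geoCountry()` and `destinationFor()` are just placeholders):

```php
<?php
// Simplified sketch of the two entry points. $pdo is an open PDO connection;
// the `visits` table (id, ip, country, url, visited_at, clicked_at),
// geoCountry() and destinationFor() are placeholders.

// Visit endpoint: WRITE the visitor's information, then redirect.
function trackVisit(PDO $pdo, string $url): void
{
    $stmt = $pdo->prepare(
        'INSERT INTO visits (ip, country, url, visited_at) VALUES (?, ?, ?, NOW())'
    );
    $stmt->execute([$_SERVER['REMOTE_ADDR'], geoCountry(), $url]);
    header('Location: /landing-page');
    exit;
}

// Click endpoint: READ the stored visit (the destination depends on it),
// WRITE the click, then redirect.
function trackClick(PDO $pdo, int $visitId): void
{
    $stmt = $pdo->prepare('SELECT ip, country FROM visits WHERE id = ?');
    $stmt->execute([$visitId]);
    $visit = $stmt->fetch(PDO::FETCH_ASSOC);

    $pdo->prepare('UPDATE visits SET clicked_at = NOW() WHERE id = ?')
        ->execute([$visitId]);

    header('Location: ' . destinationFor($visit));
    exit;
}
```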

My problem is that I will have a huge number of rows to save in the database. And since I need to read and write very quickly (so that the user is redirected as fast as possible; that is the main concern), I am thinking about partitioning my tables.

But what would be the fastest way of doing it? I was thinking about two solutions:

Solution #1: Partitioning the table with MySQL

Basically, just partitioning the table based on ranges of the registered IDs. When I query data, MySQL takes care of routing it to the right partition, and it is transparent on the PHP side.

But since I don't know how many rows I will have to register, how can I define the ranges efficiently? And what happens when I reach the limit on the number of partitions?
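For reference, here is roughly what I mean by solution #1 (column names and range sizes are just examples); I added a MAXVALUE catch-all so inserts keep working when the last range fills up:

```php
<?php
// Rough sketch of solution #1. The last partition is a MAXVALUE catch-all,
// so inserts never fail when the highest range fills up; it can later be
// split with ALTER TABLE ... REORGANIZE PARTITION.
$ddl = <<<SQL
CREATE TABLE visits (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    ip         VARBINARY(16)   NOT NULL,
    country    CHAR(2)         NOT NULL,
    url        VARCHAR(2048)   NOT NULL,
    visited_at DATETIME        NOT NULL,
    PRIMARY KEY (id)
)
PARTITION BY RANGE (id) (
    PARTITION p0   VALUES LESS THAN (1000000),
    PARTITION p1   VALUES LESS THAN (2000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
)
SQL;
$pdo->exec($ddl);
```

From what I read, a table is limited to 8192 partitions (1024 on older MySQL versions), so at my volume the catch-all would still have to be reorganized regularly.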

Solution #2: Creating new tables when needed

If I keep the current number of rows cached in memory, I know that every 100,000 rows I need to create a new table. So a little before reaching the limit, I just create the next table. And from a given ID, I only have to divide by 100,000 to know which table to query.

This way, I will not have any problem with a maximum number of partitions.
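Here is roughly how I picture solution #2 (the `visits_template` table and the globally assigned IDs are assumptions on my part, since a per-table AUTO_INCREMENT would not give unique IDs across tables):

```php
<?php
// Rough sketch of solution #2: one table per block of 100,000 rows,
// with the table name derived from the ID.
const ROWS_PER_TABLE = 100000;

function tableForId(int $id): string
{
    return 'visits_' . intdiv($id, ROWS_PER_TABLE); // id 1234567 -> visits_12
}

function ensureTableExists(PDO $pdo, int $id): void
{
    // Called a little before the current table fills up.
    $table = tableForId($id);
    $pdo->exec("CREATE TABLE IF NOT EXISTS `$table` LIKE visits_template");
}

// Reading back a row from its ID:
function findVisit(PDO $pdo, int $id): ?array
{
    $stmt = $pdo->prepare('SELECT * FROM `' . tableForId($id) . '` WHERE id = ?');
    $stmt->execute([$id]);
    return $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
}
```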

But which will be the most efficient in my situation? Can a partitioning system based on PHP (which is basically what I intend to do in solution #2) be more efficient than MySQL's own? Or should I rely on MySQL and, if I ever reach a limit, create another partitioned table?

  • What's your scale? You may be over-architecting this problem or not even looking at the proper tools. MySQL with good hardware and the proper indexes will perform pretty well in most circumstances. You may want to consider making the writing and reading asynchronous so you don't have to wait for everything to happen before redirecting the user. You can also cache as much of the information as possible to not even hit the DB during the read portion. – PressingOnAlways Jan 25 '16 at 04:23
  • To answer your questions: I can't write or read asynchronously, because the result of the redirect depends on the user data (country, IP...). I am talking about hundreds of millions of values here (one or more million visits per month, and I need to keep the data for years!). – hellimac Jan 25 '16 at 04:30
  • How many requests per second? 50 simple _synchronous_ operations per second should be easy to achieve with a simple, single-table, single-server design. If you need 1000/sec, then give us more details so we can address your question. – Rick James Jan 25 '16 at 18:02
  • Even your manual cache is "over-architecting"; MySQL does its own caching. A Rule of Thumb: "Don't put a cache in front of a cache." – Rick James Jan 25 '16 at 18:03

0 Answers