
I am developing a GPS device based application with CakePHP 2.0 and MySQL (InnoDB). Each device sends data to the database every minute, and I need it to scale to a very large number of devices sending data to the server simultaneously.

I have not used a BIGINT auto-increment primary key because BIGINT has a maximum value; once that limit is reached the system will fall apart, even if that point is far away.

I created the primary key as char(36), generated UUIDs from PHP, and started storing the data, primarily because the key will never run out and will always be unique, so the design will never fail.
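For context, the table looks roughly like this (a simplified sketch; the column names other than the key are illustrative, and the real table has more fields):

```sql
-- Simplified sketch of the current table (names are illustrative)
CREATE TABLE gps_data (
    id          CHAR(36)      NOT NULL,  -- UUID generated in PHP
    device_imei VARCHAR(20)   NOT NULL,
    latitude    DECIMAL(10,7) NOT NULL,
    longitude   DECIMAL(10,7) NOT NULL,
    recorded_at DATETIME      NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```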

Uniqueness is the only reason for this choice, nothing else.

Problems:

  1. The system is in pilot testing and the time to insert data has increased significantly. See http://kccoder.com/mysql/uuid-vs-int-insert-performance/ — this is exactly what is happening in my case: insert time keeps growing over time, and I expect it to get worse as the volume of data increases. There are around 200,000 rows in the table now.
  2. The primary key being char(36), I assume performance is being affected in inserts, SELECT statements, and joins.

My idea is to replace the UUID primary key column with a varchar(50) column and store Device ID / IMEI number + timestamp as the primary key; that combination will always be unique. The downside is that it is again a varchar field, which may cause performance issues in the long run.
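Concretely, something like this (again, names are only illustrative):

```sql
-- Sketch of the alternative: IMEI + timestamp concatenated into one key
CREATE TABLE gps_data (
    id          VARCHAR(50)   NOT NULL,  -- IMEI + '-' + Unix timestamp, built in PHP
    device_imei VARCHAR(20)   NOT NULL,
    latitude    DECIMAL(10,7) NOT NULL,
    longitude   DECIMAL(10,7) NOT NULL,
    recorded_at DATETIME      NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```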

What is the best option for me in the long run?

user2055396
  • If you have 1000000 devices it would take 3500+ years to use it all... – PeeHaa Dec 23 '14 at 18:55
  • This question appears to be off-topic because it is about an incorrect assumption about mysql datatypes and cannot be satisfyingly answered because of this. – PeeHaa Dec 23 '14 at 18:56
  • 1
    My suggestion: Let's keep this question closed for two days. As you're in pilot phase it can be easily validated if your expectations are met or not. Until then I would suggest to rely on related existing material on site to get a more educated guess about the Mysql column types and how they might behave. – hakre Dec 23 '14 at 18:59
  • 2
    In what way has the "time to insert the data has increased to very large extent"? Can we see some timings of inserts/sec starting from zero rows, and then starting from 200K rows? What indexes do you have on this table? What hardware are you running on? How fast do you really need it to be? – halfer Dec 23 '14 at 19:03

1 Answer


Just use BIGINT UNSIGNED.
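For example, something along these lines (table and column names are just placeholders):

```sql
CREATE TABLE gps_data (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- plain auto-increment key
    device_imei VARCHAR(20)     NOT NULL,
    recorded_at DATETIME        NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```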

An unsigned INT64 holds something like 4,000,000,000 times 4,000,000,000 values. So assuming you have one device for each of the fewer than 8 billion people on the planet, each logging once per second, that leaves you with 2 billion seconds, or more than 63 years. I assume that in 63 years an INT128 will be normal (or small).

I am also quite sure that you will run into very different classes of trouble long before you reach 2^64 rows in a single table.

Eugen Rieck