
I have a rather simple query

SELECT col1 FROM table1

This query returns about 300K+ rows per second, which sounds reasonable, yet other methods of mass-decoding saved data (e.g. array deserialization) work at the limit of what the hard drive allows, i.e. 40-50 MB/s, compared with 2-3 MB/s through MySQL.

I see that my MySQL machine is CPU bound, and I reckon there must be a simple way to keep using MySQL and still get double-digit MB/s throughput (for queries of roughly the same complexity, of course) without spending a fortune on hardware.

Any ideas how to fiddle with the server to make this happen?


EDIT: I use the MyISAM engine


EDIT: The server is already a decent machine so I'm not trying to pinch my pennies here...

Dani
  • Why are you putting serialized data into your DB in the first place? It looks like you're not filtering, searching, sorting, joining or doing anything else that makes use of the DB. Unless there are details you left out, the best solution might be to stick with the flat-file import. – Sean McSomething Jan 12 '09 at 17:51

3 Answers

4

Are you using the right tool for your problem? MySQL is a relational database. It excels at queries that involve relations between tables. The type of query you describe does not make use of the database's strong points, yet still pays its overhead. MySQL does not guarantee raw speed - it provides persistent storage with complex querying abilities at a reasonable speed. Maybe you need a different data architecture. Or, if the query you describe is not representative, there is a good book about MySQL performance, and its accompanying blog, worth reading.
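
If the query really is just a bulk read of one column, one alternative worth considering (a sketch only - the output path is illustrative, and table1/col1 are the names from the question) is to let the server dump the column straight to a flat file and process that file sequentially, which is essentially the flat-file approach mentioned in the comments:

SELECT col1
INTO OUTFILE '/tmp/col1_dump.tsv' -- server-side path, must be writable by mysqld
FROM table1;

Reading the resulting file avoids the per-row client/server protocol work, so it should get much closer to the 40-50 MB/s the disk allows.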

Yuval F
3

Store the database in memory. That makes it really fast!
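
A minimal sketch of that idea using MySQL's MEMORY storage engine (this assumes col1 is a fixed-width type, since MEMORY tables don't support TEXT/BLOB columns, and that the data fits within max_heap_table_size):

-- Copy the column into an in-memory table so subsequent reads never touch the disk.
CREATE TABLE table1_mem ENGINE=MEMORY AS
  SELECT col1 FROM table1;

SELECT col1 FROM table1_mem;

Note that if the box is CPU bound rather than I/O bound, as the question says, this mainly saves the disk reads; the per-row decoding cost stays the same.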

Filip Ekberg
1

Databases more often make the machine I/O bound.

If your machine is CPU bound, then I don't think the database is your bottleneck.

Bill Karwin