
I have a database with over 10,000,000 rows. Querying it right now can take a few seconds just to find some basic information. This isn't ideal; I know the best way to optimize would be to reduce the number of rows, which is possible, but right now I don't have the time to do that.

What's the easiest way to optimize a MySQL database so that when querying it, the time taken is short?

I don't mind the size of the database; that doesn't really matter, so any optimizations that increase the size are fine. I'm not very good at optimization. I have indexes set up already, but I'm not sure how much better it can get from there.

I'll eventually trim down the database properly, but is there a quick temporary solution?

Peter Mortensen
sam
  • Maybe the problem is the query... Maybe the server hardware is not powerful enough. I think we need more information. – Michael B. Mar 04 '10 at 17:57
  • Is there a way to improve simple select queries? The queries are literally "SELECT * FROM table WHERE column = 'something'". Hardware-wise, maybe, but the server isn't running at max capacity so I'm unsure if it's that, although obviously better hardware would be good :-D – sam Mar 04 '10 at 18:09
  • Use EXPLAIN on your queries to find out how they are executed by your database server. With this information, you can create the indexes you need. 10M records is not that much, should not give a serious problem. Unless they are 1GB each... ;) – Frank Heikens Mar 04 '10 at 18:52
  • I run complicated queries on over 100 million rows in under 0.5 seconds, but I have a properly designed database that utilises InnoDB clustered indexes, secondary indexes, denormalisation, rollup tables, correct datatypes (TINYINT UNSIGNED, 1 byte, vs. INT UNSIGNED, 4 bytes), proper MySQL configuration and a host of other optimisations. (Partitioning in my experience has slowed my performance.) 10 million is nothing, so as Frank Heikens said above: show us your queries and explain plans so we can comment further. – Jon Black Mar 24 '10 at 15:49

6 Answers


Besides indexing, which has already been suggested, you may also want to look into partitioning tables if they are large.

Partitioning in MySQL

It's tough to be specific here, because we have very limited information, but proper indexing along with partitioning can go a very long way. Indexing properly can be a long subject, but in a very general sense you'll want to index columns you query against.
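As a rough sketch of what partitioning looks like in MySQL (the table and column names here are invented for illustration; note that in MySQL the partitioning key must be part of every unique key, including the primary key):

```sql
-- Hypothetical log table, RANGE-partitioned by year.
CREATE TABLE access_log (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    created_at DATE NOT NULL,
    message VARCHAR(255),
    PRIMARY KEY (id, created_at)
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2008 VALUES LESS THAN (2009),
    PARTITION p2009 VALUES LESS THAN (2010),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Queries that filter on created_at can then prune partitions,
-- scanning only the partitions that can contain matching rows:
SELECT COUNT(*) FROM access_log
WHERE created_at BETWEEN '2009-01-01' AND '2009-12-31';
```

Whether partitioning actually helps depends heavily on the query pattern (see the comment above noting it can also hurt), so measure before and after.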

For example, say you have a table of employees, and you have your usual columns of SSN, FNAME, LNAME. In addition to those columns, we'll say that you have an additional 10 columns in the table as well.

Now you have this query:

SELECT FNAME, LNAME FROM EMPLOYEES WHERE SSN = 'blah';

Ignoring the fact that the SSN could likely be the primary key here and may already have a unique index on it, you would likely see a performance benefit by creating another, composite index containing the columns (SSN, FNAME, LNAME). The reason this is beneficial is that the database can satisfy this query from the composite index alone, because the index contains all the values needed in a sorted, compact space (that is, less I/O). Even though an index on SSN alone is a better access method than a full table scan, the database still has to read the index's data blocks (I/O), find the value(s), which contain pointers to the records needed to satisfy the query, and then read different data blocks (read: more random I/O) to retrieve the actual values for FNAME and LNAME.

This is obviously very simplified, but using indexes in this way can drastically reduce I/O and increase performance of your database.
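The covering index described above could be created like this (EMPLOYEES and its columns come from the example; the index name is arbitrary):

```sql
-- Composite index covering the whole query: MySQL can answer it
-- from the index alone, without touching the table rows.
CREATE INDEX idx_ssn_name ON EMPLOYEES (SSN, FNAME, LNAME);

-- EXPLAIN should now show "Using index" in the Extra column,
-- which indicates an index-only (covering) read:
EXPLAIN SELECT FNAME, LNAME FROM EMPLOYEES WHERE SSN = 'blah';
```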


RC.

From what I can see, you're requesting 40k rows from the database; that much data takes time just to be transferred over the wire.

Also, never ask how to improve "in general". There is no such thing as "general" optimization. Optimization is always the result of profiling and investigating your particular case.
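One concrete way to start that profiling in MySQL is the slow query log (the thresholds below are illustrative; the variables themselves are standard MySQL server variables):

```sql
-- Log every statement that takes longer than one second,
-- plus any query that runs without using an index at all.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL log_queries_not_using_indexes = 'ON';

-- Then inspect each suspect individually with EXPLAIN:
EXPLAIN SELECT * FROM some_table WHERE some_column = 'something';
```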

Peter Mortensen
Your Common Sense

Use indexes on columns you search on very often.
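In MySQL terms that looks like the following (table and column names are placeholders):

```sql
-- Single-column index for a column used often in WHERE clauses.
CREATE INDEX idx_your_column ON your_table (your_column);

-- This speeds up lookups such as:
SELECT * FROM your_table WHERE your_column = 'something';
```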

Felix Kling
  • Already doing this! Is there a specific way to do it that is "better"? Right now I have the most actively searched columns indexed (e.g. WHERE x = y, where x is the column I've indexed). – sam Mar 04 '10 at 18:01

In your example, 'WHERE x = y', if y is a column name, create an index on y as well.

The key with an index is selectivity: the number of rows your SELECT query returns should be around 3% to 5% of the entire table, and then it will be faster.

Archiving old rows out of the table also helps. I do not know the details of how to do this; it is mostly a DBA task, and a simple one for a DBA who has done it before.
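You can estimate that selectivity yourself before deciding on an index (table and column names are placeholders; the 3% to 5% figure is the rule of thumb from above, not a hard limit):

```sql
-- Fraction of the table a typical lookup would return.
-- If this is much above ~0.05, a full scan may beat the index.
SELECT COUNT(*) / (SELECT COUNT(*) FROM your_table) AS selectivity
FROM your_table
WHERE your_column = 'something';
```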

exiter2000

If you're doing ordering or complex queries, you may need to use multi-column indexes. For example, if you're searching where x.name = 'y' AND x.phone = 'z', it might be worth putting a composite index on (name, phone); for OR conditions, separate single-column indexes that MySQL can merge tend to work better. Simplified example, but if you need to do this you'll need to research it further anyway :)
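As a sketch of the two cases (table and column names invented):

```sql
-- For AND conditions, one composite index can serve both columns:
CREATE INDEX idx_name_phone ON x (name, phone);
SELECT * FROM x WHERE name = 'y' AND phone = 'z';

-- For OR conditions, separate indexes let the optimizer combine
-- them (EXPLAIN may show type = index_merge, Using union(...)):
CREATE INDEX idx_name  ON x (name);
CREATE INDEX idx_phone ON x (phone);
SELECT * FROM x WHERE name = 'y' OR phone = 'z';
```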

spronkey

Are your queries using your indexes? What does running an EXPLAIN on your select queries tell you?

The first (and easiest) step will be making sure your queries are optimized.
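For example, using the kind of query from the comments above:

```sql
-- EXPLAIN shows which index (if any) MySQL chooses, the estimated
-- number of rows examined, and red flags in the Extra column such
-- as "Using filesort" or "Using temporary".
EXPLAIN SELECT * FROM table_name WHERE column_name = 'something';
```

A `type` of ALL in the output means a full table scan, which on a 10M-row table is usually the thing to fix first.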

jasonbar
  • I've just run "ANALYZE TABLE" to figure out some more information, but EXPLAIN on a basic query gives: http://pastebin.com/MfNEiqBy – sam Mar 04 '10 at 18:02