
I have three or four tables in a MySQL database for an upcoming Android app that could explode to thousands of rows very quickly. At this time, I have about 6-8 SELECT and 2 INSERT SQL commands that will need to be run.

After doing research, I have found that I will have to use indexing to cut down on load time. I have searched several tutorials on different sites to see if I can pick this up -- but I have found nothing that explains clearly what to do or how to do it.

Here's the situation:

First and foremost, it will be using a GoDaddy MySQL server: unlimited bandwidth and 150,000 MB of storage. Here is one table that will be getting lots of use:

items_id (int 11)
item (varchar 100)
cat_id (int 11)

In phpMyAdmin, it shows the following for the table's indexes:

Keyname: PRIMARY    Type: PRIMARY    Cardinality: 576    Field: items_id

So it appears there is an index established, correct?

Here is one SQL Query (via PHP) related to this table (SELECT):

 "SELECT * FROM items WHERE cat_id = ' ".$_REQUEST['category_id']."' ORDER BY TRIM(LEADING 'The ' FROM item) ASC;"

And another (INSERT):

 "INSERT INTO items (item, cat_id) VALUES ('{$newItem}', '{$cat_id}')"

My main questions are: with these queries, am I getting the best speed possible and making use of the existing index? Or does this have "slow" written all over it?

  • **Your code is vulnerable to SQL injection.** You *really* should be using [prepared statements](http://stackoverflow.com/a/60496/623041), into which you pass your variables as parameters that do not get evaluated for SQL. If you don't know what I'm talking about, or how to fix it, read the story of [Bobby Tables](http://stackoverflow.com/questions/332365/xkcd-sql-injection-please-explain). – eggyal Jun 27 '12 at 23:37
  • Oh yes, I know this. I have not even touched validation or this topic yet. I am still in the planning phase. But definitely thanks for the heads up; I still need to prepare more for that than I am capable of doing at the moment. – TheLettuceMaster Jun 27 '12 at 23:38
  • Use [`EXPLAIN`](http://dev.mysql.com/doc/en/explain.html) to determine MySQL's proposed execution plan for your queries, including what indexes it intends to use; then provide [index hints](http://dev.mysql.com/doc/en/index-hints.html) on the (rare) occasions that the query optimiser has chosen a sub-optimal plan. – eggyal Jun 27 '12 at 23:39
  • In your case, performance of your `SELECT` would be improved by creating an index on `cat_id`; a further improvement would come from caching the result of the `TRIM` operation and building another index on that (see the sketch right after these comments). – eggyal Jun 27 '12 at 23:46
  • Ok, I see now. I ran an EXPLAIN and it tested as badly as it could. I added cat_id as an index and it limited it to 28 rows. So would it be wise to remove the items_id index if it is a unique, auto-incremented column? – TheLettuceMaster Jun 27 '12 at 23:54
  • MySQL will need to maintain an index on `items_id` if you want it to *enforce* the uniqueness constraint on that column. Otherwise, scrapping that index wouldn't impair the performance of the `SELECT` you show above (and would actually marginally improve the performance of the `INSERT`), but it's possible that it would impair other queries you might wish to perform. Generally speaking, build indexes on any column(s) used in filtering, grouping, joining or ordering; then tweak if queries run slowly. – eggyal Jun 27 '12 at 23:59
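To make the indexing suggestions from these comments concrete, here is a minimal SQL sketch against the `items` table from the question. The index names, the extra `item_sort` column and the sample category id `3` are illustrative assumptions, not anything taken from the discussion above:

    -- Index the column used in the WHERE clause of the SELECT, so MySQL can
    -- jump straight to the matching rows instead of scanning the whole table.
    CREATE INDEX idx_cat_id ON items (cat_id);

    -- Optionally cache the TRIM result in its own (hypothetical) column so it
    -- is not recomputed for every row; it must be kept in sync on INSERT/UPDATE.
    ALTER TABLE items ADD COLUMN item_sort VARCHAR(100);
    UPDATE items SET item_sort = TRIM(LEADING 'The ' FROM item);
    CREATE INDEX idx_item_sort ON items (item_sort);

    -- Check which index the optimiser actually picks.
    EXPLAIN SELECT * FROM items WHERE cat_id = 3 ORDER BY item_sort ASC;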

1 Answer

Simple SELECT / INSERT statements themselves cannot be rewritten to take advantage of indexes.

But indexes can be added to the tables to make the queries run faster.

Actually, INSERTs don't make use of indexes at all unless you're using InnoDB as the storage engine with foreign key constraints (the inserted keys then have to be checked against the referenced tables); they only pay a small cost to keep each existing index up to date.

If you're using a column in the WHERE / GROUP BY / ORDER BY clauses of a SELECT statement, you may consider adding an index on it. A good idea would be to run EXPLAIN on the queries in question and see how the database engine uses the columns in the WHERE clause.

If a column has a small set of non-unique possible values (gender: male/female), it makes little sense to add an index on it, because you won't be searching for all the females or all the males (and searching half a table is not much different from searching the whole table). But if you use that column together with another column to filter / group / sort, you may want to add a composite index (multi-column index) on them, as sketched below.
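As a hedged illustration of a composite index, again using the question's `items` table (the index name and sample values are made up):

    -- One multi-column index serves queries that filter on cat_id alone as well
    -- as queries that filter on cat_id and also sort or filter on item.
    CREATE INDEX idx_cat_item ON items (cat_id, item);

    -- This query can use idx_cat_item for both the WHERE and the ORDER BY...
    EXPLAIN SELECT * FROM items WHERE cat_id = 3 ORDER BY item;

    -- ...but this one typically cannot, because the leftmost column of the
    -- index (cat_id) is not constrained.
    EXPLAIN SELECT * FROM items WHERE item = 'The Example';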

Internally, MySQL stores each database as a folder on disk, with several files per table.

There's a table definition file, a table data file and an index file. If you define an index on a column or set of columns, the corresponding index structure is written into that index file.

If you don't have any indexes, not even a primary key, any SELECT statement will do a full table scan, which becomes noticeably slow once you reach hundreds of thousands of rows.

When you define an index, MySQL reads all the values in the table for that column (or set of columns) and writes out an index structure that maps each value to the records containing it.

That index should be much smaller than the data file and will usually fit into memory entirely, alongside the other indexes. To answer a query, MySQL only has to look up (and, when there are several conditions, intersect) the matching record lists in the index and then cherry-pick the data it needs from the data file.

Primary and unique indexes map each value to exactly one record, so searching by a unique value is fast.
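For example, assuming the `items` table from the question (the id `42` is made up), you can see both things directly:

    -- SHOW INDEX lists every index defined on the table, including PRIMARY.
    SHOW INDEX FROM items;

    -- A lookup by the primary key resolves to a single row;
    -- EXPLAIN reports it as a const-type access.
    EXPLAIN SELECT * FROM items WHERE items_id = 42;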

Mihai Stancu