
When fetching around 2,000 MySQL rows as JSON, my AJAX call takes around 10 seconds to return the data:
9 seconds of waiting time (TTFB)
1 second to download the content...

Does anyone know how to speed this up?

Currently I'm fetching all results of the MySQL search (without any LIMIT), and I create pages by slicing the JSON data.

I guess one way to reduce the payload is to send the page number through the AJAX call and LIMIT the MySQL query accordingly, but is there really no other way?
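For reference, the paginated version I'm imagining would look roughly like this (the page number and page size would be passed in through the AJAX request; the values here are just placeholders):

```sql
-- Hypothetical paginated query: only one page of rows is returned,
-- instead of the full result set being sliced in the application.
SELECT wpp.ID, post_title
FROM wp_posts AS wpp
WHERE post_type = 'post'
ORDER BY post_date DESC
LIMIT 20 OFFSET 40;  -- e.g. page 3 with a page size of 20
```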

Below is my original MySQL query.

It's complex because I have a single search box that matches the search keyword against several fields in my database, such as the title and the categories.
Everywhere it says LIKE '%%' is where the search term goes.
A category selection, however, filters the query in a different place: see LIKE '%health-and-beauty%', while the LIKE '%%' stays empty.

SELECT wpp.ID, post_title, wp_terms.name AS category, wp_terms.slug AS slug, supplier_company, 
        GROUP_CONCAT(wp_terms.slug SEPARATOR ', ') AS allslug,
        GROUP_CONCAT(wp_terms.name SEPARATOR ', ') AS allcatname
        FROM wp_posts AS wpp
        LEFT JOIN wp_term_relationships ON wpp.ID = object_id
        LEFT JOIN wp_terms ON term_taxonomy_id = wp_terms.term_id
        LEFT JOIN wp_teleapo_supplier AS s ON wpp.post_author = s.ID



        /* BASIC SEARCH on normal fields */
        WHERE post_type = 'post' 


        GROUP BY wpp.ID

        /* SEARCH on CONCAT FIELDS*/
        HAVING

        (post_title LIKE '%%'
        OR allcatname LIKE '%%'
        OR allslug LIKE '%%'
        OR supplier_company LIKE '%%')
        AND (allslug LIKE '%health-and-beauty%'
        ) AND (allslug LIKE '%%'
        ) AND 

        /* ADD EXTRA SEARCH TAGS: */
        /* Language tag */
         allslug LIKE '%english%' 
        /* ..... tag */

        /* AND allslug LIKE '%......... %' */
        ORDER BY post_date DESC 
  • What is the query? What indexes are on the table(s)? Have you profiled your code to determine where the system is spending its time? – pojo-guy Mar 17 '16 at 03:09
  • I would love to be able to determine where the time is being spent, but I don't know how to do that! Hold on let me update the question with my MySQL query (it's rather complex) wait 2 sec... But what do you mean with indexes on the tables? – mesqueeb Mar 17 '16 at 03:13
  • right index and limit clause will surely speed up response time – Sanj Mar 17 '16 at 03:21
  • @pojo-guy I added the MySQL query. What do you mean by `what indexes are on the table?` ? – mesqueeb Mar 17 '16 at 03:21
  • what's a right index? – mesqueeb Mar 17 '16 at 03:22
  • What is an index: http://stackoverflow.com/questions/2955459/what-is-an-index-in-sql – Sanj Mar 17 '16 at 03:24
  • all the "like" conditions are in HAVING clause ? should they be under WHERE clause? – Sanj Mar 17 '16 at 03:33
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/106528/discussion-between-sanj-and-mesqueeb). – Sanj Mar 17 '16 at 03:34

1 Answer

Some basics:

Database basics at the wire

Under the covers, virtually all current-generation databases use highly optimized flavors of ISAM for their storage format. Data is stored and retrieved in blocks that correspond to physical structures on the hard drive, called pages.

A table is a collection of pages containing as many rows as they can.

An index is, physically, a table with a special contract: it contains only the columns you have chosen and a reference to the physical row in the table, and it is maintained in an organized manner for quick searching. Since indexes are smaller and ordered, searches against an index take a fraction of the time required to scan the table. In a trivial case, searching a million large rows in a table for a specific value may take up to three hours, but will require no more than 40 milliseconds from a cold start against an index.

So how does SQL tie to indexes?

The optimizer sits between you and the underlying physical database structure. Based on your joins and WHERE clause, it looks for the least-cost path between the various entities in the query.
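You asked in the comments how to determine where the time is being spent: prefixing the query with EXPLAIN shows the plan the optimizer chose. A sketch, using a trimmed-down version of your query:

```sql
-- EXPLAIN reports, per table, which index (if any) MySQL uses and
-- roughly how many rows it expects to examine for this query.
EXPLAIN
SELECT wpp.ID, post_title
FROM wp_posts AS wpp
LEFT JOIN wp_term_relationships ON wpp.ID = object_id
LEFT JOIN wp_terms ON term_taxonomy_id = wp_terms.term_id
WHERE post_type = 'post';
-- A "type" of ALL together with a large "rows" estimate indicates a
-- full table scan, i.e. no usable index for that step.
```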

And what about this query?

The joins in your query are these:

LEFT JOIN wp_term_relationships ON wpp.ID = object_id
LEFT JOIN wp_terms ON term_taxonomy_id = wp_terms.term_id
LEFT JOIN wp_teleapo_supplier AS s ON wpp.post_author = s.ID

Based on these joins, and common practice, I would expect indexes on:

wp_posts.ID
wp_term_relationships.object_id
wp_term_relationships.term_taxonomy_id 
wp_terms.term_id
wp_teleapo_supplier.ID

Primary Keys, Foreign Keys, and unique constraints are all index constructs. If you have created an index directly or incorporated these columns into any of these constructs, then you have indexes.
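If any of those indexes are missing, they can be created directly. A sketch, with table and column names taken from the joins in your query (verify them against your actual schema before running anything):

```sql
-- Stock WordPress already indexes wp_posts.ID (primary key) and
-- wp_term_relationships; create only whichever of these are absent.
CREATE INDEX idx_rel_object ON wp_term_relationships (object_id);
CREATE INDEX idx_rel_tax    ON wp_term_relationships (term_taxonomy_id);
CREATE INDEX idx_terms_id   ON wp_terms (term_id);
```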

Your "where" clause is:

WHERE post_type = 'post' 

Unless 'post' is a rare type of posting, this will bring back just about every post in the system. Even if you have an index on this column, it won't necessarily buy you anything because its selectivity is so poor.

Finally, you applied a "Having" clause, which is applied to the result set of the preceding query.

       HAVING

    (post_title LIKE '%%'
    OR allcatname LIKE '%%'
    OR allslug LIKE '%%'
    OR supplier_company LIKE '%%')
    AND (allslug LIKE '%health-and-beauty%'
    ) AND (allslug LIKE '%%'
    ) AND 

    /* ADD EXTRA SEARCH TAGS: */
    /* Language tag */
     allslug LIKE '%english%' 
    /* ..... tag */

So, in your query, you have effectively selected every post and everything attached to it, then you are trying to run partial text pattern matches against these columns on every row.

Suggested solution:

It's time to drop back 10 and punt. This is a problem that requires a text search engine to get the right kind of selectivity, not a relational system.

MySQL has a full-text search engine, fully documented at https://dev.mysql.com/doc/refman/5.5/en/fulltext-search.html .

Set up your text search indexes, then invert the query so your highest selectivity happens in the join or the where clause, then apply the having against the few rows you actually need.
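A minimal sketch of that inversion, assuming you search the post title (note that in MySQL 5.5 FULLTEXT indexes require MyISAM tables; InnoDB supports them only from 5.6, so check your table engine first):

```sql
-- Build a full-text index on the searchable column(s)...
ALTER TABLE wp_posts ADD FULLTEXT INDEX ft_title (post_title);

-- ...then let MATCH ... AGAINST do the high-selectivity filtering in
-- the WHERE clause, before grouping, instead of LIKE '%term%' in a
-- HAVING clause applied after every row has already been grouped.
SELECT wpp.ID, post_title
FROM wp_posts AS wpp
WHERE post_type = 'post'
  AND MATCH (post_title) AGAINST ('health beauty' IN NATURAL LANGUAGE MODE)
ORDER BY post_date DESC
LIMIT 20;
```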

Alternatively, if your dataset is large enough, you may want to look at a more robust solution for serious text searches.
