I have a distributed PHP5 application that lets users search and paginate through MySQL data. Sometimes their queries return hundreds of thousands of records, and the results are always sorted by timestamp, so both the sheer volume of rows and the ORDER BY are already a drag on performance.
I need to make pagination more efficient. Long ago I ditched SQL_CALC_FOUND_ROWS in favor of a separate COUNT(*), but that still means the query runs twice: once with the LIMIT and once without it for the count.
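For reference, this is essentially what the two-query pattern looks like today (a simplified sketch; the `$pdo` connection and the `records`/`created_at` names are placeholders for my real schema):

```php
<?php
// Sketch of the current two-query pagination; table and column
// names are placeholders, not the real schema.
$perPage = 50;
$offset  = ((int) $page - 1) * $perPage;

// Query 1: COUNT(*) for the pager (what replaced SQL_CALC_FOUND_ROWS).
$stmt = $pdo->prepare("SELECT COUNT(*) FROM records WHERE category = ?");
$stmt->execute(array($category));
$total = (int) $stmt->fetchColumn();

// Query 2: the same filter again, this time limited to one page.
$stmt = $pdo->prepare(
    "SELECT * FROM records WHERE category = ?
     ORDER BY created_at DESC
     LIMIT $offset, $perPage"   // both values cast to int above
);
$stmt->execute(array($category));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```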
I could drop the LIMIT clause, run the query once, and rely on PHP to paginate the result set.

Update: that isn't possible, because the large datasets I'm working with would exceed PHP's memory limit, and since the application is distributed I can't rely on every user to raise that setting.
However, I'd really like to cache the results and paginate from the cache instead; that would be much faster than querying MySQL again for every page. Because it's a distributed application, I can't count on the caching layers I'd normally reach for, like Memcached, being available. Session storage would likely be too small.
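To illustrate the kind of caching I have in mind, here is a rough, hypothetical sketch: spool the full result set to a temp file once (one JSON line per row, fetched row-by-row so PHP memory stays flat), then serve each page by reading only the lines it needs. The file location, key scheme, and format are illustrative only:

```php
<?php
// Hypothetical file-backed result cache; paths and key scheme are
// illustrative. Assumes $pdo, $sql, $params, $page, $perPage exist.
$cacheKey  = md5($sql . serialize($params));
$cacheFile = sys_get_temp_dir() . "/results_{$cacheKey}.jsonl";

if (!is_readable($cacheFile)) {
    // Unbuffered query so the driver doesn't hold the whole set either.
    $pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
    $stmt = $pdo->prepare($sql);
    $stmt->execute($params);

    $fh = fopen($cacheFile, 'w');
    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        fwrite($fh, json_encode($row) . "\n");  // one row per line
    }
    fclose($fh);
}

// Read only the requested page's lines, never the whole file.
$file = new SplFileObject($cacheFile);
$file->seek(($page - 1) * $perPage);        // jump to first row of page
$rows = array();
for ($i = 0; $i < $perPage && !$file->eof(); $i++) {
    $line = trim($file->current());
    if ($line !== '') {
        $rows[] = json_decode($line, true);
    }
    $file->next();
}
```

But temp files bring their own problems (locking, invalidation, cleanup), which is part of why I'm asking.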
Are there any alternatives? Is there a better way to paginate the results, or a safe way to cache them and let PHP handle the pagination?