How to handle over 10 million records in MySQL with read-only operations?
I have two tables. One is `company`, which holds company records, i.e. the company name and the services it provides, so 2 columns and about 3 million rows. The other is `employee`, which has about 40 columns and about 10 million rows.
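For reference, the structure is roughly like this (an illustrative sketch only; the real column names differ and `employee` has about 40 columns):

CREATE TABLE company (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,  -- company name
    services TEXT                -- services provided by the company
) ENGINE=InnoDB;

CREATE TABLE employee (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    company_id BIGINT UNSIGNED NOT NULL,  -- join id (illustrative name)
    name VARCHAR(255) NOT NULL
    -- ...plus roughly 37 more columns
) ENGINE=InnoDB;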
When I try to fetch data, even a simple query such as
Employee::paginate(100);
// In MySQL terms this runs:
// SELECT COUNT(*) AS aggregate FROM `employee`;
// SELECT * FROM `employee` LIMIT 100 OFFSET 0;
used to take about 30 seconds, and now it takes practically forever.
If I want to do a search, apply a filter, or join the two tables, i.e. `company` and `employee`, then it sometimes works and sometimes crashes, leaving lots of errors/warnings in the MySQL server logs.
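To give an idea of the kind of query involved, a filtered join looks something like this (the `company_id` foreign key and the LIKE filter are placeholders, not my exact code):

Employee::join('company', 'employee.company_id', '=', 'company.id')
    ->where('company.name', 'like', '%tech%')
    ->select('employee.*', 'company.name as company_name')
    ->paginate(100);
// In MySQL terms, roughly:
// SELECT COUNT(*) AS aggregate FROM `employee`
//     INNER JOIN `company` ON `employee`.`company_id` = `company`.`id`
//     WHERE `company`.`name` LIKE '%tech%';
// SELECT `employee`.*, `company`.`name` AS company_name FROM `employee`
//     INNER JOIN `company` ON `employee`.`company_id` = `company`.`id`
//     WHERE `company`.`name` LIKE '%tech%' LIMIT 100 OFFSET 0;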
Can anyone please tell me how I can handle this volume of records more efficiently without causing a MySQL server meltdown, especially during high-traffic periods?
Currently, only the primary keys (the ids) and the join ids are indexed.
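Concretely, besides the primary keys shown in the sketch above, the only other index is the one on the join id (name illustrative):

-- The join id is the only secondary index; none of the columns
-- I search or filter on are indexed.
CREATE INDEX employee_company_id_index ON employee (company_id);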
This may look like a duplicate of similar questions that have already been asked on SO, but those did not help me much.
Questions I have followed on SO:
- Best data store for billions of rows
- Handling very large data with mysql
- Can MySQL reasonably perform queries on billions of rows? (https://dba.stackexchange.com/questions/20335/can-mysql-reasonably-perform-queries-on-billions-of-rows)
- Is InnoDB (MySQL 5.5.8) the right choice for multi-billion rows?
- How big can a MySQL database get before performance starts to degrade
- Handling millions of records in arrays from MySQL in PHP?