I have a very large table on which I perform very simple queries, mostly ID lookups. The queries have gotten slower over time as the row count has grown. I run around 300 queries per second, which makes my script slow and drives memory usage to 99% (the available RAM is smaller than the size of the DB). I want to partition and subpartition the table for maximum performance. Here is my table (one of about 30 tables like it); please include code, because I am very new to partitioning and don't know much about it. The SELECT queries are only WHERE-clause ID lookups, plus simple INSERTs. I could upgrade the RAM to be larger than the DB, but I would like to avoid that, and I don't know whether it would solve the problem better than partitioning would.
CREATE TABLE `books` (
`id` INT(10) NOT NULL AUTO_INCREMENT,
`name` VARCHAR(200) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
`picture` VARCHAR(500) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
`url` VARCHAR(500) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
`created` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
INDEX `url` (`url`(333)),
INDEX `name` (`name`)
)
COLLATE='utf8_unicode_ci'
ENGINE=MyISAM
ROW_FORMAT=DEFAULT
AUTO_INCREMENT=937
Here are some examples of my queries:
SELECT id FROM books WHERE url = 'blabla';
INSERT INTO user_books SET book_id = '3', user_id = '10';
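I have been checking the query plans with EXPLAIN to see which index each lookup uses, e.g.:

EXPLAIN SELECT id FROM books WHERE url = 'blabla';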
Each query takes around 0.05-0.2 seconds.
I have around 5-10 million rows in each table.
The DB size is 10GB; I have thought about upgrading the RAM to 16GB.
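For reference, this is the kind of HASH partitioning I was considering, adapted from the MySQL manual (the partition count of 10 is just a guess on my part, and I'm not sure it's the right approach):

ALTER TABLE `books`
PARTITION BY HASH(`id`)
PARTITIONS 10;

As far as I understand, a lookup by `url` would still have to check every partition, since the partitioning key is `id`, so I don't know whether this helps my URL queries at all.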