3
ALTER TABLE customers ADD split INT(1);

10 million records... I've been running this command for an hour and it's still going. Is there any way to make it finish faster?

Michael
  • 6,377
  • 14
  • 59
  • 91
  • What load do you have currently on this server? – wlk Mar 14 '11 at 18:00
  • Also, how many indexes on the table... If you have many, it has to rebuild all the index pages too... 10 million is nothing to sneeze at to first alter the table, then reindex impact. – DRapp Mar 14 '11 at 18:17
  • Which storage engine are you using? MyISAM? Innodb? There are some settings you can change to make this faster, but they depend on the storage engine. – Ike Walker Mar 14 '11 at 18:41
  • did you manage to beat 6 mins ? – Jon Black Mar 17 '11 at 14:17
  • 1
    Consider tuning innodb_log_file_size (But be careful, see http://www.mysqlperformanceblog.com/2011/07/09/how-to-change-innodb_log_file_size-safely/) and innodb_log_buffer_size. For more information, see my answer here: http://stackoverflow.com/a/12688184/1148030 – Peter Lamberg Oct 02 '12 at 10:07
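
For reference, a quick way to see where those two settings currently stand (values are purely illustrative, not a recommendation for this server; innodb_log_buffer_size is only dynamic on MySQL 8.0+, while innodb_log_file_size still needs a my.cnf edit plus a restart, per the linked article):

show variables like 'innodb_log_file_size';
show variables like 'innodb_log_buffer_size';

-- illustrative only: raise the log buffer for the duration of a big bulk load
set global innodb_log_buffer_size = 64 * 1024 * 1024;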

2 Answers

7

The following approach is pretty quick: it takes a little over 6 minutes with 10 million rows. The example table has fewer fields and indexes than your production table, so expect it to take somewhat longer in your case if you decide to use it!

Note: the example was done on Windows, so you'll have to change the pathnames and \r\n to \n to conform to Linux conventions!

Here's my existing table (InnoDB engine):

drop table if exists customers;
create table customers
(
customer_id int unsigned not null auto_increment primary key,
name varchar(255) not null,
country_id tinyint unsigned not null default 0,
key (country_id)
)
engine=innodb;

mysql> select count(*) from customers;
+----------+
| count(*) |
+----------+
| 10000000 |
+----------+
1 row in set (1.78 sec)

Create a new version of the table which includes the new field you require:

drop table if exists customers_new;
create table customers_new
(
customer_id int unsigned not null auto_increment primary key,
name varchar(255) not null,
country_id tinyint unsigned not null default 0,
split tinyint not null default 0,
key (country_id)
)
engine=innodb;

Find the directory the server allows for file import/export (the Uploads folder on a default Windows install):

select @@secure_file_priv;

Export the data in primary key order from the old customers table into a pipe-delimited file:

select * into outfile 'C:\\ProgramData\\MySQL\\MySQL Server 8.0\\Uploads\\customers.dat'
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\r\n'
from customers order by customer_id;

Query OK, 10000000 rows affected (17.39 sec)
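
As the note above says, on Linux you'd change the pathname and the line terminator. A sketch, assuming the common default secure_file_priv location /var/lib/mysql-files/ (the LOAD DATA statement below changes in the same way):

select * into outfile '/var/lib/mysql-files/customers.dat'
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\n'
from customers order by customer_id;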

Load the customers.dat file into the new customer table:

truncate table customers_new;

set autocommit = 0;

load data infile 'C:\\ProgramData\\MySQL\\MySQL Server 8.0\\Uploads\\customers.dat'
into table customers_new
fields terminated by '|' optionally enclosed by '"'
lines terminated by '\r\n'
(
customer_id,
name,
country_id
-- the file has no field for the new split column, so it keeps its default of 0
)
set
name = nullif(name,'');

commit;

Query OK, 10000000 rows affected (6 min 0.14 sec)
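
If that's still too slow, the MySQL manual's bulk-loading tips for InnoDB also suggest switching off uniqueness and foreign key checks around the load. A sketch (only safe here because the data comes straight from the old table, and remember to switch them back on afterwards):

set unique_checks = 0;
set foreign_key_checks = 0;

-- run the load data infile statement from above here

set unique_checks = 1;
set foreign_key_checks = 1;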

Confirm that the new table looks okay:

select * from customers_new order by customer_id desc limit 1;
+-------------+-------------------+------------+-------+
| customer_id | name              | country_id | split |
+-------------+-------------------+------------+-------+
|    10000000 | customer 10000000 |        218 |     0 |
+-------------+-------------------+------------+-------+
1 row in set (0.00 sec)

insert into customers_new (name, country_id, split) values ('f00',1,1);
Query OK, 1 row affected (0.07 sec)

select * from customers_new order by customer_id desc limit 1;
+-------------+------+------------+-------+
| customer_id | name | country_id | split |
+-------------+------+------------+-------+
|    10000001 | f00  |          1 |     1 |
+-------------+------+------------+-------+
1 row in set (0.00 sec)

Drop the old table and rename the new one:

drop table customers;
Query OK, 0 rows affected (0.18 sec)

rename table customers_new to customers;
Query OK, 0 rows affected (0.05 sec)

select * from customers order by customer_id desc limit 1;
+-------------+------+------------+-------+
| customer_id | name | country_id | split |
+-------------+------+------------+-------+
|    10000001 | f00  |          1 |     1 |
+-------------+------+------------+-------+
1 row in set (0.00 sec)
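
One caveat with drop-then-rename: there is a brief window where no customers table exists at all. On a live system you could instead swap both names in a single atomic statement and keep the old table around as a backup; a sketch:

rename table customers to customers_old, customers_new to customers;

-- once you're happy with the result
drop table customers_old;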

That's all folks!

Pikamander2
  • 7,332
  • 3
  • 48
  • 69
Jon Black
  • 16,223
  • 5
  • 43
  • 42
1

Probably not through code alone.
Close all other apps and make sure nothing else is eating your CPU (no malware?), or get a faster machine.
It would help if you could tell us about the environment you're working with; it could also be a number of things involving your network, your server, etc.

Nick Rolando
  • 25,879
  • 13
  • 79
  • 119
  • 1
    Something tells me if it's 10 million rows, it's either a high volume server with lots of traffic, or part of a data server of a really cheap company with nothing else going on. This looks more like a case "I need more power, Scotty!". – Mike S Mar 14 '11 at 18:08