
We have a system with one database per division, and we currently have 20+ divisions.

So whenever we have to update, delete, or alter a table, or create a new one, we have to go through all of those databases and run the queries.

Sometimes people don't follow the procedure (always?) and we end up with structures that aren't up to date.

I was looking for a way to launch the same queries on all databases without having to use bash or external scripts.

So here is some of the stuff that I found:

CALL FOR EACH("SELECT databases WHERE `DATABASE` LIKE 'division_%'" , ${1});

where I could enter a query in the ${1}

or this (less dynamic):

call $('{a, b}', 'ALTER TABLE division_${1}.caching ADD COLUMN notes VARCHAR(4096) CHARSET utf8');

But this gives me "No database selected".

Any idea on how to proceed in this situation?

Tarek
  • Not a useful response here, but really this should be the same database if they are part of the same system. Slightly more useful response, if you look into prepared statements, they may be able to help you ( http://dev.mysql.com/doc/refman/5.6/en/sql-syntax-prepared-statements.html ) – Simon at The Access Group Jun 26 '14 at 17:12
  • @Simonatmso.net I agree ... We got this beautiful project from another company ... Anyway, it has pros and cons (a lot of entries, 650k+ per table) ... Will post a solution that I found soon. – Tarek Jun 26 '14 at 19:29
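
For reference, the prepared-statement mechanism mentioned in the comments boils down to building the SQL text in a user variable and then running PREPARE / EXECUTE / DEALLOCATE on it. A minimal sketch (division_1 is just an example schema name):

SET @q = CONCAT('ALTER TABLE division_1.caching ADD COLUMN notes VARCHAR(4096)');
PREPARE stmt FROM @q;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;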

2 Answers


This is a solution I found, and it works:

USE division_global;

DELIMITER $$

CREATE PROCEDURE `MultipleSchemaQuery`()
BEGIN
    declare scName varchar(250);
    declare q varchar(2000);

    DROP TABLE IF EXISTS ResultSet;
    -- Note: ResultSet comes from the example this is based on; nothing is
    -- inserted into it here, so the final SELECT returns an empty set.
    create temporary table ResultSet (
     option_value varchar(200)
    );

    DROP TABLE IF EXISTS MySchemaNames;
    create temporary table MySchemaNames (
        schemaName varchar(250)
    );

    insert into MySchemaNames
    SELECT distinct
        TABLE_SCHEMA as SchemaName
    FROM 
        `information_schema`.`TABLES`  
    where 
         TABLE_SCHEMA LIKE 'division_%';

label1:
    LOOP
        set scName = (select schemaName from MySchemaNames limit 1);
        -- The query to run against each schema
        set @q = concat('TRUNCATE TABLE ', scName, '.caching');
        PREPARE stmt1 FROM @q;
        EXECUTE stmt1;
        DEALLOCATE PREPARE stmt1;

        delete from MySchemaNames where schemaName = scName;
        IF ((select count(*) from MySchemaNames) > 0) THEN
            ITERATE label1;
        END IF;
        LEAVE label1;

    END LOOP label1;

    SELECT * FROM ResultSet;

    DROP TABLE IF EXISTS MySchemaNames;
    DROP TABLE IF EXISTS ResultSet;
END
$$
DELIMITER ;
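
Once created (and the delimiter reset back to ;), a minimal usage sketch is simply:

-- Truncates the caching table in every schema matching 'division_%'
CALL MultipleSchemaQuery();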

Inspired by this:

Querying multiple databases at once

Tarek

You will need to use a stored procedure and some prepared statements as Simon pointed out in the comments:

cat procedure.sql
DELIMITER $$

DROP PROCEDURE IF EXISTS alter_all $$

CREATE PROCEDURE alter_all()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE _schema VARCHAR(30);
  DECLARE cur CURSOR FOR select SCHEMA_NAME from information_schema.SCHEMATA where SCHEMA_NAME like 'division_%';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done := 1;

  OPEN cur;

  alterLoop: LOOP
    FETCH cur into _schema;
    if done = 1 THEN
      LEAVE alterLoop;
    END IF;
    SET @mystmt = concat('ALTER TABLE ', _schema, '.caching ADD COLUMN notes VARCHAR(4096)');
    PREPARE stmt3 FROM @mystmt;
    EXECUTE stmt3;
    DEALLOCATE PREPARE stmt3;
  END LOOP alterLoop;

  CLOSE cur;

END $$

DELIMITER ;

With that, let's try it (using Server version: 5.5.35-0ubuntu0.12.04.2 (Ubuntu)):

> create schema division_1 default character set 'UTF8';
> create table division_1.caching (id int not null auto_increment primary key, value varchar(10));
> create schema division_2 default character set 'UTF8';
> create table division_2.caching (id int not null auto_increment primary key, value varchar(10));
> use division_1;
> source procedure.sql
> CALL alter_all();
Query OK, 0 rows affected, 1 warning (0.05 sec)
> desc caching;
+-------+---------------+------+-----+---------+----------------+
| Field | Type          | Null | Key | Default | Extra          |
+-------+---------------+------+-----+---------+----------------+
| id    | int(11)       | NO   | PRI | NULL    | auto_increment |
| value | varchar(10)   | YES  |     | NULL    |                |
| notes | varchar(4096) | YES  |     | NULL    |                |
+-------+---------------+------+-----+---------+----------------+
> desc division_2.caching;
+-------+---------------+------+-----+---------+----------------+
| Field | Type          | Null | Key | Default | Extra          |
+-------+---------------+------+-----+---------+----------------+
| id    | int(11)       | NO   | PRI | NULL    | auto_increment |
| value | varchar(10)   | YES  |     | NULL    |                |
| notes | varchar(4096) | YES  |     | NULL    |                |
+-------+---------------+------+-----+---------+----------------+
Pablo Carranza
  • It looks like the one I added, but what I wanted is more for multiple queries (sometimes we have over 100 queries to run). – Tarek Jul 05 '14 at 15:59
  • Then you could pass the queries to run as proc parameters, or store the queries in a table and load them from there, instead of doing the concat for one single case. – Pablo Carranza Jul 05 '14 at 18:04
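
A rough sketch of the variant Pablo describes, where the statement to run is passed in as a parameter; the procedure name run_on_all_divisions and the {db} placeholder are illustrative, not part of the answers above:

DELIMITER $$

DROP PROCEDURE IF EXISTS run_on_all_divisions $$

CREATE PROCEDURE run_on_all_divisions(IN tpl TEXT)
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE _schema VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT SCHEMA_NAME FROM information_schema.SCHEMATA
    WHERE SCHEMA_NAME LIKE 'division_%';
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done := 1;

  OPEN cur;
  runLoop: LOOP
    FETCH cur INTO _schema;
    IF done = 1 THEN
      LEAVE runLoop;
    END IF;
    -- Substitute the current schema name into the template and execute it.
    SET @stmt = REPLACE(tpl, '{db}', _schema);
    PREPARE s FROM @stmt;
    EXECUTE s;
    DEALLOCATE PREPARE s;
  END LOOP runLoop;
  CLOSE cur;
END $$

DELIMITER ;

-- Example: run one statement against every division schema.
-- CALL run_on_all_divisions('ALTER TABLE {db}.caching ADD COLUMN notes VARCHAR(4096)');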