
I'm writing a test framework in which I need to capture a MySQL database state (table structure, contents etc.).

I need this to implement a check that the state was not changed after certain operations. (Autoincrement values may be allowed to change, but I think I'll be able to handle this.)

The dump should preferably be in a human-readable format, ideally SQL, like mysqldump produces.

I wish to limit my test framework to using a MySQL connection only. To capture the state, it should not call mysqldump or access the filesystem (e.g. copy *.frm files or do SELECT ... INTO OUTFILE); pipes are fine, though.

As this would be test-only code, I'm not concerned about performance. I do need reliable behavior, though.

What is the best way to implement the functionality I need?

I guess I should base my code on some of the existing open-source backup tools... Which is the best one to look at?

Update: I'm not specifying the language I write this in (no, it's not PHP), as I don't think I would be able to reuse code as-is; my case is rather special (for practical purposes, let's assume the MySQL C API). The code will run on Linux.

Alexander Gladysh

3 Answers


Given your requirements, I think you are left with something like this (pseudo-code + SQL):

tables = mysql_fetch "SHOW TABLES"
foreach table in tables
    create = mysql_fetch "SHOW CREATE TABLE table"
    print create
    rows = mysql_fetch "SELECT * FROM table"
    foreach row in rows
        // or could use the multi-row VALUES (v1, v2, ...), (v1, v2, ...), ... syntax (maybe preferable for smaller tables)
        insert = "INSERT INTO table (field1, field2, field3, ...) VALUES (value1, value2, value3, ...)"
        print insert

Basically, fetch the list of all tables, then walk each table and generate INSERT statements for each row by hand (most APIs have a simple way to fetch the list of column names; otherwise you can fall back to DESCRIBE table_name).

SHOW CREATE TABLE is done for you, but I'm fairly certain there is nothing analogous along the lines of a "SHOW INSERT ROWS".

And of course, instead of printing the dump you could do whatever you want with it.
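
For illustration, a rough sketch of this approach with the MySQL C API (which the question mentions) might look like the following. It is only a sketch, not a complete dumper: it assumes an already-open connection, quotes every non-NULL value as a string, omits the column list in the INSERTs (as mysqldump does by default), and ignores views, triggers and other metadata.

/* Rough sketch: dump CREATE TABLE statements plus INSERTs over one connection.
 * Assumes an already-connected MYSQL *conn; error handling is deliberately minimal. */
#include <stdio.h>
#include <mysql/mysql.h>

static void dump_table(MYSQL *conn, const char *table)
{
    char query[1024];
    MYSQL_RES *res;
    MYSQL_ROW row;

    /* Structure: SHOW CREATE TABLE returns (table_name, create_statement). */
    snprintf(query, sizeof(query), "SHOW CREATE TABLE `%s`", table);
    if (mysql_query(conn, query) != 0)
        return;
    res = mysql_store_result(conn);
    if ((row = mysql_fetch_row(res)) != NULL)
        printf("%s;\n\n", row[1]);
    mysql_free_result(res);

    /* Contents: one INSERT per row, column list omitted as mysqldump does. */
    snprintf(query, sizeof(query), "SELECT * FROM `%s`", table);
    if (mysql_query(conn, query) != 0)
        return;
    res = mysql_store_result(conn);
    unsigned int ncols = mysql_num_fields(res);

    while ((row = mysql_fetch_row(res)) != NULL) {
        unsigned long *lengths = mysql_fetch_lengths(res);
        printf("INSERT INTO `%s` VALUES (", table);
        for (unsigned int i = 0; i < ncols; i++) {
            if (i > 0)
                printf(", ");
            if (row[i] == NULL) {
                printf("NULL");
            } else {
                char escaped[2 * lengths[i] + 1];  /* worst case for escaping */
                mysql_real_escape_string(conn, escaped, row[i], lengths[i]);
                printf("'%s'", escaped);
            }
        }
        printf(");\n");
    }
    mysql_free_result(res);
}

static void dump_all_tables(MYSQL *conn)
{
    MYSQL_RES *tables;
    MYSQL_ROW row;

    if (mysql_query(conn, "SHOW TABLES") != 0)
        return;
    tables = mysql_store_result(conn);
    while ((row = mysql_fetch_row(tables)) != NULL)
        dump_table(conn, row[0]);
    mysql_free_result(tables);
}

Because mysql_store_result buffers the table list on the client, it is safe to issue the per-table queries while iterating over it.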

Rob Van Dam
  • You wouldn't miss any data from your tables because `SHOW CREATE TABLE` includes all columns, indexes, constraints, the last auto-increment value, etc. However, I didn't bother pulling down any metadata like views, users, grants, etc. If that information matters to you, there are `SHOW CREATE VIEW` and `SHOW GRANTS`, or you can pull the relevant tables from the `mysql` database. – Rob Van Dam Jan 09 '10 at 17:15

If you don't want to use command-line tools (in other words, you want to do it completely within, say, PHP or whatever language you are using), then why not iterate over the tables using SQL itself? For example, to check the table structure, one simple technique would be to capture a snapshot of it with SHOW CREATE TABLE table_name, store the result, and then later make the call again and compare the results.
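
As a rough illustration of that snapshot-and-compare idea with the MySQL C API (the API the question assumes), something like this could work; the function name is made up and error handling is minimal:

/* Sketch: take a snapshot of a table's structure and compare it later.
 * Assumes an open MYSQL *conn; the caller owns (and frees) the returned string. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mysql/mysql.h>

static char *structure_snapshot(MYSQL *conn, const char *table)
{
    char query[512];
    char *snapshot = NULL;
    MYSQL_RES *res;
    MYSQL_ROW row;

    snprintf(query, sizeof(query), "SHOW CREATE TABLE `%s`", table);
    if (mysql_query(conn, query) != 0)
        return NULL;
    res = mysql_store_result(conn);
    if ((row = mysql_fetch_row(res)) != NULL)
        snapshot = strdup(row[1]);          /* column 1 holds the CREATE statement */
    mysql_free_result(res);
    return snapshot;
}

/* Usage in a test:
 *   char *before = structure_snapshot(conn, "some_table");
 *   ...run the operation under test...
 *   char *after  = structure_snapshot(conn, "some_table");
 *   int unchanged = before && after && strcmp(before, after) == 0;
 */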

Have you looked at the source code for mysqldump? I am sure most of what you want would be contained within that.

DC

DeveloperChris
  • Well, mysqldump is 5K+ lines of GPL-ed C code with a lot of special cases... http://bazaar.launchpad.net/~mysql/mysql-server/mysql-5.1/annotate/head:/client/mysqldump.c If I'm down to adapting it for my needs... I'd rather call it from the command line then. (I'd still like to avoid this.) – Alexander Gladysh Jan 08 '10 at 01:04
  • If you are open to command-line utilities, check out the excellent Maatkit: http://www.maatkit.org/tools.html – DeveloperChris Jan 08 '10 at 01:29
  • Thanks, but if I'm down to using the command line, I'll use mysqldump to reduce external dependencies. I'd still like to find an SQL-query-only solution. – Alexander Gladysh Jan 08 '10 at 11:51
  • It would appear then that you are back to writing your own queries. But Maatkit has some very good functions designed to do something similar to what you want, so browsing the source should help; it's written in Perl, so it should be relatively easy for anyone accustomed to C to read. In particular, look at mk-table-checksum for comparing tables. – DeveloperChris Jan 09 '10 at 01:20

Unless you build the export yourself, I don't think there is a simple solution to export and verify the data. If you do it table by table, LOAD DATA INFILE and SELECT ... INTO OUTFILE may be helpful.

I find it easier to rebuild the database for every test. At least then I know the exact state of the data. Of course, it takes more time to run those tests, but it's a good incentive to abstract away the operations and write fewer tests that depend on the database.

Another alternative I use on some projects, where the design does not allow such a clean division, is using InnoDB or some other transactional storage engine. As long as you keep track of your transactions, or disable them during the test, you can simply start a transaction in setUp() and roll it back in tearDown().
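
A minimal sketch of that pattern with the MySQL C API, assuming InnoDB tables and an already-open connection (the setup/teardown function names are purely illustrative):

/* Sketch: wrap each test in a transaction that is always rolled back.
 * Assumes InnoDB tables and an already-connected MYSQL *conn. */
#include <mysql/mysql.h>

static void test_setup(MYSQL *conn)
{
    mysql_query(conn, "SET autocommit = 0");  /* prevent implicit commits of DML */
    mysql_query(conn, "START TRANSACTION");
}

static void test_teardown(MYSQL *conn)
{
    mysql_query(conn, "ROLLBACK");            /* undo everything the test changed */
}

Keep in mind that DDL statements (CREATE, ALTER, DROP) cause an implicit commit in MySQL, so this only protects ordinary data changes.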

Louis-Philippe Huberdeau
  • Thanks, I do not want to use INFILE and OUTFILE, since I do not want to (explicitly) touch the filesystem from my framework code. – Alexander Gladysh Jan 08 '10 at 00:45
  • As for rebuilding the database: yes, I'm doing the same. The problem is that I have to test custom "rollback" logic (not related to SQL transactions), where I "commit" several changes to the DB and then "roll them back" one by one, all in a single test. – Alexander Gladysh Jan 08 '10 at 00:48