
We are wondering if we can run a command that clones our prod_addressdb in place: constraints, table schemas, and then all the data as well.

This would avoid any real data transfer and help us immensely in the short term. Is this even possible without doing a mysqldump that transfers everything to my slow machine and then an import, which is very slow from my machine?

And then later, a point-to-point copy like mysqlinstance.prod_addressdb -> mysqlstaginginstance.staging_addressdb would be super nice as well.

Is there a way to do any of this?

We are on Google Cloud, and the export dumps a file containing "CREATE DATABASE prod_addressdb", so when we try to import it, it fails to go into the staging_addressdb location :(. The exported file is in Cloud Storage, and I don't know of a way to automatically find and replace every prod_addressdb with staging_addressdb :(. We are looking at this problem from many different angles, trying to create a pre-production testing setup: deploy prod to staging, then upgrade and test.

thanks, Dean

Dean Hiller
  • Dump source database (by mysqldump, for example), including all objects but CREATE DATABASE statement. Create new database and restore the dump into it. – Akina Aug 05 '20 at 18:50
  • where can I run that command from @Akina in google cloud? I can't seem to pass that in through google api as ideally a dump goes to google cloud storage and then back to DB. We are doing this with google cloud build so not sure the limits of disk space on cloud build to do the dump ourselves not to mention what ips to whitelist...it gets real hairy which is why I was trying a direct command. – Dean Hiller Aug 05 '20 at 18:51
  • And for Google Cloud, you can just read [the documentation](https://cloud.google.com/sql/docs/mysql/clone-instance). – Akina Aug 05 '20 at 18:58
  • lol, me and my colleague have already battled the documentation quite a bit before posting this. cloning an instance is something we didn't think of (we were trying to clone the db in an instance) so we never even looked into that doc yet...interesting pivot for sure. – Dean Hiller Aug 05 '20 at 19:57
  • holy OMG @kmoser , I think that is an awesome option....I was worried about downloading the whole DB onto my Google Cloud Build. I may be able to build on that to read the Cloud Storage backup, modify the db names, and pipe it straight through, so that Cloud Build would see all the traffic but never use any disk. This may be exactly what I was searching for to point me in the right direction. I would have marked this as correct just because it's 'close enough' to get me going again!!! – Dean Hiller Aug 06 '20 at 01:36
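The streaming approach discussed in the comments can be sketched roughly as follows. This assumes the dump lives in Cloud Storage and that `gsutil`, GNU `sed`, and the `mysql` client are available on the build machine; the bucket, file, host, and credential names here are made up:

```shell
# Stream the dump out of Cloud Storage, rewrite the schema name on the fly,
# and feed it straight into the staging instance -- nothing is written to
# local disk on the build machine.
gsutil cat gs://my-backup-bucket/prod_addressdb.sql \
  | sed 's/\bprod_addressdb\b/staging_addressdb/g' \
  | mysql --host="$STAGING_DB_HOST" --user="$DB_USER" --password="$DB_PASS"
```

The `\b` word boundaries keep `sed` from touching names that merely contain the string. For cloning an entire Cloud SQL instance server-side (the approach in the documentation Akina linked), `gcloud sql instances clone SOURCE DESTINATION` avoids moving the data through your machine at all.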

1 Answer


There is no single SQL statement to clone multiple tables.

You can use CREATE TABLE <newschema>.<newtable> LIKE <oldschema>.<oldtable> but this doesn't include foreign keys.

You can use SHOW CREATE TABLE <oldtable> and then execute it in the new schema. That includes foreign keys.

You will find it's easier to load data into the new tables before you create foreign keys. Then run ALTER TABLE to add the foreign keys after you're done copying data.

Then you can copy data from an old table to a new table, one table at a time:

INSERT INTO <newschema>.<newtable> SELECT * FROM <oldschema>.<oldtable>;

Note that this locks the rows you're reading in the old table while the copy runs. I'm not sure of the size of your data or how long this would take on your cloud instance. But it does achieve the goal of avoiding a data transfer to the client and back to the server.
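Putting the steps above together, a sketch with made-up schema, table, and column names (an `orders` table referencing `customers`):

```sql
-- 1. Create the new tables without foreign keys. CREATE TABLE ... LIKE
--    omits them anyway; alternatively, take the output of SHOW CREATE TABLE
--    and strip the FOREIGN KEY clauses before running it in the new schema.
CREATE TABLE staging_addressdb.customers LIKE prod_addressdb.customers;
CREATE TABLE staging_addressdb.orders    LIKE prod_addressdb.orders;

-- 2. Copy the data while no foreign keys can get in the way:
INSERT INTO staging_addressdb.customers SELECT * FROM prod_addressdb.customers;
INSERT INTO staging_addressdb.orders    SELECT * FROM prod_addressdb.orders;

-- 3. Re-add the foreign keys once the data is in place:
ALTER TABLE staging_addressdb.orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES staging_addressdb.customers (id);
```

Copy parent tables before child tables so the data is consistent by the time the constraints are re-added.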

Bill Karwin
  • Actually, @kmoser pointed me to an amazing post where I can pipe data through my cloud build instance which is quite amazing. It might just work. – Dean Hiller Aug 06 '20 at 01:36
  • and longer term, would work too when I have a separate staging instance (kinda pinching pennies at this startup right now). – Dean Hiller Aug 06 '20 at 01:37