
I have to dump MySQL databases and transfer them to another server (I'm using mysqldump and rsync). Right now I do it step by step:

  1. dump database A (~25 minutes)
  2. transfer database A (~30 minutes)
  3. dump database B (~35 minutes)
  4. transfer database B (~45 minutes)
  5. ...

So for 7 databases the whole process takes about 10 hours. This is my script:

bazy=("ast" "biog" "dbaut" "tran2" "dest" "recor" "senti")
place="/backup"


function dump { /usr/bin/mysqldump  -u $user -p$password $baza  | gzip -$cl -c > $place/$baza.sql.gz; }

function transfer { rsync -Pav -e "ssh -i /usr/src/migration/ky" $place/$baza.sql.gz ro***@$dst:/backup/; }

function remove { rm $place/$baza.sql.gz -f; }

for baza in ${bazy[@]}; do
        echo $baza
        dump
        transfer
        remove
done

Is there any way to do this in parallel, i.e. dump database B while database A is still being transferred, and so on?

yarnsik
  • Have you tried any of the (many, many) Q&A entries we already have on the topic? Can you ask about a narrow, specific problem you encountered trying to do so? – Charles Duffy Sep 07 '22 at 22:21
  • ...btw, you're missing a lot of quotes; it's not related to your question, but running your code through http://shellcheck.net/ and fixing what it finds would do you good. – Charles Duffy Sep 07 '22 at 22:22
  • ...to summarize the aforementioned duplicates (albeit trying to do all three at once without any kind of rate limiting or locking): `for baza in "${bazy[@]}"; do ( dump; transfer; remove ) & done` – Charles Duffy Sep 07 '22 at 22:23
  • ...if you want to only allow one dump at a time / one transfer at a time / one remove at a time, that's a case for advisory locking, and we have lots of Q&A about how to do that too. If you want to limit the number of parallel processes, that's a use case for `xargs -P`, and that too we have a lot of existing Q&A entries about. You can mix-and-match those: using `xargs -P` to limit the total number of databases being processed at a time _and also_ using `flock` to limit the number of databases being dumped at a time to 1, f/e. – Charles Duffy Sep 07 '22 at 22:24
  • BTW, the _easiest_ way to do this is to make a script that takes a single database name as an argument; then you can give xargs your list of databases on input and tell it to run as many copies of that script at a time as you want databases being dumped/transferred/removed in parallel. – Charles Duffy Sep 07 '22 at 22:30
  • (btw, consider some grumbling to have happened about the choice to use the 1980s ksh `function name {` syntax instead of the modern (standardized in the 1990s) POSIX `name() {`). – Charles Duffy Sep 07 '22 at 22:32
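
Putting the comments above together: a minimal, untested sketch of the "background each database and serialize with flock" idea. It assumes $user, $password, $cl and $dst are already set as in the original script, and that util-linux flock(1) is installed. Two separate lock files keep at most one dump and at most one transfer running at any moment, while a dump of one database may overlap the transfer of another:

#!/usr/bin/env bash
bazy=("ast" "biog" "dbaut" "tran2" "dest" "recor" "senti")
place="/backup"

dump()     { /usr/bin/mysqldump -u "$user" -p"$password" "$1" | gzip -"$cl" -c > "$place/$1.sql.gz"; }
transfer() { rsync -Pav -e "ssh -i /usr/src/migration/ky" "$place/$1.sql.gz" "ro***@$dst:/backup/"; }
remove()   { rm -f "$place/$1.sql.gz"; }

for baza in "${bazy[@]}"; do
  (
    # only one mysqldump at a time...
    { flock 9; dump "$baza"; }     9>"$place/.dump.lock"
    # ...and only one rsync at a time, but a dump of B may run while A is transferred
    { flock 8; transfer "$baza"; } 8>"$place/.transfer.lock"
    remove "$baza"
  ) &
done
wait   # block until all seven pipelines have finished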

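Alternatively, following the last comment, a hypothetical wrapper script (the name backup_one.sh and the -P2 limit below are only placeholders) handles exactly one database, and xargs drives the parallelism:

#!/usr/bin/env bash
# backup_one.sh -- dump, transfer and remove a single database named in $1
# assumes user, password, cl and dst are exported in the environment
set -euo pipefail

baza=$1
place="/backup"

/usr/bin/mysqldump -u "$user" -p"$password" "$baza" | gzip -"$cl" -c > "$place/$baza.sql.gz"
rsync -Pav -e "ssh -i /usr/src/migration/ky" "$place/$baza.sql.gz" "ro***@$dst:/backup/"
rm -f "$place/$baza.sql.gz"

Called like this, at most two databases are processed at once, and the next one starts as soon as a slot frees up:

printf '%s\n' ast biog dbaut tran2 dest recor senti | xargs -n1 -P2 ./backup_one.sh

Note that with -P2 two dumps can also coincide; if the dumps themselves must stay serialized, combine this with the flock trick from the previous sketch.
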
0 Answers