5

I wrote a Bash script for uploading backup files from a server to an FTP server, but I always get an error:

Name (myftpserver:root): Permission denied.
Login failed.
Login with USER first.
Please login with USER and PASS.
Local directory now /backup01
Please login with USER and PASS.
Passive mode refused.

That's my script:

#!/bin/bash

DATE=$(date +%Y-%m-%d_%H%M)
LOCAL_BACKUP_DIR="/backup01"
DB_NAME="databasename"
DB_USER="root"

FTP_SERVER="randomIP"
FTP_USERNAME="myname"
FTP_PASSWORD="supersecret"
FTP_UPLOAD_DIR="/home/mydirectory/ftp/upload"

LOG_FILE=/backup01/backup-$DATE.log

mysqldump -u "$DB_USER" "$DB_NAME" | gzip > "$LOCAL_BACKUP_DIR/$DATE-$DB_NAME.sql.gz"

ftp $FTP_SERVER << END_FTP
quote USER $FTP_USERNAME 
quote PASS $FTP_PASSWORD
cd $FTP_UPLOAD_DIR
lcd $LOCAL_BACKUP_DIR
put "$DATE-$DB_NAME.sql.gz"
bye
END_FTP


if [ $? -eq 0 ]
then
    echo "Database successfully uploaded to the FTP Server!"
    echo "Database successfully created and uploaded to the FTP Server!" | mail -s "Backup from $DATE" my.email@whereever.com

else
    echo "Error in database upload to Ftp Server" > $LOG_FILE
fi
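A side note on the `mysqldump` line, separate from the FTP errors: in a plain pipeline, `$?` only reflects the last command (`gzip` here), so a failed dump would still look like success. A minimal demonstration, using `false` as a stand-in for a failing `mysqldump`:

```shell
# In a plain pipeline, $? is the exit status of the LAST command,
# so a failing dump hidden behind 'gzip' looks like success.
# 'false' stands in for a failing mysqldump here.
false | gzip > /dev/null
echo "without pipefail: $?"   # prints 0 although 'false' failed

set -o pipefail               # bash: pipeline fails if any stage fails
false | gzip > /dev/null
echo "with pipefail: $?"      # prints 1
```

Putting `set -o pipefail` near the top of the script makes the dump step's failure visible to any later `$?` check.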

Maybe someone can help me, because I've tried everything I've found on the internet. I've made a .netrc file, configured vsftpd.conf, enabled passive mode, enabled the user list, and tried a lot of other things... But now I have no idea what else I have to do to make this script work the way it should. And I have no idea why it's trying to connect as root...

Maybe there is someone out there who can help. Thanks in advance.

steff
  • Your FTP client probably doesn't like the double quotes around the file name. – tripleee Nov 20 '19 at 14:49
  • You are seeing `myftpserver:root` here because the script is running as `root` when it uses `ftp` to connect to the remote FTP server. This is different from supplying the user name during an FTP connection. You can see this by using `ftp` manually. – GoinOff Nov 20 '19 at 15:05
  • Using `ftp -n` should work, like the example below. – Incrivel Monstro Verde Nov 20 '19 at 15:15
  • "Here-doc" ftp scripts are a staple, but they have no error handling. Don't use them for anything that matters. A `test` for 0 only confirms that `ftp` exited successfully, which it will happily do after reporting in its output that everything failed. I have written shell-based loops that read and respond according to a script, but it's nontrivial. If you can use `scp`, try that. If not, pull the file you just sent back to a local tempname and compare sent to pulled. If they match, you pulled the file you sent, and you can delete the temp and move on. – Paul Hodges Nov 20 '19 at 19:47
  • @PaulHodges `scp` was a good idea. It works perfectly fine. Thanks. – steff Nov 21 '19 at 08:48
  • Glad to help. How about I make that an actual answer? – Paul Hodges Nov 21 '19 at 14:00
  • You call the system `ftp` command and pass all its parameters via STDIN; that won't work reliably. Rather, put the connection details on the command line, e.g. `ftp $(command_creating_ftp_URI_with_user_and_password)`, and send the remaining commands via STDIN as you described. But there is a problem: anyone can list all running commands with their arguments using `ps -auxww`, so the password would be visible. Check the man page for how to store credentials in an external file; it is possible. Alternatively, write a small Python script; there are a lot of examples on the Internet. See https://stackoverflow.com/questions/28749534/connecting-ftp-server-using-python – Znik Jul 14 '23 at 07:56
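On storing the credentials in an external file, as the last comment suggests: `ftp` reads a `~/.netrc` file for auto-login (note this only applies when `ftp` is run *without* `-n`). A sketch, using an invented host address and the question's placeholder credentials:

```shell
# Invented host (192.0.2.10) and the question's placeholder credentials;
# substitute your own. ftp reads this file for auto-login.
cat > ~/.netrc <<'EOF'
machine 192.0.2.10
login myname
password supersecret
EOF
chmod 600 ~/.netrc   # ftp refuses a .netrc that other users can read
```

With this in place, the script can drop the `quote USER`/`quote PASS` lines entirely, so the password appears neither in the script nor in `ps` output.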

2 Answers

7

I use:

ftp -v -n >> /tmp/ftpb.log <<EOF
        open $URL
        user $USER $PASS
        binary
        put $FILE
        quit
EOF

and it works. (`-n` suppresses auto-login so the `user` command can supply the credentials, and `binary` keeps the gzip file from being mangled by ASCII-mode transfer.)

0

It's a common staple to use something like this:

$: ftp -vn <<!
> open localhost
> user foo
> put someFile
> quit
> !
ftp: connect: Connection refused
Not connected.
Not connected.
Not connected.
$: echo $?
0

Unfortunately ftp considers that it successfully reported all problems, so it exits with a happy zero.
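If you must stay with `ftp`, one workaround is to log the session and grep the log for failure strings, deciding success yourself instead of trusting the exit code. A sketch — the log path and error patterns are invented, and the `ftp` call itself is shown only as a comment so the check can be exercised against a canned failed session:

```shell
# Decide success by inspecting ftp's logged chatter, not its exit code.
# The error patterns are examples; extend them for your server's messages.
check_ftp_log() {
    if grep -qE 'Not connected|Login failed|Connection refused' "$1"; then
        echo "upload failed"
    else
        echo "upload ok"
    fi
}

# In the real script, you would first run something like:
#   ftp -vn "$FTP_SERVER" > /tmp/ftpb.log 2>&1 <<END_FTP ... END_FTP
# Here we feed the function the failed session from the transcript above:
printf 'ftp: connect: Connection refused\nNot connected.\n' > /tmp/ftpb.log
check_ftp_log /tmp/ftpb.log   # prints "upload failed"
```

This still can't catch every silent failure mode, which is why the pull-back-and-compare check below is more thorough.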

Use scp (note it takes no password argument; authentication comes from SSH keys or an agent):

if scp "$lcldir/$filename" "$usr@$svr:$dir/"
then echo "file delivered"
else echo "delivery failed"
fi

If you can't use `scp`, try something like `expect`, or write something in Perl - some way you can interactively test and confirm each step.

As a last resort, make the here-doc send the file and then pull it back to a tempfile that doesn't already exist locally. If you can successfully `cmp file1 file2` afterwards, the send must have worked OK.
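That last-resort check can be sketched like this; the ftp transfer itself is simulated with a local `cp` so only the verification logic runs, and the file names are invented:

```shell
# Pull the file just sent back to a fresh local name and compare it
# byte-for-byte with the original. 'cp' simulates the ftp put/get here.
sent=/tmp/sent.sql.gz
pulled=/tmp/pulled.sql.gz
printf 'backup payload' > "$sent"

cp "$sent" "$pulled"          # real script: an ftp 'get' into $pulled

if cmp -s "$sent" "$pulled"; then
    echo "transfer verified"
    rm -f "$pulled"           # clean up the temp copy
else
    echo "transfer corrupted or incomplete"
fi
```

If the round trip reproduces the file exactly, the upload almost certainly landed intact.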

Paul Hodges
  • At the end you should grep/sed the output and then make the decision: did you find the success string or not? Notice that inside sh and bash, `$?` is the exit status of the last command in the pipeline. Based on this example, you can filter with `| grep '^Not connected'`; if that string is found, your exit code will be 0, which means the transfer failed. – Znik Jul 14 '23 at 08:00
  • Sometimes that's all you need, so that's a great point. Sometimes it isn't, as I have addressed [here](https://stackoverflow.com/a/50493882/8656552) and [here](https://stackoverflow.com/a/66476731/8656552) and [here](https://stackoverflow.com/a/52953316/8656552) and a few other places. YMMV. Figure out what your needs are, test them well, and go for the simplest, most stable option. For me, that's almost always `scp`, especially if you can use trusted connections without having to worry about passwords. Even then, I'd probably prefer `expect` to generic `ftp`, but to each their own. – Paul Hodges Jul 14 '23 at 16:02
  • Consider using the `curl` command for FTP or other protocol file transfers. – Znik Jul 28 '23 at 07:51