I have a simple bash script that deletes a list of files from S3. It looks like this:
```bash
#!/usr/bin/bash

filename="$1"   # text file with one S3 object name per line
s3folder="$2"   # folder under data/ in the bucket

while read -r file; do
    aws s3 rm "s3://bucketname/data/${s3folder}/${file}"
done < "$filename"
```
When I run this, I see the stdout of the delete command for each line (for example: `delete: s3://bucketname/data/foldername/file.json`), so I know the command itself is well-formed and the values from my variables and the input file are landing where they should. There are no errors, but the files are still there after the script finishes.
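To double-check the values, I ran the script with tracing and printed the loop variable (the script and file names below are just what I'm using locally):

```bash
# run with tracing so every expanded command is printed before it executes
bash -x ./delete_s3.sh files.txt foldername

# inside the loop, print the value with %q so any hidden characters
# (trailing spaces, carriage returns, etc.) would show up
printf '%q\n' "$file"
```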
If I run the individual `aws s3 rm` commands manually on the command line, they work.
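For example, this exact command (taken from the output above) deletes the file when typed by hand:

```bash
aws s3 rm s3://bucketname/data/foldername/file.json
```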
I've tried every variant I could find of sourcing the bash profile, running the script as the user with S3 privileges, and so on.
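That is, things along these lines (the user and script names here are just placeholders for what I actually used):

```bash
# source the profile first, then run the script
source ~/.bash_profile && ./delete_s3.sh files.txt foldername

# run the whole thing as the account that has the S3 permissions
sudo -u s3user bash -lc './delete_s3.sh files.txt foldername'
```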
I've also tried the `--recursive --exclude --include` form of `aws s3 rm`, which has the same result:

```bash
for file in $(cat "$filename"); do
    aws s3 rm "s3://bucketname/data/${s3folder}/" --recursive --exclude "*" --include "$file"
done
```
If I run `aws s3 rm` in a script that takes a single file name as a command-line argument, that also works, so I guess it's something about the loop, but I'm not seeing what.
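For reference, that working single-file version is essentially just the loop body (the script name and arguments are placeholders):

```bash
#!/usr/bin/bash
# usage: ./delete_one.sh foldername file.json
s3folder="$1"
file="$2"
aws s3 rm "s3://bucketname/data/${s3folder}/${file}"
```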