
Resource policy for s3 bucket bucket1 is:

{
    "Version": "2012-10-17",
    "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket1/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucket1/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucket1/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}

IAM policy for bucket1 is:

   {
        "Action": [
            "s3:GetObject"
        ],
        "Resource": [
            "arn:aws:s3:::bucket1",
            "arn:aws:s3:::bucket1/*"
        ],
        "Effect": "Allow"       
   }

s3Upload() works fine.

An error occurs after performing aws s3 cp s3://url . while copying a file to a local folder.

This is a conflict between the IAM policy & the resource policy for S3.

How do I make the resource policy allow aws s3 cp?

overexchange

1 Answer


There are a few issues here. First, your bucket policy document is not valid JSON, but I assume that happened while copying it into the question.

aws s3 cp s3://url doesn't work simply because the bucket policy blocks it, which is the intended behavior in this case. Note that an explicit deny always wins. Your bucket policy denies any upload if the server-side encryption header is missing from the HTTP request. No matter how you define the IAM policy attached to a user, that user will not be able to use the mentioned command as is, due to the explicit deny.
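As a rough illustration of "explicit deny always wins" (a simplified sketch, not the real IAM evaluation engine; the statements below are stripped down to just Effect and Action):

```python
# Simplified sketch of AWS policy evaluation order: any matching explicit
# Deny overrides every Allow; with no match at all the default is deny.
def evaluate(statements, action):
    """Return 'Deny' if any Deny statement matches the action,
    else 'Allow' if an Allow statement matches,
    else the implicit default 'ImplicitDeny'."""
    decision = "ImplicitDeny"
    for stmt in statements:
        if action in stmt["Action"] or "s3:*" in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return "Deny"  # explicit deny short-circuits everything
            decision = "Allow"
    return decision

# The IAM policy may allow s3:PutObject, but a bucket-policy Deny
# (e.g. for a missing SSE header) still blocks the upload:
statements = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]},
    {"Effect": "Deny",  "Action": ["s3:PutObject"]},
]
print(evaluate(statements, "s3:PutObject"))  # Deny
print(evaluate(statements, "s3:GetObject"))  # Allow
```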

If you want to make the upload work, you just need to request server-side encryption in your CLI command using the appropriate flag, --sse AES256 (this applies when uploading objects to the S3 bucket).

aws s3 cp file.txt s3://bucket1/ --sse AES256

Other things that I have noticed:

In this part

"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket1/*",
"Condition": {
    "Bool": {
        "aws:SecureTransport": "false"
    }
}

you are denying all S3 actions if the request is not using HTTPS, but you have specified only the objects in that bucket - "Resource": "arn:aws:s3:::bucket1/*" - not the bucket itself - "Resource": "arn:aws:s3:::bucket1" - thus your statement applies only to object-level operations. Is this the intended behavior? If you want to deny all actions that are not using HTTPS, both object-level and bucket-level, then you need to change your current Resource to

"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
    "arn:aws:s3:::bucket1",
    "arn:aws:s3:::bucket1/*"
],
"Condition": {
    "Bool": {
        "aws:SecureTransport": "false"
    }
}
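To see why "arn:aws:s3:::bucket1/*" never covers bucket-level operations, here is a small sketch (an illustration only; real ARN matching in IAM is more involved, but for this case it behaves like a glob):

```python
from fnmatch import fnmatch

# "arn:aws:s3:::bucket1/*" matches object ARNs only, never the bucket ARN
# itself, so bucket-level operations (e.g. s3:ListBucket) slip past a
# statement that lists only the object pattern.
object_pattern = "arn:aws:s3:::bucket1/*"

print(fnmatch("arn:aws:s3:::bucket1/file.txt", object_pattern))  # True: object
print(fnmatch("arn:aws:s3:::bucket1", object_pattern))           # False: bucket

# Listing both ARNs covers object-level and bucket-level operations.
patterns = ["arn:aws:s3:::bucket1", "arn:aws:s3:::bucket1/*"]
print(any(fnmatch("arn:aws:s3:::bucket1", p) for p in patterns))  # True
```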

And in this section

  {
        "Action": [
            "s3:GetObject"
        ],
        "Resource": [
            "arn:aws:s3:::bucket1",
            "arn:aws:s3:::bucket1/*"
        ],
        "Effect": "Allow"       
   }

this line in your Resource - "arn:aws:s3:::bucket1" - is completely redundant, because "s3:GetObject" is an object-level operation and your statement doesn't contain any bucket-level operations. You can freely remove it, so it should look something like this:

   {
        "Action": [
            "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::bucket1/*",
        "Effect": "Allow"       
   }

UPDATE

When getting an object, be sure to specify an object key, not just the URL of the bucket.

This will work

aws s3 cp s3://bucket/file.txt .

This will fail with a 403 error

aws s3 cp s3://bucket .

If you want to download multiple files at the same time using the above command, you will need to do two things. First, update your IAM permissions to include s3:ListBucket on the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket/*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket"
        }
    ]
}

Second, you will need to specify the --recursive flag in the cp command.

aws s3 cp s3://bucket . --recursive
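As a quick sanity check of the policy above (an illustrative sketch: it only inspects the policy document, it does not call AWS), note that s3:ListBucket is a bucket-level action, so its Resource must be the bucket ARN itself, while s3:GetObject is object-level and needs the "/*" object pattern. Attaching ListBucket to "arn:aws:s3:::bucket/*" is a common mistake that still produces 403 errors when listing:

```python
import json

# Parse the IAM policy above and verify each action targets the right
# resource level: bucket ARN for ListBucket, object ARN for GetObject.
policy = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "VisualEditor0", "Effect": "Allow",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::bucket/*"},
        {"Sid": "VisualEditor1", "Effect": "Allow",
         "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::bucket"}
    ]
}
""")

resources = {s["Action"]: s["Resource"] for s in policy["Statement"]}
print(resources["s3:GetObject"].endswith("/*"))   # True: object-level action
print(resources["s3:ListBucket"].endswith("/*"))  # False: bucket-level action
```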
Matus Dubrava
  • `aws s3 cp s3://url ./ --sse AES256` is a read operation that fails to copy file to local folder. For reading file, why would you need `--sse` option? – overexchange Jul 31 '19 at 18:31
  • `Resources` syntax is failing – overexchange Jul 31 '19 at 18:54
  • I am sorry, I read `s3Upload()` so I was thinking that we were talking about uploading. Your bucket policy doesn't block any `GetObject` that goes over HTTPS. There is no need to specify `--sse` for `GetObject`, and your IAM policy is sufficient to use `GetObject`. There are a few reasons why this can fail. First, check whether you have attached those permissions to the right user. Then check whether the ARN of the bucket is correct; test whether the command still fails when you replace the current ARN with `*`. Then check for any permission boundaries that could restrict the user. – Matus Dubrava Jul 31 '19 at 19:40
  • `Invalid template path packaged-template.yml` after running `aws --debug s3 cp s3://bucket/app/branch/build/cfntemplate/packaged.yml .` – overexchange Jul 31 '19 at 20:40
  • This error is not related to permissions anymore. Are you sure that the object exists? Also, seeing an `Invalid template path` message when performing an `s3 cp` call is rather strange. – Matus Dubrava Jul 31 '19 at 20:49