
I'm working on a template animation system, so I have different folders in S3, each with different files inside (HTML, images, etc.).

What I do is:

  1. I change the folder policy like this:

    function changeFolderPolicy($folderPath, $public, $client = null) {
        // Note: $public now precedes $client; PHP does not allow a required
        // parameter after an optional one.
        if (!$client) {
            $client = getClientS3();
        }
        $effect = $public ? 'Allow' : 'Deny';

        // Single-statement policy that allows (or denies) anonymous reads
        // on every object under $folderPath.
        $policy = json_encode(array(
            'Version' => '2012-10-17',
            'Statement' => array(
                array(
                    'Sid' => 'AllowPublicRead',
                    'Action' => array(
                        's3:GetObject'
                    ),
                    'Effect' => $effect,
                    'Resource' => array(
                        "arn:aws:s3:::" . __bucketS3__ . "/" . $folderPath . "*"
                    ),
                    'Principal' => array(
                        'AWS' => array(
                            "*"
                        )
                    )
                )
            )
        ));

        // Replaces the bucket's entire policy with the one built above.
        $client->putBucketPolicy(array(
            'Bucket' => __bucketS3__,
            'Policy' => $policy
        ));
    }
    
  2. After changing the policy, the frontend gets all the necessary files.

However, sometimes some files aren't loaded because of a 403 Forbidden error. It's not always the same files; sometimes all of them are loaded, sometimes none... I don't have a clue what's happening, since putBucketPolicy is supposed to be a synchronous call.

Thank you very much.

Enric A.
  • Policies are not meant to be used dynamically to control access. If you wish to conditionally control access per request, I suggest denying access to the prefix/bucket you wish to control and serving signed URLs to allow access. https://docs.aws.amazon.com/aws-sdk-php/v3/guide/service/s3-presigned-url.html – John Jul 11 '16 at 20:26
  • Thank you for finding that, @John. – Michael - sqlbot Jul 12 '16 at 11:39

1 Answer


First, putBucketPolicy is not exactly synchronous. Validation of the policy is synchronous, but applying it takes an unspecified amount of time to replicate through the infrastructure.

There is no mechanism exposed for determining whether the policy has propagated.
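If you genuinely needed to know when a policy change had taken effect, the only option would be an empirical probe. The sketch below is a hypothetical workaround, not an API (`waitForPublicRead` and the polling interval are inventions for illustration): it polls an object's public URL with unsigned HEAD requests until it stops returning 403. Even then, a single 200 only proves that one request reached an updated part of the infrastructure, not that every subsequent request will.

    function waitForPublicRead($url, $maxAttempts = 30) {
        // Hypothetical helper: poll the object's public URL (no credentials)
        // until the newly-applied bucket policy appears to be in effect.
        for ($i = 0; $i < $maxAttempts; $i++) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD request only
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // suppress output
            curl_exec($ch);
            $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch);
            if ($status === 200) {
                return true;   // this request saw the new policy
            }
            sleep(1);          // still 403: wait and retry
        }
        return false;          // gave up; the policy may still be propagating
    }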

Second, you're using bucket policies in a way that fundamentally makes no sense.

Of course, this setup makes the implicit assumption that only one copy of this code would ever run at the same time, which is usually an unsafe assumption, even if it seems true right now. And since putBucketPolicy replaces the bucket's entire policy, two overlapping calls for different folders will simply clobber each other's statements.

But worse... toggling a prefix publicly readable so you can copy those files, then (presumably) putting it back when you're done, instead of using the service correctly by using your credentials to sign requests for the individual objects you need... frankly, if I am correctly understanding what you're doing here, I am at a loss for words to describe just how wrong this solution is.

This seems comparable to a bank manager securing the bank vault with a bicycle lock instead of using the vault's hardened, high-security, built-in access-control mechanisms because a bicycle lock "is easier to open."
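The right pattern, spelled out in the comments below, is to leave the bucket private and generate pre-signed URLs for the objects the frontend needs. Here is a minimal sketch with the AWS SDK for PHP v3 (the version the linked guide covers), reusing `$client`, `__bucketS3__`, and `$folderPath` from the question; the 15-minute expiry is an arbitrary placeholder:

    // List every object under the template's prefix and build a
    // time-limited pre-signed URL for each one.
    $urls = array();
    $pages = $client->getPaginator('ListObjects', array(
        'Bucket' => __bucketS3__,
        'Prefix' => $folderPath,
    ));
    foreach ($pages as $page) {
        foreach ($page['Contents'] ?: array() as $object) {
            $command = $client->getCommand('GetObject', array(
                'Bucket' => __bucketS3__,
                'Key'    => $object['Key'],
            ));
            // Signing happens locally in the SDK; no network round trip.
            $request = $client->createPresignedRequest($command, '+15 minutes');
            $urls[$object['Key']] = (string) $request->getUri();
        }
    }
    // Hand $urls to the frontend; no bucket policy ever has to change.

Because signing is local, generating a URL per file is cheap, and there is nothing to propagate and nothing to race against.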

Michael - sqlbot
  • Thank you for your answer, Michael. How, then, should I get a pre-signed URL for every single file inside a path? Please note that every template (folder in S3) may have a different number of files with different names. – Enric A. Jul 12 '16 at 07:32
  • Iterate through the file list and generate pre-signed URLs. You already have credentials (or you wouldn't be able to change the bucket policy as you are doing now) and that's all you need. There's a common misconception that pre-signed URLs come "from" S3, but they're generated locally, so this shouldn't slow you down. Presumably the same `$client` can [`createPresignedRequest`](https://docs.aws.amazon.com/aws-sdk-php/v3/guide/service/s3-presigned-url.html). Or, iirc, aws-cli's `aws s3 cp` can copy from S3 to local, with credentials. – Michael - sqlbot Jul 12 '16 at 11:38