4

How do I set up CORS on Amazon S3 to allow only approved domains to access a JS script in my S3 bucket? Currently, I have CORS set as follows on my S3 bucket:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>www.someapproveddomain.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Yet any domain can access and run the script hello.js (which resides in the S3 bucket), as demonstrated at JSFiddle. Does anyone know what I'm doing wrong? Or maybe I'm just misunderstanding what CORS is supposed to do?

Bob Arlof
  • 940
  • 10
  • 19
  • 1
  • Change the CORS AllowedOrigin from www.someapproveddomain.com to http://www.someapproveddomain.com and see if it takes effect – Piyush Patil Jul 20 '16 at 17:27
  • CORS allows scripts run in other origins to *read* a resource on your origin. Executing a script does not require that a cross-origin script be able to read the script. (Compare to images: you can display a cross-origin `<img>` on your page without having a script read the contents of the image.) – apsillers Jul 20 '16 at 17:47
  • "CORS allows scripts run in other origins to read a resource on your origin". Hmm... According to MDN (https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy): "Cross-origin reads are typically not allowed". – Bob Arlof Jul 20 '16 at 20:08

2 Answers

6

Instead of trying to solve this with CORS, you need to solve it using an S3 Bucket Policy that restricts access to requests with a specific HTTP referrer. This is documented here.

This is the example bucket policy given by AWS:

{
  "Version":"2012-10-17",
  "Id":"http referer policy example",
  "Statement":[
    {
      "Sid":"Allow get requests originating from www.example.com and example.com.",
      "Effect":"Allow",
      "Principal":"*",
      "Action":"s3:GetObject",
      "Resource":"arn:aws:s3:::examplebucket/*",
      "Condition":{
        "StringLike":{"aws:Referer":["http://www.example.com/*","http://example.com/*"]}
      }
    }
  ]
}
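
If you'd rather attach the policy programmatically than paste it into the console, here's a minimal sketch with the AWS SDK for JavaScript (assuming credentials permitted to call s3:PutBucketPolicy, and the example bucket name above):

// Sketch: attach the referrer-restricting policy to the bucket.
// Assumes the aws-sdk module is configured with credentials that
// may call s3:PutBucketPolicy; "examplebucket" is the example name above.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var policy = {
  Version: '2012-10-17',
  Statement: [{
    Sid: 'Allow get requests originating from www.example.com and example.com.',
    Effect: 'Allow',
    Principal: '*',
    Action: 's3:GetObject',
    Resource: 'arn:aws:s3:::examplebucket/*',
    Condition: {
      StringLike: { 'aws:Referer': ['http://www.example.com/*', 'http://example.com/*'] }
    }
  }]
};

s3.putBucketPolicy({ Bucket: 'examplebucket', Policy: JSON.stringify(policy) }, function (err) {
  if (err) console.error('Failed to attach policy:', err);
  else console.log('Policy attached.');
});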

Note that the HTTP referrer is fairly easy for an attacker to spoof.
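
To see why, note that nothing outside a browser is obliged to send an honest Referer header. A sketch using Node's built-in https module (the bucket hostname is hypothetical):

// Sketch: any non-browser client can send whatever Referer it likes,
// so the bucket policy above is a deterrent, not real security.
// The bucket hostname below is hypothetical.
var https = require('https');

https.get({
  hostname: 'examplebucket.s3.amazonaws.com',
  path: '/hello.js',
  headers: { Referer: 'http://www.example.com/' } // forged; S3 cannot tell
}, function (res) {
  res.pipe(process.stdout); // the "restricted" file comes back anyway
});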

Mark B
  • 183,023
  • 24
  • 297
  • 295
  • Thanks, Mark. Marking the script as private and then using a bucket policy to provide access does exactly what I want. I didn't expect a bulletproof solution since I doubt that's possible without authentication, but I do want deter casual access to the scripts from unapproved domains. I'm still puzzled about why CORS didn't achieve the same thing. – Bob Arlof Jul 20 '16 at 19:56
  • This is a cool feature but it is not very secure because a hacker can easily spoof the referrer. See: http://stackoverflow.com/questions/21807604/preventing-curl-referrer-spoofing – cosbor11 Apr 12 '17 at 17:54
3

I do think you're misunderstanding CORS. (Edit: which is okay!)

CORS isn't intended for server-side security.

There are much better explanations here on SO than I can give, but one of the main purposes of CORS is to prevent someone from creating a fake site (like a fake banking portal) and having it make requests to the real server with the user's real credentials. With CORS configured, the browser will only let pages from the real site's approved origins read the server's responses.
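
To make that concrete, here is a sketch of what CORS does and does not gate in the browser (the bucket URL is hypothetical):

// Run from a page on an *unapproved* origin.

// 1. EXECUTING the script is not blocked by CORS: <script> tags have
//    always been allowed to load and run cross-origin resources.
var el = document.createElement('script');
el.src = 'https://examplebucket.s3.amazonaws.com/hello.js';
document.head.appendChild(el); // hello.js runs regardless of the CORS rules

// 2. READING the script's source is gated by CORS: this request fails
//    unless the bucket's CORS rules list this page's origin.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://examplebucket.s3.amazonaws.com/hello.js');
xhr.onload = function () { console.log(xhr.responseText); };
xhr.onerror = function () { console.log('Read blocked: origin not allowed'); };
xhr.send();

That's exactly why the JSFiddle in the question can still run hello.js: executing the script never required CORS permission in the first place.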

What you probably want to look into is using IAM policies or S3 bucket policies. There's an excellent blog post from AWS here that goes over this. You'll need an EC2 instance as a sort of middle-man to access the S3 bucket and serve the files to a client.

There may be another way to do this purely with S3, but essentially, if it's publicly accessible without authentication, someone can spoof the request headers to make it appear as though the request is coming from one of your CORS-approved domains.

Lucas Watson
  • 763
  • 1
  • 8
  • 26
  • Wouldn't doing this via an IAM policy restrict access to only users that are authenticated via IAM within the OP's AWS account? I don't think that's what he is trying to accomplish here. – Mark B Jul 20 '16 at 17:43
  • Yes... oops, fixed. – Lucas Watson Jul 20 '16 at 17:45
  • "You'll need to have an EC2 instance as a sort of middle-man to access the S3 bucket and server the files to a client" Why? How does that add any protection (and how do you justify the added cost and latency) over a S3 bucket policy solution? And why say that right after linking to an article which doesn't say that at all? – Mark B Jul 20 '16 at 17:47
  • If OP wants to keep the script hidden from the public, putting it behind a simple server would be the easiest solution. – Lucas Watson Jul 20 '16 at 17:50
  • I wouldn't consider setting up a new EC2 server, securing the server, configuring Nginx, and paying every month for that server, to be the "easiest" solution. Quite the contrary, I find copy/pasting an S3 bucket policy and modifying a few lines to be much, much easier. – Mark B Jul 20 '16 at 17:55
  • Certainly, using a Bucket Policy is much better, but it doesn't keep the script private. If privacy is what OP wants, then EC2 (or ELB) would be the way to go. – Lucas Watson Jul 20 '16 at 18:15
  • 1. To restrict by "approved domains" as the OP is asking, something like Nginx would check the HTTP referrer, just like S3 is going to do. So it wouldn't be any more secure, just way more complicated and expensive. 2. Not sure why you mentioned an Elastic Load Balancer, because it would have no bearing on this issue at all. – Mark B Jul 20 '16 at 18:17
  • 1. OP also mentioned "any domain" was able to access his script, which is why I explained both S3 bucket policies and EC2, and also added authentication would be needed to make it private. 2. EB* If you'd really like to continue this please start a chat, but I think we've beat a dead horse. – Lucas Watson Jul 20 '16 at 18:47