
I'm trying to set up a static website on S3 with a custom domain, using CloudFront to handle HTTPS.

The problem is that the root path works properly, but the child paths do not.

Apparently it comes down to the default root object, which I have configured as index.html in both places.

  • example.com -> example.com/index.html - Works fine
  • example.com/about/ -> example.com/about/index.html - Fails with a NoSuchKey error

The funny thing is that if I open read access to the S3 bucket and use the S3 URL directly, everything works fine.

There is an AWS documentation page that talks about this: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html, but it doesn't offer a solution, or at least I haven't been able to find one:

However, if you define a default root object, an end-user request for a subdirectory of your distribution does not return the default root object. For example, suppose index.html is your default root object and that CloudFront receives an end-user request for the install directory under your CloudFront distribution:

http://d111111abcdef8.cloudfront.net/install/

CloudFront does not return the default root object even if a copy of index.html appears in the install directory.

If you configure your distribution to allow all of the HTTP methods that CloudFront supports, the default root object applies to all methods. For example, if your default root object is index.php and you write your application to submit a POST request to the root of your domain (http://example.com), CloudFront sends the request to http://example.com/index.php.

The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket. (A copy of the index document must appear in every subdirectory.) For more information about configuring Amazon S3 buckets as websites and about index documents, see the Hosting Websites on Amazon S3 chapter in the Amazon Simple Storage Service Developer Guide.

S3 Bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*"
        }
    ]
}

CloudFront setup: (screenshot of the CloudFront distribution configuration)

Thank you

Albert
    Turn on S3 static web hosting, and use the s3 website URL in CloudFront. – jellycsc Feb 19 '21 at 01:42
    I already have that; in fact, the root path is working fine, the error is only in the subfolders. Somehow CloudFront can reach index.html at the root path but not in the subfolders. – Albert Feb 19 '21 at 07:27
    you were right, I found the answer [here](https://stackoverflow.com/a/42285049/3002214): it turns out that setting the website URL as a custom origin in CloudFront works. I'm not completely sure it's the best option, but at least it works, thank you – Albert Feb 19 '21 at 16:27

1 Answer


This got fixed. To sum up what I did, there are two solutions:

  • Adding the S3 website URL as a custom origin in CloudFront; the tradeoff is that this forces us to open the S3 bucket to anonymous traffic.
  • Setting up a Lambda@Edge function that rewrites the requests; the tradeoff is that we also pay for the Lambda invocations.
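
For the first option, note that the S3 website endpoint doesn't support an Origin Access Identity, so the OAI-based bucket policy from the question would have to be replaced with a public-read policy. A rough sketch, assuming the bucket is still named example.com:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*"
        }
    ]
}
```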

So everyone has to decide which option fits their case better; in my case the expected traffic is super low, so I chose the second option.
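
For reference, the idea behind the second option is a Lambda@Edge function attached as an origin-request (or viewer-request) trigger that appends index.html to directory-style URIs before CloudFront fetches from S3. This is only a minimal sketch of the idea, not the exact function I deployed:

```python
def lambda_handler(event, context):
    """Rewrite directory-style URIs so CloudFront requests the index document from S3."""
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]

    if uri.endswith("/"):
        # /about/  ->  /about/index.html
        request["uri"] = uri + "index.html"
    elif "." not in uri.rsplit("/", 1)[-1]:
        # /about  ->  /about/index.html (last path segment has no file extension)
        request["uri"] = uri + "/index.html"

    return request
```

URIs that already point to a file (e.g. /css/site.css) pass through unchanged, so only "directory" requests are rewritten.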

I leave some useful links in case anybody else faces the same problem:

  • Useful Reddit thread here
  • AWS Lambda@Edge+CloudFront explained by AWS here
  • Fix to Lambda error I faced here
  • All setup process explained here
Albert
    _"Adding the S3 URL as custom origin in CloudFront, the tradeoff is that this forces us to open the S3 bucket for anonymous traffic."_ This can be mitigated with, e.g. HTTP Basic Auth passwords for S3 or listening for a specific header value. Maybe there is a more intelligent way to restrict access using some AWS internal mojo, but those two are the most popular solutions I know about. – aries1980 Jul 31 '21 at 10:58
    I'm hosting a public static website on s3, so just putting the bucket endpoint URL made it work. Before that I was using the bucket name as the origin. Thanks for the insight! – Diego Andrade Feb 08 '22 at 03:26