
Short version: I'm getting HTTPS errors on a static website in an AWS S3 bucket set up for website hosting (HTTP only, no HTTPS). It is reached via a CNAME record pointing to the S3 bucket in my AWS Route 53 hosted zone, where the A record for the apex points to a different site, which does use HTTPS.

Long version:

I have a Rails site hosted at my apex URL (idoimaging.com) on an AWS EC2 instance. Independently of this, I want to host a blog as a static Jekyll site at the subdomain blog.idoimaging.com.

To test with a simple setup, I made a minimal static subdomain site, hello.idoimaging.com. I created a test bucket named hello.idoimaging.com and put small index.html and error.html files in it. I enabled website hosting in the bucket properties and added a read-all policy to the bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow Public Access to All Objects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::hello.idoimaging.com/*"
        }
    ]
}
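For reference, the same setup can be scripted with the AWS CLI. This is a sketch of the steps described above (it assumes the CLI is installed and credentials are configured; the bucket name and documents are the ones from my setup):

```shell
# Sketch: apply the read-all policy and enable website hosting via the AWS CLI.
BUCKET=hello.idoimaging.com

# Write the policy shown above to a file.
cat > policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow Public Access to All Objects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::hello.idoimaging.com/*"
        }
    ]
}
EOF

# Attach the policy to the bucket.
aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://policy.json

# Enable static website hosting with the index and error documents.
aws s3 website "s3://$BUCKET/" --index-document index.html --error-document error.html
```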

I can visit the bucket directly at its endpoint hello.idoimaging.com.s3-website-us-east-1.amazonaws.com and I see the index.html page. All good so far.

Now I want to set a CNAME so I can visit the static site at hello.idoimaging.com. In AWS Route 53 I have a hosted zone for my idoimaging.com domain, and in that domain I created a CNAME with name 'hello.idoimaging.com' and value 'hello.idoimaging.com.s3-website-us-east-1.amazonaws.com'.
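The equivalent record creation from the AWS CLI would look roughly like this (a sketch; ZONEID is a placeholder for the idoimaging.com hosted zone ID, which I'm not reproducing here):

```shell
# Sketch: create the same CNAME record with the AWS CLI.
# ZONEID is a placeholder for the hosted zone ID of idoimaging.com.
ZONEID=Z1EXAMPLE

aws route53 change-resource-record-sets --hosted-zone-id "$ZONEID" --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "hello.idoimaging.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "hello.idoimaging.com.s3-website-us-east-1.amazonaws.com"}]
    }
  }]
}'
```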

dig results look OK:

$ dig hello.idoimaging.com
...
;; QUESTION SECTION:
;hello.idoimaging.com.      IN  A
...
;; ANSWER SECTION:
hello.idoimaging.com.   226 IN  CNAME   hello.idoimaging.com.s3-website-us-east-1.amazonaws.com.
hello.idoimaging.com.s3-website-us-east-1.amazonaws.com. 60 IN CNAME s3-website-us-east-1.amazonaws.com.
s3-website-us-east-1.amazonaws.com. 3 IN A  52.216.64.90
...
;; AUTHORITY SECTION:
s3-website-us-east-1.amazonaws.com. 1778 IN NS  ns-1133.awsdns-13.org.
s3-website-us-east-1.amazonaws.com. 1778 IN NS  ns-1919.awsdns-47.co.uk.
s3-website-us-east-1.amazonaws.com. 1778 IN NS  ns-490.awsdns-61.com.
s3-website-us-east-1.amazonaws.com. 1778 IN NS  ns-661.awsdns-18.net.

Initially, when I tried to visit hello.idoimaging.com, I just got a timeout. I read a post somewhere about right-clicking 'Make Public' on the objects in the bucket. This didn't sound right to me, as I thought that's what bucket policies are for, but when I tried it something changed. Under Permissions I now have Grantee: Everyone and permission Open/Download, and though it's still not working, I now get an HTTPS security error instead of the timeout. So it seems that 'Make Public' (which I've never had to use before) made a difference. Progress, I guess?

Using curl I can fetch hello.idoimaging.com and it retrieves the index.html file, no worries, even if I use --proto https. wget and every browser I've tried, however, won't.

All requests to hello.idoimaging.com are now forced to https://, which fails with "Your connection is not private" / "This site uses HTTP Strict Transport Security (HSTS)" and various other messages in different browsers. Is this force-to-HTTPS behaviour normal? I ask because my apex site's nginx server redirects HTTP requests to HTTPS. But if I request hello.idoimaging.com, DNS will pick up the CNAME for my S3 site, not the A record for my apex site, right? It seems they can't be related. The apex site is secured with certificates from letsencrypt.org.
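A quick way to check whether HSTS is involved is to look for the header directly with curl (a sketch; grep prints nothing and exits non-zero if the header is absent):

```shell
# Check whether the apex site's nginx sends an HSTS header.
# If a Strict-Transport-Security line with includeSubdomains appears,
# browsers that have seen it will force every *.idoimaging.com request to HTTPS.
curl -sI https://idoimaging.com/ | grep -i strict-transport-security

# The S3 website endpoint itself should send no such header:
curl -sI http://hello.idoimaging.com.s3-website-us-east-1.amazonaws.com/ \
  | grep -i strict-transport-security || echo "no HSTS header"
```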

Once I get this going I want to use CloudFront, but I'm having quite enough difficulty with the S3 site for now.

It seems the problem is that requests to hello.idoimaging.com (typed just as that) are being forced to HTTPS, and that is failing. I'm looking for advice. If the problem is that my apex site is HTTPS and the subdomain is not, it seems I'll only complicate things by trying to set up HTTPS on the subdomain, because it would use different certificates from the apex site.

All this stuff is up and live now.

idoimaging
  • I am inclined to believe that this will prove to be a duplicate of [Amazon AWS 307 response and permanent redirect to HTTPS](http://stackoverflow.com/a/28595295/1695906) -- you've configured HSTS on your base domain and it seems like *the browser* is extending it to subdomains. The service isn't doing this. It can't be, because web site hosting buckets can't do HTTPS without help from CloudFront. – Michael - sqlbot Feb 21 '17 at 00:57
  • This looks very promising! My nginx server does not have the Strict-Transport-Security header configured, but it certainly would explain the behaviour. Also why curl can fetch the index.html page but a browser can't. I'm reading up on HSTS. It seems it would be a good thing to have? As I'm going to put my S3 bucket behind CloudFront, which supports TLS, should I just move ahead with that? I'd thought it'd be simpler to start with http, maybe it's causing the problem. – idoimaging Feb 21 '17 at 01:13
  • 1
    You were right, in principle, to try the simpler configuration first, but in this case it's probably complicating things. I suspect you must have had HSTS configured at some point, for this to be happening. Browsers remember that flag once they see it. Note also, with CloudFront, [don't select the bucket name from the drop-down list](http://stackoverflow.com/a/34065543/1695906) when the bucket is configured for web site hosting. Type the web site endpoint hostname (your current CNAME target) into the origin hostname box. – Michael - sqlbot Feb 21 '17 at 03:43
  • 1
    *Using curl I can fetch hello.idoimaging.com and it retrieves the index.html file, no worries, even if I use --proto https* This can't be right. It's a sign that at least at the time you tested with curl, the DNS wasn't as shown, or wasn't resolving correctly and you were hitting a different endpoint. Curl would work because it doesn't follow redirects unless you supply the `--location` option, and S3 web site hosting would never force-redirect you to https. This implies you're hitting the wrong machine. Try `curl -v` to check what IP address it hits, and troubleshoot from there. – Michael - sqlbot Feb 21 '17 at 03:48
  • Thanks for much guidance! I think I've found something. I generated the SSL certs from letsencrypt.org and I used [this tutorial](http://do.co/2kGqQQZ) from Digital Ocean (God bless 'em, their tutorials are great). In there is the line `add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";` that explains pretty much everything. I believe I can remove the subdomain reference and I've read I can set the max-age to 0 to clear HSTS from browser caches? Still working on it. – idoimaging Feb 21 '17 at 05:21
  • Progress! I removed `includeSubdomains` from the `add_header Strict-Transport-Security` in nginx, restarted, and cleared HSTS for the site in Chrome using `chrome://net-internals/#hsts`. **Now** I can see [hello.idoimaging.com](http://hello.idoimaging.com) in http. I moved on to my actual static site [blog.idoimaging.com](http://blog.idoimaging.com) and for one glorious moment it worked in http. Then I enabled redirect to https in its CloudFront config and now I have problems with custom DNS as the SSL cert comes from cloudfront.net. I'll try to upload my letsencrypt SSL certificate. – idoimaging Feb 21 '17 at 16:47
  • lol, I thought you said you weren't using HSTS. With CloudFront, it's much easier to just get a free certificate from Amazon Certificate Manager, so you don't have to update it. Let's Encrypt certs have to be rotated every 90 days or less. – Michael - sqlbot Feb 22 '17 at 00:09
  • I've done just the AWS certificate and applied it to my CloudFront distribution, and everything works now. Thanks! If you'd like to move your comments to an answer, I'll accept it. – idoimaging Feb 22 '17 at 00:38

1 Answer

  • Did you specify index.html as the index document?
  • Also, the recommended way to set custom DNS for a bucket is to use an A record with the Alias type. In the alias target, put s3-website-us-east-1.amazonaws.com.
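From the CLI rather than the console, that alias record would look roughly like this (a sketch; ZONEID is a placeholder for your hosted zone ID, and Z3AQBSTGFYJSTF is the hosted zone ID Amazon publishes for s3-website-us-east-1.amazonaws.com -- check the current S3 endpoints table to confirm):

```shell
# Sketch: the suggested A-alias record, created via the AWS CLI.
# ZONEID is a placeholder for the idoimaging.com hosted zone ID.
ZONEID=Z1EXAMPLE

aws route53 change-resource-record-sets --hosted-zone-id "$ZONEID" --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "hello.idoimaging.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3AQBSTGFYJSTF",
        "DNSName": "s3-website-us-east-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'
```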

P.S. Regarding SSL: S3 does not offer SSL for websites with custom DNS; the only option is to add CloudFront in front.

P.S. Clear your browser's cache, or try in incognito mode.

b.b3rn4rd
  • Cheers for that. I did specify index.html. As you suggested I changed from a CNAME to an A alias with the s3 endpoint (popped up as a suggestion), but now it's timing out. I notice when I try to wget hello.idoimaging.com it says "URL transformed to HTTPS due to an HSTS policy", could this be related to the https problem? If I connect directly to the S3 bucket endpoint, it stays in http. – idoimaging Feb 21 '17 at 00:50
  • I see for you it stayed http and worked. For me after I changed from CNAME to A alias it still forces to https, and still times out. I'm trying from incognito/cleared cache. – idoimaging Feb 21 '17 at 01:01
  • I can't see a Strict-Transport-Security header in the response... in fact S3 does not support HSTS, according to: https://forums.aws.amazon.com/thread.jspa?threadID=162252 – b.b3rn4rd Feb 21 '17 at 02:45
  • That was misleading of me - it's my apex site that runs nginx and may have started with the HSTS (which I'm just now learning about), although I don't see Strict-Transport-Security in the config. @Michael-sqlbot's note above makes me believe the browser is caching the HSTS from the main site and extending it to subdomains. Which may explain why you could visit the subdomain, having never visited the top domain. Work continues... – idoimaging Feb 21 '17 at 05:11