84

I've set up Amazon S3 to serve my static site, speakeasylinguistics.com. All of the DNS stuff seems to be working okay, because dig +recurse +trace www.speakeasylinguistics.com outputs the correct DNS info.

But when you visit the site in a browser using the endpoint, the index.html page downloads, instead of being served. How do I fix this?

I've tried Chrome, Safari, FF. It happens on all of them. I used Amazon's walkthrough on hosting a custom domain to a T.

nickcoxdotme
    Running curl -I against the file returns `Content-Disposition: attachment` in the headers -- that is what is causing the problem. I *think* that is in the metadata for the file. – dc5 Aug 18 '13 at 08:00
    I solved this problem by specifying the metadata (Content-Type = text/html) when uploading the HTML file to S3. – Pedro Hidalgo Nov 16 '16 at 20:50

11 Answers

65

If you are using HashiCorp Terraform, you can specify the content type on an aws_s3_bucket_object as follows:

resource "aws_s3_bucket_object" "index" {
  bucket = "yourbucketnamehere"
  key = "index.html"
  content = "<h1>Hello, world</h1>"

  content_type = "text/html"
}

This should serve your content appropriately in the browser.

Edit 24/05/22: As mentioned in the comments on this answer, Terraform now has a module to help with uploading files and setting their content-type attribute correctly.

James G
    Thank you for the Terraform mention! – Jonathan Le Mar 12 '19 at 19:41
    Terraform documentation should have mentioned it. – Tara Prasad Gurung May 08 '20 at 03:24
  • terraform def doesn't mention this clearly. thanks for this! – shan Jun 09 '20 at 21:33
    This is exactly what I needed, thank you. Also, due to caching, if you keep retrying the website endpoint the issue will appear to persist; try it in incognito or after clearing the cache, and it will serve the content in the browser. – Amitabh Ghosh Sep 07 '20 at 07:15
  • This is exactly what I was looking for. – Jananath Banuka Mar 16 '21 at 17:10
  • This is exactly the detail I needed. Was driving me crazy! :D Thanks, mate! – Michael J Mar 30 '21 at 07:43
    and if you're using cloudfront, make sure you create an invalidation, to drop the cache – dyasny Dec 30 '21 at 17:39
  • There is a new way of identifying content type in Terraform. I followed this example, but it still downloaded the html file because the metadata - system defined - content-type was set to binary. This answer fixed the problem: https://stackoverflow.com/questions/57456167/uploading-multiple-files-in-aws-s3-from-terraform/58827910#58827910 – Charles Letcher May 23 '22 at 11:53
63

Running curl -I against the URL you posted gives the following result:

curl -I http://speakeasylinguistics.com.s3-website-us-east-1.amazonaws.com/
HTTP/1.1 200 OK
x-amz-id-2: DmfUpbglWQ/evhF3pTiXYf6c+gIE8j0F6mw7VmATOpfc29V5tb5YTeojC68jE7Rd
x-amz-request-id: E233603809AF9956
Date: Sun, 18 Aug 2013 07:58:55 GMT
Content-Disposition: attachment
Last-Modified: Sun, 18 Aug 2013 07:05:20 GMT
ETag: "eacded76ceb4831aaeae2805c892fa1c"
Content-Type: text/html
Content-Length: 2585
Server: AmazonS3

This line is the culprit:

Content-Disposition: attachment

If you are using the AWS console, I believe this can be changed by selecting the file in S3 and editing its metadata to remove this property.
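If you would rather fix it from code than the console, one option (a sketch, assuming Boto3 and placeholder bucket/key names) is an in-place copy whose REPLACE directive rewrites the object's metadata, dropping Content-Disposition entirely:

```python
def replace_metadata_kwargs(bucket, key, content_type="text/html"):
    """Build arguments for s3.copy_object: an in-place copy whose
    REPLACE directive discards the old metadata, including the
    Content-Disposition: attachment header."""
    return {
        "Bucket": bucket,
        "Key": key,
        "CopySource": {"Bucket": bucket, "Key": key},
        "ContentType": content_type,
        "MetadataDirective": "REPLACE",
    }

# usage (assumes boto3 credentials are configured):
# import boto3
# s3 = boto3.client("s3")
# s3.copy_object(**replace_metadata_kwargs("yourbucketnamehere", "index.html"))
```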

dc5
  • I'm facing this issue without the "Content-Disposition" parameter defined. However, I noticed this setting ```content-type: application/x-directory; charset=UTF-8``` and I suspect this could be the issue. Is there a way I can change it to `text/html` ? – Vishwas M.R Jun 17 '21 at 15:37
  • How to remove the property ? – Sujay U N Jan 04 '22 at 18:46
33

If you are doing this programmatically you can set the ContentType and/or ContentDisposition params in your upload.

PHP example:

      $output = $s3->putObject(array(
          'Bucket' => $bucket,
          'Key' => md5($share). '.html',
          'ContentType' => 'text/html',
          'Body' => $share,
      ));

putObject Docs

Brombomb
12

For anyone else facing this issue, there's a typo in the URL you can find under Properties > Static website hosting. For instance, the URL provided is

http://{bucket}.s3-website-{region}.amazonaws.com

but it should be

http://{bucket}.s3-website.{region}.amazonaws.com

Note the . between website and region.

CPak
11

If you are trying to upload with Boto3 (Python 3.7 or above), pass the Content-Type via ExtraArgs:

import boto3

s3 = boto3.client('s3')
s3.upload_file(local_file, bucket, s3_file, ExtraArgs={'ContentType': 'text/html'})
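If you upload more than HTML, you can avoid hard-coding the type by guessing it from the file name with Python's standard mimetypes module (a sketch; the binary/octet-stream fallback is my own choice):

```python
import mimetypes

def guess_content_type(filename, default="binary/octet-stream"):
    """Guess a Content-Type from the file extension; fall back to a default."""
    content_type, _encoding = mimetypes.guess_type(filename)
    return content_type or default

# s3.upload_file(local_file, bucket, s3_file,
#                ExtraArgs={'ContentType': guess_content_type(local_file)})
```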

dannisis
7

I had the same problem when uploading to an S3 static site from NodeJS. As others have mentioned, the issue was caused by a missing content type when uploading the file. When using the web interface, the content type is applied for you automatically; however, when uploading manually you will need to specify it. List of S3 Content Types.

In NodeJS, you can attach the content type like so:

const { extname } = require('path');
const { createReadStream } = require('fs');
const AWS = require('aws-sdk'); // v2 SDK

const s3 = new AWS.S3();

// add more types as needed
const getMimeType = ext => {
    switch (ext) {
        case '.js':
            return 'application/javascript';
        case '.html':
            return 'text/html';
        case '.txt':
            return 'text/plain';
        case '.json':
            return 'application/json';
        case '.ico':
            return 'image/x-icon';
        case '.svg':
            return 'image/svg+xml';
        case '.css':
            return 'text/css'
        case '.jpg':
        case '.jpeg':
            return 'image/jpeg';
        case '.png':
            return 'image/png';
        case '.webp':
            return 'image/webp';
        case '.map':
            return 'binary/octet-stream'
        default:
            return 'application/octet-stream'    
    }
};

(async() => {
    const file = './index.html';
    const params = {
        Bucket: 'myBucket',
        Key: file,
        Body: createReadStream(file),
        ContentType: getMimeType(extname(file)),
    };
    await s3.putObject(params).promise();
})();
Josh Weston
2

I recently had the same issue pop up. The problem was a change in the behavior of CloudFront and S3 origins: if your S3 bucket is configured to serve a static website, you need to set your origin to the website endpoint instead of picking the S3 origin from the dropdown. In Terraform, your origin should be aws_s3_bucket.var.website_endpoint instead of aws_s3_bucket.var.bucket_domain_name.

Refer to the AWS documentation here

codaddict
2

I recently came across this issue and the root cause seems to be that object versioning was enabled. After disabling versioning on the bucket the index HTML was served as expected.
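If you want to script the change rather than use the console, here is a sketch (assuming Boto3; the bucket name is a placeholder). Note that once versioning has been enabled on a bucket, it can only be suspended, not removed:

```python
def suspend_versioning_kwargs(bucket):
    """Build arguments for s3.put_bucket_versioning to suspend versioning.
    (Once enabled, versioning can be suspended but not removed.)"""
    return {
        "Bucket": bucket,
        "VersioningConfiguration": {"Status": "Suspended"},
    }

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_versioning(**suspend_versioning_kwargs("yourbucketnamehere"))
```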

rpf3
1

I've been through the same issue and resolved it this way: in the S3 bucket, select the index.html checkbox, click the Actions tab, then Edit metadata. You will notice that under Metadata it says "Type: System defined, Key: Content-Type, Value: binary/octet-stream". Change the value to text/html and save the changes. Then select index.html and click the Open button. That worked for me.

1

Here is a solution for uploading a directory (including subdirectories) to S3 while setting the content type.

locals {
  mime_types = {
    ".html" = "text/html"
    ".css" = "text/css"
    ".js" = "application/javascript"
    ".ico" = "image/vnd.microsoft.icon"
    ".jpeg" = "image/jpeg"
    ".png" = "image/png"
    ".svg" = "image/svg+xml"
  }
}
resource "aws_s3_object" "upload_assets" {
  bucket = aws_s3_bucket.www_bucket.bucket
  for_each = fileset(var.build_path, "**")
  key = each.value
  source = "${var.build_path}/${each.value}"
  content_type = lookup(local.mime_types, regex("\\.[^.]+$", each.value), null)
  etag = filemd5("${var.build_path}/${each.value}")
}

var.build_path is the directory containing your assets. This line:

content_type = lookup(local.mime_types, regex("\\.[^.]+$", each.value), null)

gets the file extension by matching the regex and then uses the locals map to look up the correct content_type.
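The same extension lookup is easy to sanity-check outside Terraform; here is a rough Python equivalent (the map and fallback value are illustrative, not from the original answer):

```python
import re

MIME_TYPES = {
    ".html": "text/html",
    ".css": "text/css",
    ".js": "application/javascript",
    ".png": "image/png",
}

def content_type_for(key, default="binary/octet-stream"):
    """Mirror Terraform's lookup(local.mime_types, regex("\\.[^.]+$", each.value), null):
    grab the final dot-suffix of the key and look it up in the map."""
    match = re.search(r"\.[^.]+$", key)
    return MIME_TYPES.get(match.group(0), default) if match else default
```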

Credit: https://engineering.statefarm.com/blog/terraform-s3-upload-with-mime/

artronics
  • 1,399
  • 2
  • 19
  • 28
0

If you are uploading to S3 from Bitbucket Pipelines with Python, add a content_type parameter as follows:

s3_upload.py

def upload_to_s3(bucket, artefact, bucket_key, content_type):
...

def main():
...
    parser.add_argument("content_type", help="Content Type File")
...

if not upload_to_s3(args.bucket, args.artefact, args.bucket_key, args.content_type):

and modify bitbucket-pipelines.yml as follow:

...
- python s3_upload.py bucket_name file key content_type 
...

Where the content_type param can be any of the MIME types (IANA media types).

e-israel