197

I've been looking for ways to make my site load faster, and one way I'd like to explore is making greater use of Cloudfront.

Because Cloudfront was originally not designed as a custom-origin CDN, and because it didn't support gzipping, I have so far been using it to host all my images, which are referenced by their Cloudfront CNAME in my site code and optimized with far-future headers.

CSS and JavaScript files, on the other hand, are hosted on my own server, because until now I was under the impression that they couldn't be served gzipped from Cloudfront, and that the gain from gzipping (about 75 per cent) outweighs that from using a CDN (about 50 per cent). Amazon S3 (and thus Cloudfront) did not support serving gzipped content in the standard manner, i.e. by using the HTTP Accept-Encoding header that browsers send to indicate their support for gzip compression, and so they were not able to gzip and serve components on the fly.

Thus I was under the impression, until now, that one had to choose between two alternatives:

  1. move all assets to Amazon CloudFront and forget about gzipping;

  2. keep components self-hosted and configure my server to detect incoming requests and perform on-the-fly gzipping as appropriate, which is what I have chosen to do so far.

There were workarounds to solve this issue, but essentially these didn't work. [link].

Now, it seems Amazon Cloudfront supports custom origins, and that it is now possible to use the standard HTTP Accept-Encoding method for serving gzipped content if you are using a custom origin [link].

I haven't so far been able to implement the new feature on my server. The blog post I linked to above, which is the only one I found detailing the change, seems to imply that you can only enable gzipping (bar workarounds, which I don't want to use) if you opt for a custom origin, which I'd rather not do: I find it simpler to host the corresponding files on Cloudfront and link to them from there. Despite carefully reading the documentation, I don't know:

  • whether the new feature means the files should be hosted on my own domain server via custom origin, and if so, what code setup will achieve this;

  • how to configure the css and javascript headers to make sure they are served gzipped from Cloudfront.

Jo Liss
Donald Jenkins

6 Answers

205

UPDATE: Amazon now supports gzip compression, so this is no longer needed. Amazon Announcement

Original answer:

The answer is to gzip the CSS and JavaScript files. Yes, you read that right.

gzip -9 production.min.css

This will produce production.min.css.gz. Remove the .gz from the filename, upload it to S3 (or whatever origin server you're using), and explicitly set the Content-Encoding header for the file to gzip.
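If it helps, here's a minimal sketch of that whole step using the AWS CLI; the CLI usage, the bucket name and the paths are my own placeholders rather than anything from the original answer:

# compress at the highest level and drop the .gz from the name that gets uploaded
mkdir -p deploy
gzip -9 -c production.min.css > deploy/production.min.css

# upload with the headers CloudFront should pass through (bucket and path are hypothetical)
aws s3 cp deploy/production.min.css s3://my-assets-bucket/css/production.min.css \
    --content-encoding gzip \
    --content-type text/css \
    --cache-control "public, max-age=31536000"

Any tool that can set object metadata at upload time (s3cmd, the S3 console, a build script) does the job equally well.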

It's not on-the-fly gzipping, but you could very easily wrap it up into your build/deployment scripts. The advantages are:

  1. It requires no CPU for Apache to gzip the content when the file is requested.
  2. The files are gzipped at the highest compression level (assuming gzip -9).
  3. You're serving the file from a CDN.

Assuming that your CSS/JavaScript files are (a) minified and (b) large enough to justify the CPU required to decompress on the user's machine, you can get significant performance gains here.

Just remember: if you change a file that is already cached in CloudFront, make sure you invalidate the cached copy after uploading the new version.
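For reference, a hedged sketch of such an invalidation with the AWS CLI; the distribution ID and path are placeholders:

# drop the cached copy so CloudFront fetches the newly uploaded, re-gzipped file
aws cloudfront create-invalidation \
    --distribution-id E1234EXAMPLEID \
    --paths "/css/production.min.css"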

Jackson
Skyler Johnson
    After reading your link, I must say that the blog author is uninformed. "However, if the user does have a browser that does not support gzip encoding, your site’s zipped stylesheets and javascripts simply will not work for that user." This browser would likely be too old to run your stylesheets and script files anyway. These users make up a fraction of a percent. – Skyler Johnson Mar 27 '11 at 04:14
    Very interesting. I'd thought of something along those lines, but was put off by the statement in the blog post I linked to, which you quoted. Yes, you're right, my site already includes other features that effectively rule out its being readable by that type of browser. I'll try your solution out and report back here on how it goes. – Donald Jenkins Mar 27 '11 at 09:30
  • Hm… gzipped the css file; uploaded it to Cloudfront; linked to it as the stylesheet of a sandbox site ([http://edge.donaldjenkins.net/](http://edge.donaldjenkins.net/)): Web Sniffer shows the headers as properly set [[link](http://cl.ly/5XYT)]; when I download the file the content is there, but Firebug says it contains no styles. The sandbox site uses conditional statements to set different stylesheets for the front page (uses a local stylesheet) and other pages (use the Cloudfront copy): the front page displays normally; the other pages don't. – Donald Jenkins Mar 27 '11 at 10:00
    UPDATE: I worked it out. The reason it wasn't displaying was that I'd forgotten to set Content-Type to text/css. If you do that, you're fine, although for some reason it seems you can't add a "Vary: Accept-Encoding" header in S3 (which would help with the Google Speed rating), for the reasons described here: [[link](https://forums.aws.amazon.com/thread.jspa?messageID=192018&tstart=0)]. Also, I set Cache-Control to cache the asset, but it doesn't seem to be caching it... – Donald Jenkins Mar 27 '11 at 12:07
  • After testing it in the sandbox site, I applied your solution to my blog, moving all javascript and css files to two folders in my Cloudfront S3 bucket. This basically halved the size of the css files and caused a noticeable increase in speed. The only remaining issues are: (1) I don't think you can remove ETags in S3, so these are still being attached to the files: not sure what the caching consequences of this are; (2) Google Speed is still complaining 'Specify a Vary: Accept-Encoding header' regarding the assets I moved to S3 which has brought my rating down to 94 from 96. – Donald Jenkins Mar 27 '11 at 15:15
    Just found this via Google, and I'm sorry to have to say this isn't that good advice. While <1% of *desktop* browsers can't handle gzipped content, quite a few *mobile browsers* cannot. How many depends on which target audience you're looking at, but most older Nokia S40s have buggy gzip handling, for example. The proper way is a "Custom Origin" that points to an Apache/IIS web server which does content compression and serves the proper HTTP headers. Here is one blog post that describes the gist of it: http://www.nomitor.com/blog/2010/11/10/gzip-support-for-amazon-web-services-cloudfront/ –  Jul 28 '11 at 23:18
  • Before dismissing those who can't accept gzipped content as a tiny portion of your audience, you may want to take a look at this: [link](http://www.stevesouders.com/blog/2009/11/11/whos-not-getting-gzip/) – Simon Peck May 30 '12 at 01:17
  • Here is the AWS documentation that walks you through the same process to serve these files from S3: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html#ChoosingFilesToCompress – Dan Esparza Jul 26 '13 at 14:42
    How's the situation now, in early 2015? Are the links posted by @JesperMortensen and Simon Peck still relevant? – ItalyPaleAle Jan 14 '15 at 15:09
    Amazon announced support for gzip compression in December of 2015, so this is now irrelevant: just upload the basic file and it will work. https://aws.amazon.com/blogs/aws/new-gzip-compression-support-for-amazon-cloudfront/ – Sean Feb 02 '16 at 16:10
  • Removing the `.gz` did it for me – Zach Saucier Jun 22 '16 at 23:56
    If your origin is S3 you need to update the bucket to add a cors configuration that allows the Content-Length header to be exposed. See http://ithoughthecamewithyou.com/post/enable-gzip-compression-for-amazon-s3-hosted-website-in-cloudfront. – Robert Ellison Dec 10 '16 at 02:35
15

My answer is a take off on this: http://blog.kenweiner.com/2009/08/serving-gzipped-javascript-files-from.html

Building off Skyler's answer, you can upload a gzipped and a non-gzipped version of the CSS and JS. Be careful with naming, and test in Safari, because Safari won't handle .css.gz or .js.gz files.

site.js and site.js.jgz and site.css and site.gz.css (you'll need to serve the gzipped versions with a Content-Encoding: gzip header and the correct Content-Type to get them to serve right)

Then, in your page, put:

<script type="text/javascript">var sr_gzipEnabled = false;</script> 
<script type="text/javascript" src="http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr.gzipcheck.js.jgz"></script> 

<noscript> 
  <link type="text/css" rel="stylesheet" href="http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css">
</noscript> 
<script type="text/javascript"> 
(function () {
    var sr_css_file = 'http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css';
    if (sr_gzipEnabled) {
      sr_css_file = 'http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css.gz';
    }

    var head = document.getElementsByTagName("head")[0];
    if (head) {
        var scriptStyles = document.createElement("link");
        scriptStyles.rel = "stylesheet";
        scriptStyles.type = "text/css";
        scriptStyles.href = sr_css_file;
        head.appendChild(scriptStyles);
        //alert('adding css to header:'+sr_css_file);
     }
}());
</script> 

gzipcheck.js.jgz just contains sr_gzipEnabled = true;. This tests whether the browser can handle gzipped code, and provides a fallback if it can't.
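In case it isn't obvious how that flag file gets built, here's a sketch; the file names follow this answer's scheme, and the gzip step is my assumption about how it was produced:

# the check file holds a single statement and is served gzipped
echo 'sr_gzipEnabled = true;' > sr.gzipcheck.js
gzip -9 -c sr.gzipcheck.js > sr.gzipcheck.js.jgz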

Then do something similar in the footer, assuming all of your JS is in one file and can go in the footer.

<div id="sr_js"></div>
<script type="text/javascript">
(function () {
    var sr_js_file = 'http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr-br-min.js';
    if (sr_gzipEnabled) {
        sr_js_file = 'http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr-br-min.js.jgz';
    }
    var sr_script_tag = document.getElementById("sr_js");
    if (sr_script_tag) {
        var scriptStyles = document.createElement("script");
        scriptStyles.type = "text/javascript";
        scriptStyles.src = sr_js_file;
        sr_script_tag.appendChild(scriptStyles);
        //alert('adding js to footer:'+sr_js_file);
    }
}());
</script>

UPDATE: Amazon now supports gzip compression, so this is no longer needed. Amazon Announcement

theUtherSide
Sean
  • thanks very much for that suggestion. If I understand you correctly, you are addressing the case where the user's browser is not able to read the gzipped file, which can still occur although it concerns a fairly tiny percentage of browsers nowadays. One possible drawback of this solution, if you refer to the link I posted in my question [[link](http://www.alfajango.com/blog/how-to-combine-gzip-plus-cdn-for-fastest-page-loads/)] is that it means you can't cache your page, since it'll only work if your code is run dynamically every time a user loads the page (which of course mine is). – Donald Jenkins Apr 01 '11 at 06:57
  • @DonaldJenkins I think that the js will still be cached. When you build the script tag in the js snip, the js still has to be called and I believe that if it is in cache the browser would use it from there. – Sean Jul 25 '12 at 15:50
    The test page http://blog.kosny.com/testpages/safari-gz/ indicates that the warning "Be careful naming and test in Safari. Because safari won't handle css.gz or js.gz" is out of date. In Safari 7 on Mavericks, and in Safari on iOS 7, both css.gz and js.gz work. I don't know when this change occurred, I'm only testing with the devices I have. – garyrob Nov 14 '13 at 15:46
14

Cloudfront supports gzipping.

Cloudfront connects to your server via HTTP 1.0. By default, some web servers, including nginx, don't serve gzipped content to HTTP 1.0 connections, but you can tell nginx to do so by adding:

gzip_http_version 1.0;

to your nginx config. The equivalent config could be set for whichever web server you're using.
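For context, a minimal sketch of where that directive might sit; everything here other than gzip_http_version is my assumption of a fairly typical gzip block, not something taken from the linked article:

# hypothetical excerpt from an nginx http/server block
gzip              on;
gzip_http_version 1.0;   # allow gzip on the HTTP/1.0 requests Cloudfront makes
gzip_types        text/css application/javascript;
gzip_comp_level   6;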

This does have a side effect of making keep-alive connections not work for HTTP 1.0 connections, but as the benefits of compression are huge, it's definitely worth the trade-off.

Taken from http://www.cdnplanet.com/blog/gzip-nginx-cloudfront/

Edit

Serving content that is gzipped on the fly through Amazon Cloudfront is dangerous and probably shouldn't be done. Basically, if your web server is gzipping the content, it will not set a Content-Length header and will instead send the data chunked.

If the connection between Cloudfront and your server is interrupted and prematurely severed, Cloudfront still caches the partial result and serves that as the cached version until it expires.

The accepted answer of gzipping the files on disk first and then serving the gzipped version is a better idea, as nginx will be able to set the Content-Length header, and so Cloudfront will discard truncated versions.
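If you want to check which case your origin is in, a quick sketch; the origin URL is a placeholder:

# use a full GET (HEAD can behave differently) and inspect the response headers
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://origin.example.com/production.min.css \
    | grep -iE 'content-length|transfer-encoding|content-encoding'

Transfer-Encoding: chunked with no Content-Length means you're in the on-the-fly case described above.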

Kerem Baydoğan
Danack
    -1, This answer has nothing to do with the question. Nginx != S3 and Cloudfront – Jonathan May 01 '13 at 19:40
  • @Danack, did you experience lots of issues with Cloudfront caching half-retrieved files because of this problem? I'm trying to understand how much of a problem this was for you in practice. – poshest Mar 24 '16 at 10:35
    @poshest It happened. There was very little benefit to serving gzipped content on the fly (as gzip is so fast on the server anyway), so I turned it off as soon as I saw it happening. Corrupted data is a much bigger problem than the time to first byte being 200 ms slower in the rare cases where the content doesn't already exist in gzipped form. – Danack Mar 24 '16 at 21:45
  • If an asset is missing a Content-Length property in the header but includes Transfer-Encoding: chunked (as is often the case with gzipped assets), CloudFront will NOT cache a partial asset if it doesn't receive a terminating chunk. If it's missing both of these properties, then it's possible for an incomplete asset to be cached. See: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#ResponseCustomDroppedTCPConnections – Cody Duval Apr 14 '17 at 18:24
5

We've made a few optimisations for uSwitch.com recently to compress some of the static assets on our site. Although we set up a whole nginx proxy to do this, I've also put together a little Heroku app that proxies between CloudFront and S3 to compress content: http://dfl8.co

Given that publicly accessible S3 objects can be accessed using a simple URL structure, http://dfl8.co just uses the same structure. That is, the following URLs are equivalent:

http://pingles-example.s3.amazonaws.com/sample.css
http://pingles-example.dfl8.co/sample.css
http://d1a4f3qx63eykc.cloudfront.net/sample.css
pingles
5

Yesterday Amazon announced a new feature: you can now enable gzip compression on your distribution.

It works with S3 without adding .gz files yourself. I tried the new feature today and it works great (you need to invalidate your current objects, though).

More info

Chris
0

You can configure CloudFront to automatically compress files of certain types and serve the compressed files.

See AWS Developer Guide
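Once that's switched on, a quick hedged way to confirm the distribution is actually compressing; the domain and path are placeholders:

# ask CloudFront for gzip and check whether the response comes back compressed
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' \
    https://d1234example.cloudfront.net/css/site.css | grep -i 'content-encoding'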

Rafi