156

I'm using Amazon's CloudFront to serve static files of my web apps.

Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed?

Amazon recommends that you version your files like logo_1.gif, logo_2.gif and so on as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?

TimS
  • 5,922
  • 6
  • 35
  • 55
Martin
  • 5,197
  • 11
  • 45
  • 60
  • 1
    possible duplicate of [How can I update files on Amazon's CDN (CloudFront)?](http://stackoverflow.com/questions/1086240/how-can-i-update-files-on-amazons-cdn-cloudfront) – Steffen Opel May 21 '12 at 09:20
  • as a sidenote, I don't think it's stupid to name static files like that. We've been using it a lot and having automated renaming as per file version in version control has saved us a lot of headaches. – eis May 21 '12 at 09:23
  • 1
    @eis unless the file you need to replace has been linked to 1000 different places online. Good luck getting all those links updated. – Jake Wilson Jul 31 '12 at 19:53
  • @Jakobud why should the links be updated in that case? they're referring to specific version, which is not the latest, if the file has been changed. If the file has not been changed, it'll work as it did before. – eis Jul 31 '12 at 22:26
  • 6
    In some cases a company may make a mistake in posting the wrong image for something or some other type of item where they receive a takedown notice from a law firm and have to replace the file. Simply uploading a new file with a new name isn't going to fix that kind of problem, which is unfortunately a problem that is more and more common these days. – Jake Wilson Aug 01 '12 at 20:26
  • I have summarized the possible solutions in this answer on the duplicate question that @SteffenOpel mentioned, at https://stackoverflow.com/a/66976601. – Aidin Apr 06 '21 at 21:22

13 Answers

140

Good news. Amazon finally added an Invalidation Feature. See the API Reference.

This is a sample request from the API Reference:

POST /2010-08-01/distribution/[distribution ID]/invalidation HTTP/1.0
Host: cloudfront.amazonaws.com
Authorization: [AWS authentication string]
Content-Type: text/xml

<InvalidationBatch>
   <Path>/image1.jpg</Path>
   <Path>/image2.jpg</Path>
   <Path>/videos/movie.flv</Path>
   <CallerReference>my-batch</CallerReference>
</InvalidationBatch>
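
For anyone scripting this today, the same request can be made with boto3 instead of hand-signing the HTTP call. A minimal sketch (the distribution ID is a placeholder; the optional waiter simply blocks until CloudFront reports the invalidation as completed):

import time

import boto3

# Assumes credentials are already configured (environment, ~/.aws/credentials or an IAM role)
cloudfront = boto3.client('cloudfront')

response = cloudfront.create_invalidation(
    DistributionId='EXAMPLE_DISTRIBUTION_ID',  # placeholder
    InvalidationBatch={
        'Paths': {
            'Quantity': 3,
            'Items': ['/image1.jpg', '/image2.jpg', '/videos/movie.flv'],
        },
        # Any unique string; reusing a value makes CloudFront return the earlier
        # request instead of creating a new one
        'CallerReference': str(time.time()),
    },
)

# Optionally wait until the invalidation has propagated (typically a few minutes)
waiter = cloudfront.get_waiter('invalidation_completed')
waiter.wait(DistributionId='EXAMPLE_DISTRIBUTION_ID', Id=response['Invalidation']['Id'])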
James Lawruk
  • 30,112
  • 19
  • 130
  • 137
  • 9
    Please note that invalidation will take some time (apparently 5-30 minutes according to some blog posts I've read). – Michael Warkentin Mar 04 '12 at 00:54
  • 39
    If you do not want to make an API request yourself, you can also log in to the Amazon Console and create an Invalidation request there: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html – j0nes Jul 26 '12 at 06:50
  • For those of you using the API to do the invalidation, approximately how long is it taking for the invalidation to take effect? – ill_always_be_a_warriors Jan 16 '13 at 00:57
  • @ill_always_be_a_warriors About 10-15 minutes per http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html – George Mandis Feb 21 '14 at 08:30
  • 22
    Remember this costs $0.005 per file after your first 1,000 invalidation requests per month https://aws.amazon.com/cloudfront/pricing/ – TimS May 08 '14 at 11:34
  • 1
    @MichaelWarkentin After making an API `createInvalidation` request, i'm still seeing the update take 5-10 minutes or so to invalidate. Notice I write this comment *4* years after yours. – tim peterson Mar 06 '16 at 21:33
  • where/how can the AWS auth string be obtained? does it need to be generated from pub/priv keys or? – tkit Jan 04 '17 at 13:59
  • It takes exactly 10 minutes for me. – István Ujj-Mészáros Jan 06 '17 at 06:01
  • In 2018, it's almost instant. Notice I'm deploying a static website via S3. – Paul Razvan Berg Dec 24 '18 at 17:56
  • 3
    As of 2020, the cost is $0.005 per path, not file anymore. So if you invalidate a path like `/*` - *all files* - you only pay for one invalidation regardless the number of files/URLs. Also AWS provides 1000 Invalidation paths per month for free. See **Invalidation requests** in the [docs](https://aws.amazon.com/cloudfront/pricing/) – Wesley Gonçalves Oct 26 '20 at 09:31
  • April 2021 Updates: 1) First 1000 invalidations are free. 2) it's $0.005 per path after that (not per file.) 3) It takes around one minute for ~100 files. 4) You can do it in AWS CLI via `aws cloudfront create-invalidation --distribution-id E1234567890 --paths "/*"` in one line in terminal, 5) To deal with how to also update the path to point to a different version/path in S3, see my answer at https://stackoverflow.com/a/66976601. – Aidin Apr 06 '21 at 21:25
20

As of March 19, Amazon now allows CloudFront's cache TTL to be 0 seconds, so you (theoretically) should never see stale objects. So if you have your assets in S3, you could simply go to AWS Web Panel => S3 => Edit Properties => Metadata, then set your "Cache-Control" value to "max-age=0".

This is straight from the API documentation:

To control whether CloudFront caches an object and for how long, we recommend that you use the Cache-Control header with the max-age= directive. CloudFront caches the object for the specified number of seconds. (The minimum value is 0 seconds.)
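
If you prefer to set that header from code rather than the web panel, here is a minimal boto3 sketch (bucket, key and content type are placeholders). An in-place copy with MetadataDirective='REPLACE' rewrites the object's metadata, so the content type has to be restated:

import boto3

s3 = boto3.client('s3')

bucket = 'my-bucket'       # placeholder
key = 'assets/logo.gif'    # placeholder

# Copy the object onto itself, replacing its metadata with a new Cache-Control header
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={'Bucket': bucket, 'Key': key},
    MetadataDirective='REPLACE',
    CacheControl='max-age=0',
    ContentType='image/gif',  # REPLACE discards existing metadata, so restate the content type
)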

John K. Chow
  • 1,651
  • 16
  • 24
  • Where is this setting in the new AWS Console UI? I can't find it. – ill_always_be_a_warriors Jan 18 '13 at 00:51
  • 1
    I found the setting for an individual file, but is there a setting to make it so that anything uploaded to my bucket has a TTL of 0? – ill_always_be_a_warriors Jan 18 '13 at 01:01
  • While I would also definitely be interested in a bucket-wide setting, I found this a quicker/better solution. Invalidation requests (along with the rest of the API) are very confusing and poorly documented, and I spun my wheels for 3 hours before this instantly worked. – Two-Bit Alchemist Aug 01 '14 at 15:38
  • 41
    Call me crazy but setting the TTL to 0 and max-age to 0 is really using CloudFront without caching, wouldn't that forward all requests to the origin constantly checking for updates? Essentially making the CDN useless? – acidjazz Sep 26 '15 at 23:56
  • 8
    If you're just using cloudfront as a mechanism to have a static SSL-enabled S3 site with a custom domain, then caching doesn't matter. Also, these issues we're discussing is that in development phases 0-time caching is good. – Dan G Feb 12 '18 at 14:40
  • @ill_always_be_a_warriors no, you have to [manually do it](https://stackoverflow.com/questions/10435334/set-cache-control-for-entire-s3-bucket-automatically-using-bucket-policies) for all files. I set this but CloudFront didn't refresh my content. I think I'll just give up, CloudFront seems like overkill for a small static website, I'll default to Cloudflare's Flexible SSL – Paul Razvan Berg Dec 24 '18 at 17:39
10

Automated update setup in 5 mins

OK, guys. The best possible way for now to perform an automatic CloudFront update (invalidation) is to create a Lambda function that is triggered every time a file is uploaded to the S3 bucket (new or overwritten).

Even if you have never used Lambda functions before, it is really easy -- just follow my step-by-step instructions and it will take just 5 minutes:

Step 1

Go to https://console.aws.amazon.com/lambda/home and click Create a lambda function

Step 2

Click on Blank Function (custom)

Step 3

Click on the empty (outlined) box and select S3 from the combo box

Step 4

Select your Bucket (same as for CloudFront distribution)

Step 5

Set an Event Type to "Object Created (All)"

Step 6

Set Prefix and Suffix, or leave them empty if you don't know what they are.

Step 7

Check the Enable trigger checkbox and click Next

Step 8

Name your function (something like: YourBucketNameS3ToCloudFrontOnCreateAll)

Step 9

Select Python 2.7 (or later) as Runtime

Step 10

Paste the following code in place of the default Python code:

from __future__ import print_function

import time

import boto3


def lambda_handler(event, context):
    # One CloudFront client can be reused for every record in the event
    client = boto3.client('cloudfront')

    for record in event["Records"]:
        # Each record describes one object that was just created in S3
        path = "/" + record["s3"]["object"]["key"]
        print(path)
        client.create_invalidation(
            DistributionId='_YOUR_DISTRIBUTION_ID_',
            InvalidationBatch={
                'Paths': {
                    'Quantity': 1,
                    'Items': [path]
                },
                # Any unique string works as the caller reference
                'CallerReference': str(time.time())
            })

Step 11

Open https://console.aws.amazon.com/cloudfront/home in a new browser tab and copy your CloudFront distribution ID for use in the next step.

Step 12

Return to the Lambda tab and paste your distribution ID in place of _YOUR_DISTRIBUTION_ID_ in the Python code. Keep the surrounding quotes.

Step 13

Set handler: lambda_function.lambda_handler

Step 14

Click on the role combo box and select Create a custom role. A new browser tab will open.

Step 15

Click View Policy Document, click Edit, click OK, and replace the role definition with the following (as is):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
          "cloudfront:CreateInvalidation"
      ],
      "Resource": [
          "*"
      ]
    }
  ]
}

Step 16

Click Allow. This will return you to the Lambda tab. Double-check that the role you just created is selected in the Existing role combo box.

Step 17

Set Memory (MB) to 128 and Timeout to 5 sec.

Step 18

Click Next, then click Create function

Step 19

You are good to go! From now on, each time you upload or re-upload any file to S3, it will be invalidated at all CloudFront edge locations.

PS - When you are testing, make sure that your browser is loading images from CloudFront, not from the local cache.

PPS - Please note that only the first 1,000 invalidations per month are free; each invalidation over the limit costs $0.005 USD. Additional charges for the Lambda function may also apply, but it is extremely cheap.

Josue Alexander Ibarra
  • 8,269
  • 3
  • 30
  • 37
Kainax
  • 1,431
  • 19
  • 29
  • Just the last item from each S3 batch? – Phil Jan 09 '17 at 03:47
  • @Phil The code is written that way so only newly uploaded files will be invalidated, not a whole bucket. In case of multi-files upload each of them will be invalidated separately. Works like a charm. – Kainax Jan 14 '17 at 04:57
  • The only reason this code works as expected is because S3 currently only included one item per notification, ie, the length of the array is happily always 1, and consequently, even if you upload multiple files in one go, you get an entirely new notification per file. You do not get a notification for the whole bucket in any case. None-the-less, this code as written is not ready should AWS change that behaviour. Far safer to write code that handles the whole array, regardless of length, which was my original (sadly missed) point. – Phil Jan 15 '17 at 23:32
  • The only reason why AWS adding events handlers is... well... to handle events. Why would they remove it? No matter how a new file has been added, it should trigger event for API and that is how it works now and will keep working. I'm using AWS for 4 years and they never changed something so previous code stopped working. Even if they changing API, they changing it a new standalone version, but all previous versions are always remain supported. In that particular case I just don't believe that personal file event will ever be removed. It's probably already used by millions projects worldwide. – Kainax Jan 16 '17 at 04:53
  • In case if I misunderstand your first comment and you mean that **'Quantity': 1** will add only last item -- there is FOR loop for every item in array. – Kainax Jan 16 '17 at 05:02
  • The S3 notification from a PUT event includes an ARRAY of Record elements - arrays are of variable length, which is why to process them, one uses a loop, as you have done. But - your code is dependent on the array having exactly 1 element - it is equivalent to simply saying `path = "/" + event["Records"][0]["s3"]["object"]["key"]` (strictly, you will take the last item, not the first, but with an array of length 1, as is currently,, (but need not be), the case, this amounts to the same thing. Also, I never mentioned or suggested AWS removing any notifications. – Phil Jan 17 '17 at 08:03
10

With the Invalidation API, it does get updated in a few minutes.
Check out PHP Invalidator.

anjanesh
  • 3,771
  • 7
  • 44
  • 58
  • This is exactly what I was looking for. I am going to hook this in Beanstalkapp's web-hooks when auto deploying from git! Thanks for the link! – cointilt Apr 28 '11 at 18:42
10

Bucket Explorer has a UI that makes this pretty easy now. Here's how:

1. Right-click your bucket and select "Manage Distributions."
2. Right-click your distribution and select "Get Cloudfront invalidation list."
3. Select "Create" to create a new invalidation list.
4. Select the files to invalidate, and click "Invalidate."
5. Wait 5-15 minutes.

Leopd
  • 41,333
  • 31
  • 129
  • 167
5

If you have boto installed (which is not just for Python, but also installs a bunch of useful command-line utilities), it offers a command-line utility called cfadmin, or 'cloud front admin', which offers the following functionality:

Usage: cfadmin [command]
cmd - Print help message, optionally about a specific function
help - Print help message, optionally about a specific function
invalidate - Create a cloudfront invalidation request
ls - List all distributions and streaming distributions

You invalidate things by running:

$ cfadmin invalidate <distribution> <path>
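
If you would rather do the same thing from Python instead of the shell, the legacy boto library that ships cfadmin exposes the call directly; a rough sketch (the distribution ID and paths are placeholders):

import boto

# Uses the same credentials cfadmin picks up (~/.boto or environment variables)
cf = boto.connect_cloudfront()

cf.create_invalidation_request('EXAMPLE_DISTRIBUTION_ID', ['/image1.jpg', '/css/site.css'])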
samuraisam
  • 1,927
  • 1
  • 20
  • 24
  • Actually cfadmin is a very helpful tool, especially if you need to reset CloudFront cache from the console\bash\travis ci deployment script. BTW here is the [post how to reset\invalidate CoudFront cache during the travis deployment to aws](http://www.mikitamanko.com/blog/2014/10/26/travis-invalidate-aws-cloudfront-cache/) – Mikita Manko Oct 26 '14 at 19:59
3

In Ruby, using the fog gem:

AWS_ACCESS_KEY = ENV['AWS_ACCESS_KEY_ID']
AWS_SECRET_KEY = ENV['AWS_SECRET_ACCESS_KEY']
AWS_DISTRIBUTION_ID = ENV['AWS_DISTRIBUTION_ID']

conn = Fog::CDN.new(
    :provider => 'AWS',
    :aws_access_key_id => AWS_ACCESS_KEY,
    :aws_secret_access_key => AWS_SECRET_KEY
)

images = ['/path/to/image1.jpg', '/path/to/another/image2.jpg']

conn.post_invalidation AWS_DISTRIBUTION_ID, images

Even with invalidation, it still takes 5-10 minutes for the invalidation to process and refresh on all Amazon edge servers.

Fábio Batista
  • 25,002
  • 3
  • 56
  • 68
raycchan
  • 313
  • 1
  • 5
  • 9
3

One very easy way to do it is FOLDER versioning.

So if you have hundreds of static files, for example, simply put all of them into a folder named by year + version.

For example, I use a folder called 2014_v1 that contains all my static files...

Inside my HTML I always reference that folder. (Of course I have a PHP include where I set the folder name, so changing one file changes it across all my PHP files.)

If I want a complete refresh, I simply rename the folder in my source to 2014_v2 and change the PHP include to 2014_v2.

All the HTML automatically changes and requests the new path, CloudFront gets a cache MISS and requests it from the source.

Example: SOURCE.mydomain.com is my source, cloudfront.mydomain.com is a CNAME to the CloudFront distribution.

So the PHP references this file as cloudfront.mydomain.com/2014_v1/javascript.js, and when I want a full refresh I simply rename the folder in the source to "2014_v2" and change the PHP include to set the folder to "2014_v2".

This way there is no invalidation delay and NO COST!
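
For anyone not using PHP, the trick boils down to a single constant that every asset URL is built from; here is a purely illustrative Python sketch (CDN_HOST, ASSET_VERSION and asset_url are made-up names, not part of any framework):

CDN_HOST = "cloudfront.mydomain.com"
ASSET_VERSION = "2014_v1"  # bump to "2014_v2" to force CloudFront cache misses everywhere


def asset_url(filename):
    """Build an asset URL that goes through the versioned folder."""
    return "https://{}/{}/{}".format(CDN_HOST, ASSET_VERSION, filename)


print(asset_url("javascript.js"))
# -> https://cloudfront.mydomain.com/2014_v1/javascript.js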

This is my first post on Stack Overflow, hope I did it well!

MarcoP
  • 71
  • 3
3

The current AWS CLI supports invalidation in preview mode. Run the following in your console once:

aws configure set preview.cloudfront true

I deploy my web project using npm. I have the following scripts in my package.json:

{
    "scripts": {
        "build.prod": "ng build --prod --aot",
        "aws.deploy": "aws s3 sync dist/ s3://www.mywebsite.com --delete --region us-east-1",
        "aws.invalidate": "aws cloudfront create-invalidation --distribution-id [MY_DISTRIBUTION_ID] --paths \"/*\"",
        "deploy": "npm run build.prod && npm run aws.deploy && npm run aws.invalidate"
    }
}

With the scripts above in place, you can deploy your site with:

npm run deploy
Dmitry Efimenko
  • 10,973
  • 7
  • 62
  • 79
  • 1
    I think you need the asterisk in your 'aws.invalidate' command, change `--paths /` to `--paths /*`. mine was also like yours and it did not invalidate the distribution... – Herald Smit Jun 06 '18 at 17:46
3

Go to CloudFront.

Click on your ID/Distributions.

Click on Invalidations.

Click create Invalidation.

In the giant example box type * and click invalidate

Done


Jay
  • 111
  • 10
2

Set TTL=1 hour and replace

http://developer.amazonwebservices.com/connect/ann.jspa?annID=655

Hml
  • 41
  • 3
2

Just posting to inform anyone visiting this page (first result for 'Cloudfront File Refresh') that there is an easy-to-use and easy-to-access online invalidator available at swook.net

This new invalidator is:

  • Fully online (no installation)
  • Available 24x7 (hosted by Google) and does not require any memberships.
  • There is history support, and path checking to let you invalidate your files with ease. (Often with just a few clicks after invalidating for the first time!)
  • It's also very secure, as you'll find out when reading its release post.

Full disclosure: I made this. Have fun!

DisgruntledGoat
  • 70,219
  • 68
  • 205
  • 290
swook
  • 478
  • 5
  • 7
  • 2
    sorry, but even "you say" the credentials not stored or leeked ... one should never give his credential to a 3rd party. May be implement a remote amazon authentication or something ? – d.raev Jan 09 '15 at 16:39
  • You should put this behind https at the least. – Oliver Tynes Oct 19 '15 at 12:50
  • Online tools are generally nice, but providing credentials to 3rd party tool will be a valid security concern. I would suggest to use either official web console or [official CLI tool](http://stackoverflow.com/a/34957651/728675). – RayLuo Jan 22 '16 at 23:34
  • 3
    For the security of others, I'm downvoting this answer. You should never ever ask people for their credentials – Moataz Elmasry Jun 16 '16 at 15:07
1

If you are using AWS, you probably also use its official CLI tool (sooner or later). AWS CLI version 1.9.12 or above supports invalidating a list of file names.

Full disclosure: I made this. Have fun!

RayLuo
  • 17,257
  • 6
  • 88
  • 73
  • Dead link - leads to a 404 :( and I can't update it as version 1.9.12 is missing from the release notes (https://aws.amazon.com/releasenotes/?tag=releasenotes%23keywords%23cli) – SlyDave Jan 07 '19 at 14:26
  • Dude, that was a version released almost 3 years ago. Try the latest version and the feature is likely still there. (Full disclosure: I do not work on AWS CLI anymore.) – RayLuo Jan 15 '19 at 21:02
  • oh I know, just found it odd that of all the releasenotes, only 1.9.12 doesn't exist :D (which is what I was getting at about not being able to update the link). The comment was more of a hint to anyone that found there way here, like I did and needed to find the releasenotes for AWS CLI. no harm, no foul. – SlyDave Jan 17 '19 at 15:48