
In "How to add cache control in AWS S3?" someone suggested

./s3cmd --recursive modify --add-header="Cache-Control:max-age=86400" s3://yourbucket/

as the correct command to add Cache-Control (and also Expires) headers, but many people then reported that running it made the files inaccessible (access denied).

Apparently it works if you run it folder by folder but that's not easy when you have thousands of folders.

What is the correct command to avoid this?

UPDATE 1: Could using --add-header='Cache-Control:max-age=86400, public' be a solution?


2 Answers


You need to add --acl-public. Here's what the s3cmd usage page says about that:

-P, --acl-public      Store objects with ACL allowing read for anyone.

also:

Apparently it works if you run it folder by folder but that's not easy when you have thousands of folders.

I'm not sure what that has to do with this question.

UPDATE 1: Could using --add-header='Cache-Control:max-age=86400, public' be a solution?

The public directive only tells caches (public CDNs and the like) that it's okay to cache the response. It doesn't make the S3 objects themselves publicly readable.
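
Putting the two together, something like the following should preserve public read access while adding the header (yourbucket is a placeholder, and it's worth trying it on a small prefix first):

./s3cmd --recursive modify --acl-public --add-header="Cache-Control:max-age=86400" s3://yourbucket/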


Here is a solution: you can modify the files by adding the expiry headers recursively.

The code

s3cmd --recursive modify --add-header="Cache-Control:public, max-age=31536000" s3://your_bucket_name/

Your files won't become inaccessible while doing this.
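
If you want to double-check that the header was applied, one option is to request an object over HTTP and look at the response headers (this assumes the object is publicly readable; the bucket and key below are placeholders):

curl -I https://your_bucket_name.s3.amazonaws.com/path/to/object

The response should then include a Cache-Control: public, max-age=31536000 header.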