After a lot of searching, I learned that Bucketeer does give you bucket control. You just have to use the AWS CLI.
Here is the link to the AWS docs on the CLI:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
And here is the link where Bucketeer tells you how to get started with that on their service:
https://devcenter.heroku.com/articles/bucketeer#using-with-the-aws-cli
This means you can install the AWS CLI, run aws configure with the credentials Bucketeer provides, and then go on to change cache-control in the bucket directly.
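For example, here's a minimal sketch of that setup. The config var names are the ones Bucketeer sets on the Heroku app (check yours, they may differ), and the values below are placeholders:

heroku config | grep BUCKETEER
aws configure
AWS Access Key ID [None]: <value of BUCKETEER_AWS_ACCESS_KEY_ID>
AWS Secret Access Key [None]: <value of BUCKETEER_AWS_SECRET_ACCESS_KEY>
Default region name [None]: <value of BUCKETEER_AWS_REGION, e.g. us-east-1>
Default output format [None]: json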
AWS does not seem to have a feature for setting cache-control defaults for an entire bucket or folder, so you actually have to set it on each object.
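For a single object, that looks something like this (the bucket and key names here are made-up examples; the object is copied onto itself with its metadata replaced):

aws s3api copy-object --bucket my-bucket-name --key images/logo.png --copy-source my-bucket-name/images/logo.png --metadata-directive REPLACE --cache-control max-age=43200000 --acl public-read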
In my case, all of my files/objects in the bucket are images that I display on the website and need to cache, so it's safe to run a command that does it all at once.
Such a command can be found in this answer: How to set expires headers to all images in a bucket in Amazon S3
For me, it looked like this:
aws s3 cp s3://my-bucket-name s3://my-bucket-name --recursive --acl public-read --metadata-directive REPLACE --cache-control max-age=43200000
The command basically copies the entire bucket onto itself; the --metadata-directive REPLACE flag makes the copy rewrite each object's metadata, adding the cache-control max-age=43200000 header in the process. (Note that max-age is in seconds, so 43200000 comes out to roughly 500 days.)
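You can spot-check any object afterwards (again, the key is a made-up example):

aws s3api head-object --bucket my-bucket-name --key images/logo.png

The output should include "CacheControl": "max-age=43200000".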
This works for all existing files, but it will not change anything for future changes or additions. You'd have to run it again every so often to catch new objects, and/or write code that sets your object headers when saving the object to the bucket. Apparently there are people who have had luck with this. Not me.
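If you upload files through the CLI yourself, you can at least set the header at upload time instead (the filename is a made-up example):

aws s3 cp ./new-image.png s3://my-bucket-name/new-image.png --acl public-read --cache-control max-age=43200000

That covers manual uploads, though, not the files Rails/ActiveStorage writes for you.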
Thankfully, I found this post:
https://www.neontsunami.com/posts/caching-variants-with-activestorage
This monkey-patch basically changes ActiveStorage::RepresentationsController#show to use Rails action caching for variants. Take a look. If you're having similar issues, it's worth the read.
There are drawbacks. In my case they were not a problem, so this is the solution I went with.