I'm wondering whether there is an easy way to permanently restore Glacier objects to S3. It seems that you can only restore Glacier objects for a certain amount of time, which you specify when you request the restore. For example, we now have thousands of files restored to S3 that will go back to Glacier in 90 days, but we do not want them back in Glacier.
2 Answers
To clarify a technicality on one point: your files will not "go back to" Glacier in 90 days, because they are still in Glacier. Since you have done a restore, there are temporary copies living in S3 reduced redundancy storage (RRS) that S3 will delete in 90 days (or whatever day value you specified when you did the restore operation). Restoring files doesn't remove the Glacier copy.
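Incidentally, you can see when one of those temporary copies is due to expire with a HEAD request. A minimal sketch with the AWS CLI, assuming a placeholder bucket and key:
aws s3api head-object --bucket my-bucket --key my-image.png
While the temporary copy exists, the response includes a `Restore` field along the lines of `ongoing-request="false", expiry-date="..."`, which is when the copy will be removed.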
The answer to your question is no, and yes.
You cannot technically change an object from the Glacier storage class back to the standard or RRS class...
The transition of objects to the GLACIER storage class is one-way. You cannot use a lifecycle configuration rule to convert the storage class of an object from GLACIER to Standard or RRS.
... however...
If you want to change the storage class of an already archived object to either Standard or RRS, you must use the restore operation to make a temporary copy first. Then use the copy operation to overwrite the object as a Standard or RRS object.
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
You can copy that object to what is, technically, a new object, but one that has the same key (path) as the old one... so for practical purposes, yes, you can.
The PUT/COPY action is discussed here: http://docs.aws.amazon.com/AmazonS3/latest/dev/ChgStoClsOfObj.html
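For illustration, here is a rough sketch of that copy-in-place using the low-level s3api interface (bucket and key names are placeholders, and the object must already be restored):
aws s3api copy-object --copy-source my-bucket/my-image.png --bucket my-bucket --key my-image.png --storage-class STANDARD --metadata-directive COPY
Copying an object onto itself is only accepted because something about it changes (here, the storage class); S3 rejects a same-key copy that changes nothing.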

First, restore from Glacier (as you have done). This makes the file available so that you can copy it.
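If you still have objects to restore, the restore itself can also be requested from the CLI. A sketch assuming a placeholder bucket/key, a 7-day restore window, and the Standard retrieval tier:
aws s3api restore-object --bucket my-bucket --key my-image.png --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'
The restore is asynchronous (typically a few hours on the Standard tier); `aws s3api head-object` will show a `Restore` field with `ongoing-request="false"` once the temporary copy is ready.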
Then, once the file is available, you can copy/overwrite it using the AWS CLI:
aws s3 cp --metadata-directive "COPY" --storage-class "STANDARD" s3://my-bucket/my-image.png s3://my-bucket/my-image.png
Notes
In the above command:
- The `from` and the `to` file paths are the same (we are overwriting it).
- We are setting `--metadata-directive "COPY"`. This tells `cp` to copy the metadata along with the file contents (documentation here).
- We are setting the `--storage-class "STANDARD"`. This tells `cp` to use the `STANDARD` S3 storage class for the new file (documentation here).
- The result is a new file; this will update the modified date.
- If you are using versioning, you may need to make additional considerations (see the note just after these notes).
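On that last point: with versioning enabled, the copy creates a new current version in the STANDARD class, while the previous GLACIER version is kept (and still billed) as a noncurrent version until you delete it or expire it with a lifecycle rule. You can inspect the versions with something like this (placeholder names again):
aws s3api list-object-versions --bucket my-bucket --prefix my-image.png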
This procedure is based on the info from the AWS docs here.
Bulk
If you want to do it in bulk (over many files/objects), you can use the below commands:
Dry Run
This command will list the Glacier files at the passed bucket and prefix:
aws s3api list-objects --bucket my-bucket --prefix some/path --query 'Contents[?StorageClass==`GLACIER`][Key]' --output text | xargs -I {} echo 'Would be copying {} to {}'
Bulk Upgrade
Once you are comfortable with the list of files that will be upgraded, run the below command to upgrade them.
Before running, make sure that the bucket and prefix match what you were using in the dry run. Also make sure that you've already run the standard S3/Glacier "restore" operation on all of the files (as described above).
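If any of them have not been restored yet, a similar pipeline can request the restores in bulk. This is only a sketch, assuming the same placeholder bucket/prefix, a 7-day restore window, and the Standard retrieval tier:
aws s3api list-objects --bucket my-bucket --prefix some/path --query 'Contents[?StorageClass==`GLACIER`][Key]' --output text | xargs -I {} aws s3api restore-object --bucket my-bucket --key {} --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'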
This combines the single file/object upgrade command with the `list-objects` command from the dry run, using `xargs`.
aws s3api list-objects --bucket my-bucket --prefix some/path --query 'Contents[?StorageClass==`GLACIER`][Key]' --output text | xargs -I {} aws s3 cp --metadata-directive "COPY" --storage-class "STANDARD" s3://my-bucket/{} s3://my-bucket/{}
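Once the bulk copy has finished, you can re-check that nothing was missed; this should print no remaining keys, since the copied objects now report the STANDARD storage class:
aws s3api list-objects --bucket my-bucket --prefix some/path --query 'Contents[?StorageClass==`GLACIER`][Key]' --output text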

- Somehow nine years on I still had this problem. This answer came eight years and three months late (also likely this feature was not available nine years ago). This should be the chosen answer in 2022. – Arman Sep 15 '22 at 13:02
- @Arman this is exactly the same solution provided in my original answer in 2014: *"If you want to change the storage class of an already archived object to either Standard or RRS, you must use the restore operation to make a temporary copy first. Then use the copy operation to overwrite the object as a Standard or RRS object."* That last part is what `aws s3 cp --metadata-directive "COPY" --storage-class "STANDARD" ...` is doing. – Michael - sqlbot Sep 15 '22 at 18:21