327

Is there any function to rename files and folders in Amazon S3? Any related suggestions are also welcome.

John Rotenstein
Shanmugam

22 Answers

615

I just tested this and it works:

aws s3 --recursive mv s3://<bucketname>/<folder_name_from> s3://<bucket>/<folder_name_to>
bennythejudge
  • Is it atomic? Will second (and the same) command fail while first is executing? – Alex B Jul 21 '16 at 17:17
  • No AWS does not have an atomic move operation – AP. Jan 17 '17 at 22:50
  • Thanks! Why do we need `--recursive` ? – Aziz Alto Dec 07 '17 at 13:37
  • @AzizAlto In case there is a deeper folder structure under `s3:///`, i.e. `s3:////some/deeper/folders`. – Ville Jan 08 '18 at 23:15
  • This seems to make a separate API query for every recursive object, at least returns "move" message for each of them. – abcdn Oct 03 '18 at 20:52
  • this did a `cp`. I had to delete the original after the copy was complete. `aws-cli/1.15.61 Python/2.7.13 Linux/4.4.0-17134-Microsoft botocore/1.10.60` – Felipe Alvarez Oct 10 '18 at 03:39
  • here is how it works https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html mv = copy + rm – Roman Ivasyshyn Jul 01 '19 at 10:17
  • Is `aws s3 --recursive mv` just a shorthand for copying to the target and deleting the source? Based on AP's comment above (2nd comment), I presume the answer is yes. – Don Smith Nov 21 '19 at 22:11
  • It's worth to say, that running this command should be *free of charge*. >> "Transfers between S3 buckets or from Amazon S3 to any service(s) within the same AWS Region are free." source: [Amazon S3 pricing - data transfer](https://aws.amazon.com/s3/pricing/) – tomasbedrich Dec 11 '19 at 16:20
  • I can't get this to work for renaming a single file. I get this error: '*ERROR: Parameter problem: Destination must be a directory and end with '/' when acting on a folder content or on multiple sources.*' – Smock Feb 05 '20 at 14:33
  • Unfortunately, s3 mv is not simply a metadata change, like Linux mv is. Because s3 mv is implemented like cp+rm, the operation will run long, consume high I/O, and will temporarily require double storage. – Brian Fitzgerald Jun 24 '20 at 18:04
  • remove --recursive and change the directory name to file name in case you want to rename a file in the same directory. like aws s3 mv s3:/// s3:/// – BeK Mar 12 '21 at 13:06
  • @tomasbedrich, Data Transfer within the same region will be free of charge, but not the COPY operation incurred by the `mv`. As suggested by Brian Fitzgerald, an `aws s3 mv` operation translates into two API calls: a COPY (for which you are billed) and a DELETE (which is free). – galeop Jul 27 '22 at 14:54
  • @AlexB, In order to move thousands of objects, I would use [S3 batch operations](https://aws.amazon.com/s3/faqs#S3_Batch_Operations) to perform the COPY, and then create a script that would read the batch report and perform DELETE operations on the objects that were successfully copied. In order not to consume too much storage due to the COPY, I would perform those operations on subsets of objects, instead of all at once. – galeop Jul 27 '22 at 15:18
  • This answer can be misleading, it needs editing to make it clear this is not executed as a native file system rename, it is just a shorthand way of moving each file within the folder structure manually. Might seem pedantic but this can be very significant performance/ time impact on folders with a high number of objects. – AutoMattTick Sep 09 '22 at 14:45
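For the batch-operations approach galeop describes, the deletion side can be scripted by parsing the completion report. Below is a minimal sketch, assuming the report is a headerless CSV whose second column is the object key and whose fourth column is the task status; check your own report's columns before relying on these indices, and the delete call shown in the comment is the standard boto3 `delete_objects` method:

```python
import csv
import io

def keys_to_delete(report_csv):
    """Parse an S3 Batch Operations completion report (assumed headerless CSV:
    bucket, key, version_id, task_status, ...) and return the keys whose
    COPY task succeeded."""
    rows = csv.reader(io.StringIO(report_csv))
    return [row[1] for row in rows if row[3].lower() == "succeeded"]

# The deletes themselves could then be issued with boto3, e.g.:
#   s3 = boto3.client("s3")
#   s3.delete_objects(Bucket=src_bucket,
#                     Delete={"Objects": [{"Key": k} for k in keys]})

report = (
    "my-bucket,old/a.csv,,succeeded,,200,\n"
    "my-bucket,old/b.csv,,failed,InternalError,500,\n"
)
print(keys_to_delete(report))  # → ['old/a.csv']
```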
118

There is no direct method to rename a file in S3. What you have to do is copy the existing file with a new name (just set the target key) and delete the old one:

@Autowired
private AmazonS3 s3Client;

public void rename(String fileKey, String newFileKey) {
    s3Client.copyObject(bucketName, fileKey, bucketName, newFileKey);
    s3Client.deleteObject(bucketName, fileKey);
}
Denis Abakumov
Naaz Muhammadh
  • Provide an example with your answer, otherwise make a comment. – EternalHour Nov 08 '14 at 17:38
  • This is wrong answer. You can move files on S3 using mv. mv = rename – Nicolo Apr 12 '16 at 11:12
  • This is a wrong answer for two reasons: 1) you can use the GUI to right click and rename the file, and 2) as it's been mentioned before you can move the file with the move command or through a sdk. – Maximus May 31 '16 at 15:11
  • `cp` the files to give them new names, then `rm` the old files. You can do this recursively. See Thang Tran's answer below. http://stackoverflow.com/a/31753008/3345375 – jkdev Nov 09 '16 at 18:13
  • You cannot right click on a folder name to rename it on S3. – area51 Nov 29 '17 at 16:49
  • There is the `aws s3 mv` command, but it would appear behind the scenes it does copy and delete and NOT rename the object. This detail is important as copying causes costs per GB, while a simple rename would not. – user1129682 Aug 14 '18 at 14:41
  • This is no longer true as of a recent S3 UI update. From the web console you can now select an object, click the "Actions" dropdown, and click "Rename object". However, it simply creates a copy of and deletes the old file, causing the loss of metadata. – Kenny Worden Mar 23 '21 at 21:02
53
aws s3 cp s3://source_folder/ s3://destination_folder/ --recursive
aws s3 rm s3://source_folder --recursive
Glorfindel
Thang Tran
29

You can use the AWS CLI `mv` command to move (and thereby rename) the files, e.g. `aws s3 mv s3://bucket/old_key s3://bucket/new_key`

Cavaz
28

You can use either the AWS CLI or the s3cmd command to rename files and folders in an AWS S3 bucket.

Using s3cmd, use the following syntax to rename a folder:

s3cmd --recursive mv s3://<s3_bucketname>/<old_foldername>/ s3://<s3_bucketname>/<new_folder_name>

Using the AWS CLI, use the following syntax to rename a folder:

aws s3 --recursive mv s3://<s3_bucketname>/<old_foldername>/ s3://<s3_bucketname>/<new_folder_name>
Basheer.O
  • What if I need to rename all .csv files? How do I do that? – LUZO Mar 22 '18 at 14:21
  • @LUZO You need to run bash script which will first get all the names of the files in the folder using `aws s3 ls` and then loop over each and run `aws s3 mv` each time. – Waleed93 Jan 30 '23 at 23:39
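The list-then-move loop described in the comment above can also be sketched with boto3 instead of a shell loop. The bucket, prefix, and the .csv → .csv.bak transform below are hypothetical placeholders; the client calls are the standard boto3 list/copy/delete methods:

```python
def renamed_key(key, old_ext=".csv", new_ext=".csv.bak"):
    """Pure helper: compute the new key for a rename.
    Returns None for keys that should be left alone."""
    if not key.endswith(old_ext):
        return None
    return key[: -len(old_ext)] + new_ext

def rename_all(bucket, prefix):
    """List every object under `prefix` and move the matching ones.
    boto3 is imported lazily so the helper above stays testable offline."""
    import boto3
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            new_key = renamed_key(obj["Key"])
            if new_key is None:
                continue
            # mv = server-side copy + delete, same as the CLI does
            s3.copy_object(Bucket=bucket, Key=new_key,
                           CopySource={"Bucket": bucket, "Key": obj["Key"]})
            s3.delete_object(Bucket=bucket, Key=obj["Key"])

print(renamed_key("reports/jan.csv"))  # → reports/jan.csv.bak
```

Using the paginator avoids the 1000-key limit of a single list call.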
19

I've just got this working. You can use the AWS SDK for PHP like this:

use Aws\S3\S3Client;

$sourceBucket = '*** Your Source Bucket Name ***';
$sourceKeyname = '*** Your Source Object Key ***';
$targetBucket = '*** Your Target Bucket Name ***';
$targetKeyname = '*** Your Target Key Name ***';        

// Instantiate the client.
$s3 = S3Client::factory();

// Copy an object.
$s3->copyObject(array(
    'Bucket'     => $targetBucket,
    'Key'        => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));

http://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectUsingPHP.html

Tom
14

This is now possible for files: select the file, then select Actions > Rename in the GUI.

To rename a folder, you instead have to create a new folder, select the contents of the old one, and copy/paste them across (under "Actions" again).

Jethro
  • Note that you'll need to click on the bucket name, and child prefixes (and not the radio button) in case you want to choose a prefixed destination inside a bucket. – Antwan Aug 11 '20 at 13:25
  • Also note this is now called "Actions" and not "More". They are also available via context menu. – Antwan Aug 11 '20 at 13:25
13

There are two ways to rename a file on AWS S3 storage:

1. Using the CLI tool:

aws s3 --recursive mv s3://bucket-name/dirname/oldfile s3://bucket-name/dirname/newfile

2. Using the SDK (PHP shown):

$s3->copyObject(array(
    'Bucket'     => $targetBucket,
    'Key'        => $targetKeyname,
    'CopySource' => "{$sourceBucket}/{$sourceKeyname}",
));
P_O_I_S_O_N
13

To rename a folder (which is technically a set of objects with a common prefix as key) you can use the AWS CLI move command with the --recursive option.

aws s3 mv s3://bucket/old_folder s3://bucket/new_folder --recursive
tuomastik
Alireza
9

There is no way to rename a folder through the GUI; the fastest (and easiest, if you like the GUI) way to achieve this is to perform a plain old copy. To achieve this: create the new folder on S3 using the GUI, go to your old folder, select all, mark "copy", then navigate to the new folder and choose "paste". When done, remove the old folder.

This simple method is very fast because it copies from S3 to itself (no need to re-upload or anything like that), and it also maintains the permissions and metadata of the copied objects as you would expect.

orcaman
  • @Trisped In my testing just a moment ago, *files* may be renamed using the web GUI, but not *folders*. – rinogo Feb 17 '20 at 18:13
6

Here's how you do it in .NET, using S3 .NET SDK:

var client = new Amazon.S3.AmazonS3Client(_credentials, _config);
client.CopyObject(oldBucketName, oldfilepath, newBucketName, newFilePath);
client.DeleteObject(oldBucketName, oldfilepath);

P.S. Try to use the "Async" versions of the client methods where possible; I haven't done so here for readability

Alex from Jitbit
5

This works for renaming a file within the same folder:

aws s3 mv s3://bucketname/folder_name1/test_original.csv s3://bucketname/folder_name1/test_renamed.csv
Adiii
Tech Support
4

Below is a code example to rename a file on S3. My file was named part-000* because it was Spark output; I copy it to another file name in the same location and delete the part-000* original:

import boto3

client = boto3.client('s3')
response = client.list_objects(
    Bucket='lsph',
    MaxKeys=10,
    Prefix='03curated/DIM_DEMOGRAPHIC/',
    Delimiter='/'
)
name = response["Contents"][0]["Key"]
copy_source = {'Bucket': 'lsph', 'Key': name}
client.copy_object(Bucket='lsph', CopySource=copy_source,
                   Key='03curated/DIM_DEMOGRAPHIC/' + 'DIM_DEMOGRAPHIC.json')
client.delete_object(Bucket='lsph', Key=name)
Vikas
2

Files and folders are in fact objects in S3. You should use PUT Object - Copy to rename them. See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
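With boto3, for example, the copy-plus-delete pair behind a rename looks roughly like this (bucket and key names below are hypothetical; `copy_object` and `delete_object` are the standard client methods wrapping PUT Object - Copy and DELETE Object):

```python
def copy_source(bucket, key):
    """copy_object's CopySource also accepts the 'bucket/key' string form."""
    return f"{bucket}/{key}"

def rename_object(bucket, old_key, new_key):
    """Rename = server-side copy to the new key, then delete the old key.
    boto3 is imported lazily so the sketch loads without AWS credentials."""
    import boto3
    s3 = boto3.client("s3")
    s3.copy_object(Bucket=bucket, Key=new_key,
                   CopySource=copy_source(bucket, old_key))
    s3.delete_object(Bucket=bucket, Key=old_key)

print(copy_source("my-bucket", "docs/old-name.txt"))  # → my-bucket/docs/old-name.txt
```

The `aws s3 mv` command performs the same two API calls under the hood.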

petertc
  • while some languages don't have an SDK offered by AWS, the big ones (`Python, Ruby, Java, C#, PHP, Node.js, Android, iOS, browser JavaScript`) do and there's no reason not to use them http://aws.amazon.com/tools/ – Don Cheadle Mar 04 '15 at 23:00
  • The Java SDK from AWS is too large (and monolithic) for client-side applications. – Jesse Barnum Oct 02 '15 at 14:11
2

Rename all the *.csv.err files in the <<bucket>>/landing dir to *.csv files with s3cmd:

 export aws_profile='foo-bar-aws-profile'
 while read -r f ; do tgt_file=$(echo $f|perl -ne 's/^(.*).csv.err/$1.csv/g;print'); \
        echo s3cmd -c ~/.aws/s3cmd/$aws_profile.s3cfg mv $f $tgt_file; \
 done < <(s3cmd -r -c ~/.aws/s3cmd/$aws_profile.s3cfg ls --acl-public --guess-mime-type \
        s3://$bucket | grep -i landing | grep csv.err | cut -d" " -f5)
Yordan Georgiev
1

As Naaz answered, direct renaming on S3 is not possible.

I have attached a code snippet which will copy all the contents.

The code is working; just add your AWS access key and secret key.

Here's what it does:

-> copies the source folder contents (nested files and folders) and pastes them into the destination folder

-> when the copying is complete, deletes the source folder

package com.bighalf.doc.amazon;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.List;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class Test {

public static boolean renameAwsFolder(String bucketName,String keyName,String newName) {
    boolean result = false;
    try {
        AmazonS3 s3client = getAmazonS3ClientObject();
        List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
        //some meta data to create empty folders start
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(0);
        InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
        //some meta data to create empty folders end

        //the final location is where the child folder contents of the existing folder should go
        String finalLocation = keyName.substring(0,keyName.lastIndexOf('/')+1)+newName;
        for (S3ObjectSummary file : fileList) {
            String key = file.getKey();
            //updating the child folder location with the new location
            String destinationKeyName = key.replace(keyName,finalLocation);
            if(key.charAt(key.length()-1)=='/'){
                //if the key ends with suffix (/), it is a folder
                PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, destinationKeyName, emptyContent, metadata);
                s3client.putObject(putObjectRequest);
            } else {
                //if the key does not end with suffix (/), it is a file
                CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName, 
                        file.getKey(), bucketName, destinationKeyName);
                s3client.copyObject(copyObjRequest);
            }
        }
        boolean isFolderDeleted = deleteFolderFromAws(bucketName, keyName);
        return isFolderDeleted;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return result;
}

public static boolean deleteFolderFromAws(String bucketName, String keyName) {
    boolean result = false;
    try {
        AmazonS3 s3client = getAmazonS3ClientObject();
        //deleting folder children
        List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
        for (S3ObjectSummary file : fileList) {
            s3client.deleteObject(bucketName, file.getKey());
        }
        //deleting actual passed folder
        s3client.deleteObject(bucketName, keyName);
        result = true;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return result;
}

public static void main(String[] args) {
    intializeAmazonObjects();
    boolean result = renameAwsFolder(bucketName, keyName, newName);
    System.out.println(result);
}

private static AWSCredentials credentials = null;
private static AmazonS3 amazonS3Client = null;
private static final String ACCESS_KEY = "";
private static final String SECRET_ACCESS_KEY = "";
private static final String bucketName = "";
private static final String keyName = "";
//renaming folder c to x from key name
private static final String newName = "";

public static void intializeAmazonObjects() {
    credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_ACCESS_KEY);
    amazonS3Client = new AmazonS3Client(credentials);
}

public static AmazonS3 getAmazonS3ClientObject() {
    return amazonS3Client;
}

}

Mateen
  • Please consider edit your code as this implementation doesn't return all content, as you implied, 'cause when you call the listObjects(bucketName, keyName), it returns at most 1000 items, you should call ObjectListing.isTruncated() method to know if a new request call is necessary. Consider this as a reference https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingObjectKeysUsingJava.html – le0diaz Nov 02 '18 at 14:25
  • This works but I also agree with the comment above. Just replace List fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries(); with ObjectListing objectListing = s3.listObjects(bucketName, keyName); List fileList = new ArrayList<>(); fileList.addAll(objectListing.getObjectSummaries()); while (objectListing.isTruncated()) { objectListing = s3.listNextBatchOfObjects(objectListing); fileList.addAll(objectListing.getObjectSummaries()); } – Karthik Mar 15 '19 at 03:34
1

In the AWS console, if you navigate to S3, you will see your folders listed. If you navigate into a folder, you will see the object(s) listed. Right-click and you can rename. Or, you can check the box in front of your object, then from the pull-down menu named ACTIONS, you can select Rename. Just worked for me, 3-31-2019

brasofilo
1082E1984
  • "rename" is greyed out for me for folders, and the internet is full of questions like "why is rename greyed out for folders in S3 browser?" – Steve Jan 28 '20 at 15:21
  • I have the same question. We have a folder within a bucket that we'd like to rename. But when I select the folder, "Actions > Rename object" is greyed out. – commadelimited Feb 16 '21 at 20:42
1

If you want to rename a lot of files in an S3 folder, you can run the following script:

    FILES=$(aws s3api list-objects --bucket your_bucket --prefix 'your_path' --delimiter '/' | jq -r '.Contents[] | select(.Size > 0) | .Key' | sed '<your_rename_here>')
    for i in $FILES
    do
      aws s3 mv s3://<your_bucket>/${i}.gz s3://<your_bucket>/${i}
    done
1

What I did was create a new folder and move the old file objects into the new folder.

Deepak Poojari
0

There seem to be a lot of 'issues' with folder structures in S3, as the storage is flat.

I have a Django project where I needed the ability to rename a folder but still keep the directory structure intact, meaning empty folders would need to be copied and stored in the renamed directory as well.

The AWS CLI is great, but neither cp nor sync nor mv copied empty folders (i.e. keys ending in '/') over to the new folder location, so I used a mixture of boto3 and the AWS CLI to accomplish the task.

More or less, I find all folders in the renamed directory, use boto3 to put them in the new location, then cp the data with the AWS CLI, and finally remove it.

import threading

import os
from django.conf import settings
from django.contrib import messages
from django.core.files.storage import default_storage
from django.shortcuts import redirect
from django.urls import reverse

def rename_folder(request, client_url):
    """
    :param request:
    :param client_url:
    :return:
    """
    current_property = request.session.get('property')
    if request.POST:
        # name the change
        new_name = request.POST['name']
        # old full path with www.[].com?
        old_path = request.POST['old_path']
        # remove the query string
        old_path = ''.join(old_path.split('?')[0])
        # remove the .com prefix item so we have the path in the storage
        old_path = ''.join(old_path.split('.com/')[-1])
        # remove empty values, this will happen at end due to these being folders
        old_path_list = [x for x in old_path.split('/') if x != '']

        # remove the last folder element with split()
        base_path = '/'.join(old_path_list[:-1])
        # # now build the new path
        new_path = base_path + f'/{new_name}/'
        # remove empty variables
        # print(old_path_list[:-1], old_path.split('/'), old_path, base_path, new_path)
        endpoint = settings.AWS_S3_ENDPOINT_URL
        # # recursively add the files
        copy_command = f"aws s3 --endpoint={endpoint} cp s3://{old_path} s3://{new_path} --recursive"
        remove_command = f"aws s3 --endpoint={endpoint} rm s3://{old_path} --recursive"
        
        # get_creds() is nothing special it simply returns the elements needed via boto3
        client, resource, bucket, resource_bucket = get_creds()
        path_viewing = f'{"/".join(old_path.split("/")[1:])}'
        directory_content = default_storage.listdir(path_viewing)

        # loop over folders and add them by default, aws cli does not copy empty ones
        # so this is used to accommodate
        folders, files = directory_content
        for folder in folders:
            new_key = new_path+folder+'/'
            # we must remove bucket name for this to work
            new_key = new_key.split(f"{bucket}/")[-1]
            # push this to new thread
            threading.Thread(target=put_object, args=(client, bucket, new_key,)).start()
            print(f'{new_key} added')

        # # run command, which will copy all data
        os.system(copy_command)
        print('Copy Done...')
        os.system(remove_command)
        print('Remove Done...')

        # print(bucket)
        print(f'Folder renamed.')
        messages.success(request, f'Folder Renamed to: {new_name}')

    return redirect(request.META.get('HTTP_REFERER', f"{reverse('home', args=[client_url])}"))

ViaTech
-1

S3DirectoryInfo has a MoveTo method that will move one directory into another directory, such that the moved directory will become a subdirectory of the other directory with the same name as it originally had.

The extension method below will move one directory to another directory, i.e. the moved directory will become the other directory. What it actually does is create the new directory, move all the contents of the old directory into it, and then delete the old one.

public static class S3DirectoryInfoExtensions
{
    public static S3DirectoryInfo Move(this S3DirectoryInfo fromDir, S3DirectoryInfo toDir)
    {
        if (toDir.Exists)
            throw new ArgumentException("Destination for Rename operation already exists", "toDir");
        toDir.Create();
        foreach (var d in fromDir.EnumerateDirectories())
            d.MoveTo(toDir);
        foreach (var f in fromDir.EnumerateFiles())
            f.MoveTo(toDir);
        fromDir.Delete();
        return toDir;
    }
}
HansA
-1

There is a piece of software with which you can perform different kinds of operations on an S3 bucket.

Software Name: S3 Browser

S3 Browser is a freeware Windows client for Amazon S3 and Amazon CloudFront. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. Amazon CloudFront is a content delivery network (CDN). It can be used to deliver your files using a global network of edge locations.


If it's only a one-time task, then you can use the command line to perform these operations:

(1) Rename the folder in the same bucket:

s3cmd --access_key={access_key} --secret_key={secret_key} mv s3://bucket/folder1/* s3://bucket/folder2/

(2) Rename the Bucket:

s3cmd --access_key={access_key} --secret_key={secret_key} mv s3://bucket1/folder/* s3://bucket2/folder/

Where,

{access_key} = your valid access key for the s3 client

{secret_key} = your valid secret key for the s3 client

It's working fine without any problem.

Thanks

Radadiya Nikunj