Is it possible to delete a folder (in an S3 bucket) and all of its contents with a single API request using the AWS SDK for Java? In the browser console we can delete a folder and its contents with a single click, and I would hope the same behavior is available through the APIs as well.
7 Answers
There is no such thing as folders in S3. There are simply files (objects) with slashes in the filenames (keys).
The S3 browser console will visualize these slashes as folders, but they're not real.
You can delete all files with the same prefix, but first you need to look them up with listObjects(), then you can batch delete them.
For a code snippet using the Java SDK, please refer to Deleting multiple objects.
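As a rough sketch of that list-then-batch-delete flow (assuming an initialized v1 `AmazonS3` client plus your own `bucketName` and `prefix` — none of these come from the linked docs verbatim):

```java
// Sketch only: collect every key under the prefix, paging through
// truncated listings, then delete in batches of at most 1000 keys
// (the DeleteObjects limit).
ObjectListing listing = s3Client.listObjects(bucketName, prefix);
List<String> keys = new ArrayList<>();
while (true) {
    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
        keys.add(summary.getKey());
    }
    if (!listing.isTruncated()) break;
    listing = s3Client.listNextBatchOfObjects(listing);
}
for (int i = 0; i < keys.size(); i += 1000) {
    List<String> batch = keys.subList(i, Math.min(i + 1000, keys.size()));
    s3Client.deleteObjects(new DeleteObjectsRequest(bucketName)
            .withKeys(batch.toArray(new String[0])));
}
```

This needs live AWS credentials to run; it is only meant to show the shape of the loop.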
- Thank you for your reply; I am already using listObjects and batch delete. It is a lengthy process to fetch and then delete :( – Munish Dhiman Feb 24 '17 at 15:36
- @MunishDhiman this is the only way you have; as clearly mentioned in the AWS docs and in my answer, S3 doesn't have a concept of folders and you have to delete each and every object yourself. The batch call will be optimized and helpful for bulk deletes. – Amit Feb 24 '17 at 16:27
- I love seeing the tried and true comment "There is no such thing as folders in S3", especially when I'm reminded that AWS API designers also get confused by that: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-transfermanager.html#tranfermanager-download-directory – Austin Poole Jul 14 '21 at 14:59
- The one thing to note here is that you have to batch, since list_objects returns partial results when getting more than 1000 objects (best to check the API docs for how to deal with the marker options, but it's basically paging). – DoNuT Sep 06 '22 at 08:39
You can specify keyPrefix in ListObjectsRequest.
For example, consider a bucket that contains the following keys:
- foo/bar/baz
- foo/bar/bash
- foo/bar/bang
- foo/boo
And you want to delete files from foo/bar/baz.
if (s3Client.doesBucketExist(bucketName)) {
    ListObjectsRequest listObjectsRequest = new ListObjectsRequest()
            .withBucketName(bucketName)
            .withPrefix("foo/bar/baz");
    ObjectListing objectListing = s3Client.listObjects(listObjectsRequest);
    while (true) {
        for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
            s3Client.deleteObject(bucketName, objectSummary.getKey());
        }
        if (objectListing.isTruncated()) {
            objectListing = s3Client.listNextBatchOfObjects(objectListing);
        } else {
            break;
        }
    }
}

- How do you delete the folder baz after the files inside have been deleted? Do you use `s3Client.deleteObject(bucketName, "foo/bar/baz");`? – Maurice Jul 08 '18 at 12:40
- Be careful: this may take forever, as you're sending a single request per object. Better to use `DeleteObjectsRequest` if you have a lot of files. – bachr Aug 12 '20 at 22:45
- This approach could be very dangerous regarding data loss. There is no folder concept in S3, but it's very easy to organize objects that way. In that case, if you have two objects with the keys "foo/bar/baz/1.doc" and "foo/bar/bazzer/2.doc", the code above will delete both objects, causing unwanted behaviour in most cases. I recommend adding a trailing slash to the prefix to avoid this issue. – shawnest Jul 28 '22 at 13:22
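To illustrate the trailing-slash point from the comment above, here is a minimal sketch (the helper name `asFolderPrefix` is hypothetical, not from any SDK) of normalizing a prefix before listing:

```java
public class PrefixUtil {
    // Hypothetical helper: ensure the prefix ends with "/" so that
    // "foo/bar/baz" matches only keys under the baz/ "folder",
    // not siblings like "foo/bar/bazzer/...".
    static String asFolderPrefix(String prefix) {
        return prefix.endsWith("/") ? prefix : prefix + "/";
    }

    public static void main(String[] args) {
        System.out.println(asFolderPrefix("foo/bar/baz"));   // foo/bar/baz/
        // "foo/bar/bazzer/2.doc" starts with "foo/bar/baz" but NOT with "foo/bar/baz/":
        System.out.println("foo/bar/bazzer/2.doc".startsWith(asFolderPrefix("foo/bar/baz")));  // false
        System.out.println("foo/bar/baz/1.doc".startsWith(asFolderPrefix("foo/bar/baz")));     // true
    }
}
```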
There is no option in the Java SDK to give a folder name, or more specifically a prefix, to delete files. But there is an option to give an array of the keys you want to delete (see the DeleteObjectsRequest docs for details). Using this, I have written a small method to delete all files corresponding to a prefix.
private AmazonS3 s3client = <Your s3 client>;
private String bucketName = <your bucket name, can be signed or unsigned>;

public void deleteDirectory(String prefix) {
    ObjectListing objectList = this.s3client.listObjects(this.bucketName, prefix);
    List<S3ObjectSummary> objectSummeryList = objectList.getObjectSummaries();
    String[] keysList = new String[objectSummeryList.size()];
    int count = 0;
    for (S3ObjectSummary summery : objectSummeryList) {
        keysList[count++] = summery.getKey();
    }
    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(bucketName).withKeys(keysList);
    this.s3client.deleteObjects(deleteObjectsRequest);
}

- Works well if your objects are within the limits of a page; otherwise you need to check `isTruncated()` on the result / implement pagination. – Joe Jan 12 '21 at 10:04
- I would also suggest checking the size of objectSummeryList before calling DeleteObjectsRequest. – Cássio Morales Nov 29 '21 at 14:51
You can try the methods below; they handle deletion even for truncated pages, and recursively delete all the contents of the given directory:
public Set<String> listS3DirFiles(String bucket, String dirPrefix) {
    ListObjectsV2Request s3FileReq = new ListObjectsV2Request()
            .withBucketName(bucket)
            .withPrefix(dirPrefix)
            .withDelimiter("/");
    Set<String> filesList = new HashSet<>();
    ListObjectsV2Result objectsListing;
    try {
        do {
            objectsListing = amazonS3.listObjectsV2(s3FileReq);
            objectsListing.getCommonPrefixes().forEach(folderPrefix -> {
                filesList.add(folderPrefix);
                Set<String> tempPrefix = listS3DirFiles(bucket, folderPrefix);
                filesList.addAll(tempPrefix);
            });
            for (S3ObjectSummary summary : objectsListing.getObjectSummaries()) {
                filesList.add(summary.getKey());
            }
            s3FileReq.setContinuationToken(objectsListing.getNextContinuationToken());
        } while (objectsListing.isTruncated());
    } catch (SdkClientException e) {
        System.out.println(e.getMessage());
        throw e;
    }
    return filesList;
}

public boolean deleteDirectoryContents(String bucket, String directoryPrefix) {
    Set<String> keysSet = listS3DirFiles(bucket, directoryPrefix);
    if (keysSet.isEmpty()) {
        System.out.println("Given directory " + directoryPrefix + " doesn't have any files");
        return false;
    }
    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(bucket)
            .withKeys(keysSet.toArray(new String[0]));
    try {
        amazonS3.deleteObjects(deleteObjectsRequest);
    } catch (SdkClientException e) {
        System.out.println(e.getMessage());
        throw e;
    }
    return true;
}

First you need to fetch all object keys starting with the given prefix:
public List<String> list(String keyPrefix) {
    var objectListing = client.listObjects("bucket-name", keyPrefix);
    var paths =
        objectListing.getObjectSummaries().stream()
            .map(s3ObjectSummary -> s3ObjectSummary.getKey())
            .collect(Collectors.toList());
    while (objectListing.isTruncated()) {
        objectListing = client.listNextBatchOfObjects(objectListing);
        paths.addAll(
            objectListing.getObjectSummaries().stream()
                .map(s3ObjectSummary -> s3ObjectSummary.getKey())
                .toList());
    }
    return paths.stream().sorted().collect(Collectors.toList());
}
Then call deleteObjects:
client.deleteObjects(new DeleteObjectsRequest("bucket-name").withKeys(list("some-prefix").toArray(new String[0])));
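One caveat not handled above: DeleteObjects accepts at most 1,000 keys per request, so for larger prefixes the key list has to be split into batches first. A sketch of a plain partitioning helper (the name `BatchUtil.partition` is hypothetical, not an SDK method):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchUtil {
    // Hypothetical helper: split keys into sublists of at most batchSize
    // entries, so each DeleteObjectsRequest stays within the 1000-key limit.
    static List<List<String>> partition(List<String> keys, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            batches.add(new ArrayList<>(keys.subList(i, Math.min(i + batchSize, keys.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) keys.add("some-prefix/file-" + i);
        List<List<String>> batches = partition(keys, 1000);
        System.out.println(batches.size());         // 3
        System.out.println(batches.get(2).size());  // 500
    }
}
```

Each batch would then be sent as its own `new DeleteObjectsRequest("bucket-name").withKeys(batch.toArray(new String[0]))` call.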

The answers here seem to be using an older version of the SDK. Here's a solution for the v2 AWS S3 Java SDK. It uses ListObjectsV2Iterable to send each DeleteObjectsRequest within the 1,000-key limit, and it also checks for an empty list (a possible response from list objects, but not accepted by delete objects).
void deleteFolder(String bucket, String prefix) {
    ListObjectsV2Request listRequest =
        ListObjectsV2Request.builder()
            .bucket(bucket)
            .prefix(prefix)
            .build();
    ListObjectsV2Iterable paginatedListResponse = s3Client.listObjectsV2Paginator(listRequest);
    for (ListObjectsV2Response listResponse : paginatedListResponse) {
        List<ObjectIdentifier> objects =
            listResponse.contents().stream()
                .map(s3Object -> ObjectIdentifier.builder().key(s3Object.key()).build())
                .toList();
        if (objects.isEmpty()) {
            break;
        }
        DeleteObjectsRequest deleteRequest =
            DeleteObjectsRequest.builder()
                .bucket(bucket)
                .delete(Delete.builder().objects(objects).build())
                .build();
        s3Client.deleteObjects(deleteRequest);
    }
}
Where s3Client is your instance of S3Client.

You can try this:
void deleteS3Folder(String bucketName, String folderPath) {
    for (S3ObjectSummary file : s3.listObjects(bucketName, folderPath).getObjectSummaries()) {
        s3.deleteObject(bucketName, file.getKey());
    }
}
