AFAIK, the best you can do here is wrap the upload in a try/catch:
    try
    {
        ...
    }
    catch (AmazonS3Exception e)
    {
        // implement rollback operation
        ...
    }
    catch (Exception e)
    {
        // no possible rollback operation, abort program?
        ...
    }
You can keep track of progress using the UploadDirectoryProgressEvent. In the event of an error, if you want to clean up, you'd have to compare the progress, note the diffs, and take action as appropriate (e.g. by removing objects if you don't want to keep them in S3 and you want the entire operation to be atomic).
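As a rough sketch of that idea (not code from your question; the bucket name, local directory, KeyPrefix and the "KeyPrefix plus relative path" key layout are my assumptions), you could record each completed file and delete those keys on failure:

    using System.Collections.Generic;
    using System.Linq;
    using Amazon.S3;
    using Amazon.S3.Model;
    using Amazon.S3.Transfer;

    // Sketch: record which files completed, then delete them if the upload fails.
    // Bucket name, directory path and key layout are placeholders.
    var s3Client = new AmazonS3Client();
    var transferUtility = new TransferUtility(s3Client);
    var uploadedKeys = new HashSet<string>();

    var request = new TransferUtilityUploadDirectoryRequest
    {
        BucketName = "my-bucket",          // assumption
        Directory = @"C:\data\to-upload",  // assumption
        KeyPrefix = "backup/",             // assumption
        UploadFilesConcurrently = false    // keeps CurrentFile populated (see below)
    };

    request.UploadDirectoryProgressEvent += (sender, args) =>
    {
        // With sequential uploads, CurrentFile tells us which file just finished.
        if (args.CurrentFile != null &&
            args.TransferredBytesForCurrentFile == args.TotalNumberOfBytesForCurrentFile)
        {
            // Assumed key layout: KeyPrefix + path relative to the source directory.
            var relativeKey = args.CurrentFile
                .Substring(request.Directory.Length)
                .TrimStart('\\', '/')
                .Replace('\\', '/');
            uploadedKeys.Add(request.KeyPrefix + relativeKey);
        }
    };

    try
    {
        await transferUtility.UploadDirectoryAsync(request);
    }
    catch (AmazonS3Exception)
    {
        // Rollback: delete whatever made it to S3 before the failure
        // (DeleteObjects accepts up to 1,000 keys per call).
        if (uploadedKeys.Count > 0)
        {
            await s3Client.DeleteObjectsAsync(new DeleteObjectsRequest
            {
                BucketName = request.BucketName,
                Objects = uploadedKeys.Select(k => new KeyVersion { Key = k }).ToList()
            });
        }
        throw;
    }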
Pay special attention to the fact that:
    var request = new TransferUtilityUploadDirectoryRequest
    {
        UploadFilesConcurrently = true,
    };
will have an impact on your rollback mechanism. Setting UploadFilesConcurrently to true implies that the UploadDirectoryProgressArgs received in UploadDirectoryProgressEvent will have a null value for CurrentFile. In that case you can only implement a rollback by deleting the full remote directory, i.e. everything under the key prefix.
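A minimal sketch of that coarser rollback, assuming a plain AWSSDK.S3 client and placeholder bucket/prefix names, is to list and delete everything under the prefix:

    using System.Linq;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;

    // Sketch: remove every object under a prefix (the "remote directory").
    // Bucket name and prefix below are placeholders.
    await DeleteRemoteDirectoryAsync(new AmazonS3Client(), "my-bucket", "backup/");

    static async Task DeleteRemoteDirectoryAsync(IAmazonS3 s3Client, string bucketName, string keyPrefix)
    {
        var listRequest = new ListObjectsV2Request
        {
            BucketName = bucketName,
            Prefix = keyPrefix
        };

        ListObjectsV2Response listResponse;
        do
        {
            listResponse = await s3Client.ListObjectsV2Async(listRequest);
            if (listResponse.S3Objects.Count > 0)
            {
                await s3Client.DeleteObjectsAsync(new DeleteObjectsRequest
                {
                    BucketName = bucketName,
                    // One page of ListObjectsV2 results (max 1,000 keys) fits in one DeleteObjects call.
                    Objects = listResponse.S3Objects
                        .Select(o => new KeyVersion { Key = o.Key })
                        .ToList()
                });
            }
            listRequest.ContinuationToken = listResponse.NextContinuationToken;
        } while (listResponse.IsTruncated == true);
    }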
Note also the documentation on multi-part uploads:
If a multipart upload is interrupted, TransferUtility will attempt to abort the multipart upload. Under certain circumstances (network outage, power failure, etc.), TransferUtility will not be able to abort the multipart upload. In this case, in order to stop getting charged for the storage of uploaded parts, you should manually invoke TransferUtility.AbortMultipartUploads() to abort the incomplete multipart uploads.
The documentation has examples of both tracking and aborting multipart uploads.
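A minimal sketch of that cleanup call (the bucket name and cut-off date are just example values):

    using System;
    using Amazon.S3;
    using Amazon.S3.Transfer;

    // Sketch: abort incomplete multipart uploads initiated more than a day ago,
    // so you stop paying for orphaned parts. Bucket name is a placeholder.
    var transferUtility = new TransferUtility(new AmazonS3Client());
    await transferUtility.AbortMultipartUploadsAsync(
        "my-bucket",                     // assumption
        DateTime.UtcNow.AddDays(-1));    // abort uploads initiated before this date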
As for your other question:
Does it makes sense to use UploadDirectory since atomic operations are at file (object) level ?
I'd say that depends. The code to upload an entire directory of files might be somewhat cleaner, but since you may still have to track progress and clean up, you might as well process the files one by one.
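If you do go file by file, a sketch of what that might look like (the bucket name, local directory and key layout are my assumptions, not from your question):

    using System.Collections.Generic;
    using System.IO;
    using Amazon.S3;
    using Amazon.S3.Model;
    using Amazon.S3.Transfer;

    // Sketch: upload files individually so you always know exactly what succeeded.
    // Bucket name, local directory and key layout are placeholders.
    var s3Client = new AmazonS3Client();
    var transferUtility = new TransferUtility(s3Client);
    var uploadedKeys = new List<string>();
    var localDirectory = @"C:\data\to-upload";   // assumption

    try
    {
        foreach (var filePath in Directory.EnumerateFiles(localDirectory, "*", SearchOption.AllDirectories))
        {
            var key = "backup/" + Path.GetRelativePath(localDirectory, filePath).Replace('\\', '/');
            await transferUtility.UploadAsync(filePath, "my-bucket", key);
            uploadedKeys.Add(key);   // recorded only after the upload succeeds
        }
    }
    catch (AmazonS3Exception)
    {
        // Roll back the files that did make it before the failure.
        if (uploadedKeys.Count > 0)
        {
            await s3Client.DeleteObjectsAsync(new DeleteObjectsRequest
            {
                BucketName = "my-bucket",
                Objects = uploadedKeys.ConvertAll(k => new KeyVersion { Key = k })
            });
        }
        throw;
    }

You lose the concurrency of UploadFilesConcurrently = true this way, but rollback stays straightforward because you always know exactly which keys were created.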