
I decided to use Amazon S3 to upload files, but I find the AWS docs a bit confusing about S3's capabilities on the iOS platform.

I would like to know how my app would act in the following scenarios:

Scenario 1: During the upload user has accidentally lost internet connection

Scenario 2: App crashes during the upload

I've heard that the iOS SDK takes care of such issues itself by resuming the remaining upload when possible, but I failed to find relevant information about this in the docs.

Will the AWSS3 framework cover both of these scenarios? Does it need any additional lines of code to avoid being vulnerable to potential crashes and network errors?

I've found some relevant information for the Android platform.

I'd love to know what I can expect from the following code:

let image = UIImage(named: "12.jpeg")
let fileManager = FileManager.default
let imageData = UIImageJPEGRepresentation(image!, 0.99)
// Write the JPEG data to a file in Documents so the SDK can upload from disk.
// (Use a fixed file name here; interpolating the Data blob into the path was a bug.)
let path = (NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as NSString).appendingPathComponent("12.jpeg")

fileManager.createFile(atPath: path, contents: imageData, attributes: nil)

let fileUrl = URL(fileURLWithPath: path)
let uploadRequest = AWSS3TransferManagerUploadRequest()
uploadRequest?.bucket = "bucketname"
uploadRequest?.key = "folder/12.jpeg"
uploadRequest?.contentType = "image/jpeg"
uploadRequest?.body = fileUrl
uploadRequest?.serverSideEncryption = AWSS3ServerSideEncryption.awsKms
uploadRequest?.uploadProgress = { (bytesSent, totalBytesSent, totalBytesExpectedToSend) -> Void in
    DispatchQueue.main.async {
        print("bytes sent \(bytesSent), total bytes sent \(totalBytesSent), of total \(totalBytesExpectedToSend)")
    }
}

let transferManager = AWSS3TransferManager.default()
// Use the general continuation block rather than the success-only block,
// otherwise the error branch below would never be reached.
transferManager?.upload(uploadRequest).continue(with: AWSExecutor.mainThread(), with: { (task: AWSTask) -> Any? in
    if let error = task.error {
        // The upload failed; inspect the error here.
        print("Upload failed: \(error)")
    } else {
        // Do something with your result.
    }
    return nil
})

Is it already crash/network proof?
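For reference, I also came across AWSS3TransferUtility in the same SDK, which (if I read the docs right) is built on a background NSURLSession, so the system can keep a transfer running even while the app is suspended. This is only a sketch under that assumption; the bucket, key, and `fileUrl` are the same placeholders as above, and the exact `progressBlock`/completion signatures may differ between SDK versions:

```swift
// Hedged sketch: AWSS3TransferUtility as a background-session alternative.
// Assumes `fileUrl` points at the JPEG written to Documents above.
let expression = AWSS3TransferUtilityUploadExpression()
expression.progressBlock = { task, progress in
    DispatchQueue.main.async {
        print("progress: \(progress.fractionCompleted)")
    }
}

let transferUtility = AWSS3TransferUtility.default()
transferUtility.uploadFile(fileUrl,
                           bucket: "bucketname",
                           key: "folder/12.jpeg",
                           contentType: "image/jpeg",
                           expression: expression) { task, error in
    if let error = error {
        print("Upload failed: \(error)")
    } else {
        print("Upload finished")
    }
}
```

I don't know whether this survives an outright crash mid-transfer, which is part of what I'm asking.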

EDIT:

This is the part of docs that sounds ambiguous to me:

S3 provides a multipart upload feature that lets you upload a single object as a set of parts. Each part is a contiguous portion of the object's data, and the object parts are uploaded independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of the object are uploaded, S3 assembles these parts and creates the object.

Does that mean it has its own inherent mechanism to manage that? Say I kill the app while it's uploading a file: when I relaunch it and start the upload again, will it pick up from the last chunk where it left off before I killed the app?
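From what I can tell so far, pause/resume on AWSS3TransferManager is explicit rather than automatic. My understanding (unverified, and the exact signatures may vary by SDK version) is that something like the following is needed, where pausing preserves the multipart state so a later resume continues from the parts already uploaded:

```swift
// Hedged sketch: explicit pause/resume on the same transferManager as above.
// E.g. when the app is about to background or connectivity drops:
transferManager?.pauseAll()

// Later, when connectivity returns or after a relaunch:
transferManager?.resumeAll({ (request) in
    // Called for each request being resumed.
    print("Resuming \(String(describing: request))")
})
```

Whether anything like this works across a hard kill of the process is exactly what I'm unsure about.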

theDC
  • Have you tried manually testing it? Try uploading a large video file or something that will take awhile. While it's processing put your phone in airplane mode. Wait a few minutes and turn airplane mode off and see if it resumes. – Pierce Jan 17 '17 at 17:37
  • I did exactly this experiment with airplane mode and it did not resume when network got back – theDC Jan 17 '17 at 17:38
  • I think you would benefit from reachability checks, look at this answer (may need conversion to Swift 3) http://stackoverflow.com/a/27310748/5378116 – Pierce Jan 17 '17 at 18:00
  • 1
    The code for this is open-source, so you can check specific implementation details there https://github.com/aws/aws-sdk-ios/tree/master/AWSS3 – donkon Jan 17 '17 at 19:17
  • I'm rather looking for a clear answer about capabilities; AWS does not even bother to convert its code to the latest Swift syntax, method signatures have changed, and it's really difficult to work with it – theDC Jan 17 '17 at 21:23
  • @Pierce see my edited question – theDC Jan 17 '17 at 22:47
  • @DCDC - okay I understand what you're asking. I think that the multi-part upload function of AWS is not the same as what you're hoping. MPU is a feature of S3 and S3 IA, and also Glacier I believe that allows you to upload giant files in separate chunks. It doesn't say upload 40%, get interrupted, then upload the other 60%. In fact I think this feature was created for people UL'ing large files that would then lose their connection and have to start over. – Pierce Jan 17 '17 at 22:58
  • @Pierce So, it will start over in case of interruption right? It will not send only the missing chunks of data? – theDC Jan 17 '17 at 22:59
  • Yes I believe so, but to be honest I'm not totally sure. I've never actually used it. We just went over that slightly yesterday in my certification course. I will have to go back and read over my notes. – Pierce Jan 17 '17 at 23:00
  • @Pierce I'd appreciate if you could read over and clarify it :) – theDC Jan 17 '17 at 23:01
  • I will do that for you, but I may not be able to for another hour. – Pierce Jan 17 '17 at 23:02
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/133396/discussion-between-dcdc-and-pierce). – theDC Jan 17 '17 at 23:03

0 Answers