  1. I am using AmazonS3Client to upload files to the Amazon S3 file store.
  2. When I try to upload multiple files at a time it throws exceptions; the failure happens when the same file is uploaded from multiple threads.

I tried the following client configuration settings (sketched below):

  1. connectionTimeout = 50000 ms
  2. maxConnections = 500
  3. socketTimeout = 50000 ms
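
For reference, a minimal sketch of that configuration (AWS SDK for Java 1.x; the credential values are placeholders):

// Client configuration matching the settings listed above.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setConnectionTimeout(50000); // ms
clientConfig.setSocketTimeout(50000);     // ms
clientConfig.setMaxConnections(500);

// Placeholder credentials; replace with your own credential provider.
AmazonS3Client s3Client = new AmazonS3Client(new BasicAWSCredentials("accessKey", "secretKey"), clientConfig);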

Exception stacktrace:

com.amazonaws.AmazonClientException: Data read has a different length than the expected: dataLength=8192; expectedLength=79352; includeSkipped=false; in.getClass()=class com.amazonaws.internal.ResettableInputStream; markedSupported=true; marked=0; resetSinceLastMarked=false; markCount=1; resetCount=0
                    at com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:150)
                    at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:110)
                    at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:73)
                    at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:151)
                    at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:73)
                    at org.apache.http.entity.InputStreamEntity.writeTo(InputStreamEntity.java:98)
                    at com.amazonaws.http.RepeatableInputStreamRequestEntity.writeTo(RepeatableInputStreamRequestEntity.java:153)
                    at org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:98)
                    at org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
                    at org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:122)
                    at org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:271)
                    at org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:197)
                    at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:257)
                    at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doSendRequest(SdkHttpRequestExecutor.java:47)
                    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
                    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:713)
                    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:518)
                    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
                    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
                    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:647)
                    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:441)
                    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:292)
                    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3655)
                    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1424)
                    at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:135)
                    at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:127)
                    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:129)
                    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
                    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
                    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                    at java.lang.Thread.run(Thread.java:745)
  • I found the solution to this problem: I was trying to send the same file multiple times at once. That is why it gives the error, because AmazonS3Client uploads the file in multiple parts. – Mangesh Bhapkar Jan 15 '15 at 08:51
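
A hypothetical sketch of what that fix looks like (AWS SDK for Java 1.x, assuming an AmazonS3 client named s3Client and placeholder bucket/key names): give each concurrent upload its own request, so no two threads share and exhaust the same input stream.

for (int i = 0; i < 3; i++) {
    final String key = "uploads/copy-" + i + "/Test.mp4"; // placeholder key per upload
    new Thread(() -> {
        // A File-based request lets the SDK open (and, on retries, re-open) the stream itself.
        s3Client.putObject(new PutObjectRequest("my-bucket", key, new File("D:/Test.mp4")));
    }).start();
}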

4 Answers


This answer was written by Hanson from AWS:

Is it possible that the input stream that is specified in the request has already been fully read?

If the input stream is a file stream, have you tried specifying the original file in the request instead of the input stream of the file?
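
In other words, a minimal sketch of that suggestion (AWS SDK for Java 1.x; the path, bucket, and key names are placeholders): hand the SDK the File itself so it can determine the length and reopen the stream on retries.

File file = new File("D:/Test.mp4"); // placeholder path
TransferManager tm = new TransferManager(credentials);
Upload upload = tm.upload("my-bucket", "my-key", file); // File overload, no InputStream needed
upload.waitForCompletion();
tm.shutdownNow();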

lucasddaniel

Improving on @lucasddaniel's answer with sample code.

AmazonS3Client putObject: No content length specified for stream data. Stream contents will be buffered in memory and could result in out of memory errors.

Solution: specify the ObjectMetadata content length.

File tempFile = new File("D:/Test.mp4");
String bucketName = "YashFiles", filePath = "local/mp4/";

// Read the file once to determine its length for the object metadata.
FileInputStream sampleStream = new FileInputStream(tempFile);
byte[] byteArray = IOUtils.toByteArray(sampleStream);
Long contentLength = Long.valueOf(byteArray.length);
sampleStream.close();

ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(contentLength);

TransferManager tm = new TransferManager(credentials);

// Open a fresh stream for the upload so the request gets a stream that has not been read yet.
FileInputStream stream = new FileInputStream(tempFile);
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, filePath, stream, objectMetadata);
Upload myUpload = tm.upload(putObjectRequest);
if (!myUpload.isDone()) {
    System.out.println("Transfer: " + myUpload.getDescription());
    System.out.println("  - State: " + myUpload.getState());
    System.out.println("  - Progress: " + myUpload.getProgress().getBytesTransferred());
}
myUpload.waitForCompletion();

tm.shutdownNow();
stream.close();

org.apache.commons.io.FileUtils.forceDelete(tempFile);

Amazon S3: Checking Key Exists and generating PresignedUrl

Yash
  • That does not solve the problem. You still loaded the entire file into memory with IOUtils.toByteArray(); it just moves loading the whole file into memory from Amazon's code into yours. – BrianC May 15 '19 at 17:43
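
A minimal alternative sketch (not from the answer above, reusing its tempFile, bucketName, filePath, and credentials variables) that avoids buffering: take the content length from the File itself, or pass the File to TransferManager and let the SDK work it out.

ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(tempFile.length()); // length taken from the file, nothing buffered in memory

TransferManager tm = new TransferManager(credentials);
Upload myUpload = tm.upload(new PutObjectRequest(bucketName, filePath, new FileInputStream(tempFile), objectMetadata));
myUpload.waitForCompletion();
tm.shutdownNow();

Or simply: tm.upload(bucketName, filePath, tempFile);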

I saw that error message when I was trying to do an S3.putObject(MyObject) call.

I had to update objectMetadata.setContentLength( [length of your content] );

For example:

String dataset = "Some value you want to add to the S3 bucket";
ObjectMetadata objectMetadata = new ObjectMetadata();
InputStream content = new ByteArrayInputStream(dataset.getBytes(StandardCharsets.UTF_8));
objectMetadata.setContentLength(content.available()); // length of the buffered bytes
objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
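
The metadata is then passed along with the put request; a one-line sketch, assuming an AmazonS3 client named s3 and placeholder bucket/key names:

s3.putObject(new PutObjectRequest("my-bucket", "my-key", content, objectMetadata));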
Gene
  • That does not solve the problem. You still had to load the entire data set into memory to get the length; it just moves loading the entire data set into memory from Amazon's code into yours. – BrianC May 15 '19 at 17:58
  • I don't follow. Aren't all objects loaded into memory when they are instantiated? – Gene May 16 '19 at 23:49
  • Yes, that is my point. That is why you cannot load the data into an object. If you transfer it as an input or output stream it just passes through and does not get loaded; if you load the data into an object, the entire file ends up in memory. – BrianC May 17 '19 at 18:22
  • What are you seeing in the debugger? I'm not sure how your comment relates to my answer being wrong. If it's wrong, I'll delete it. Can you email me what you are seeing or reference material regarding what you are talking about? My email is gc.genechuang@gmail.com – Gene May 18 '19 at 06:28
...
byte[] f = IOUtils.toByteArray(inputStream); // This reads all bytes of the input stream into memory
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(f.length);
metadata.setContentType(contentType); // Content type of the uploaded file
metadata.setHeader("filename", fileName);
s3.putObject(bucketName, key, new ByteArrayInputStream(f), metadata); // Wrap the bytes in a fresh ByteArrayInputStream so the stream's length matches the declared content length and the S3 client is happy
viniciusalvess