
I'm trying to upload a file to AWS S3 using the Java AWS SDK. The problem is that my application is unable to upload large files, because the heap reaches its limit and the upload fails with: java.lang.OutOfMemoryError: Java heap space

I personally think extending the heap memory isn't a permanent solution, because I have to upload files of up to 100 GB. What should I do?

Here is the code snippet:

        BasicAWSCredentials awsCreds = new BasicAWSCredentials(AID, Akey);
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.fromName("us-east-2"))
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .build();

        InputStream is = file.getInputStream();

        if (!s3Client.doesBucketExist(ABuck)) {
            s3Client.createBucket(ABuck);
        }
        s3Client.putObject(new PutObjectRequest(ABuck, AFkey, file.getInputStream(), new ObjectMetadata())
                .withCannedAcl(CannedAccessControlList.PublicRead));
Wicky Memon
  • Do you call `ObjectMetadata.setContentLength(fileLength)` when the OutOfMemoryError occurs? [From the javadoc](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/PutObjectRequest.html): "..If not provided, the library will have to **buffer** the contents of the input stream in order to calculate it." – xerx593 Jan 26 '19 at 15:08
  • 1
    Please add your code to the question. Otherwise we can only guess but not really help you. – Codo Jan 26 '19 at 15:37
  • I've added the code. @Codo – Wicky Memon Feb 02 '19 at 17:40
  • @xerx593 I've not provided the file length. What changes should I make into the code? – Wicky Memon Feb 02 '19 at 17:42
  • Please have a look at my answer https://stackoverflow.com/a/64263423/1704634 — it uses an S3OutputStream which automatically switches to a multipart upload in case the stream is too large. It currently uses a 10 MB buffer, but this can be configured smaller/larger. – blagerweij Oct 08 '20 at 15:43

2 Answers


I strongly recommend calling setContentLength() on ObjectMetadata, since:

..If not provided, the library will have to buffer the contents of the input stream in order to calculate it.

(..which will predictably lead to an OutOfMemoryError on sufficiently large files.)

Source: the PutObjectRequest javadoc.

Applied to your code:

 // ...
 ObjectMetadata omd = new ObjectMetadata();
 // a tiny line of code, but with a huge information gain and memory saving! ;)
 omd.setContentLength(file.length());

 s3Client.putObject(new PutObjectRequest(ABuck, AFkey, file.getInputStream(), omd).withCannedAcl(CannedAccessControlList.PublicRead));
 // ...
xerx593
  • Hmm, I get exactly the same error while setting the content length. – Aleksander Lech Apr 29 '21 at 07:28
  • @AleksanderLech: OK, this (is good, it) means you can rule out this particular cause, and the problem is somewhere else in your code. The bad thing about OOM: it is hard to trace! :( ... https://stackoverflow.com/q/37335/592355 – xerx593 Apr 29 '21 at 09:58
  • Intuitive guess: you load the (x-large) file into memory (completely, or in sufficiently large parts) somewhere else! ... (What do you do with `file` before/after S3?) – xerx593 Apr 29 '21 at 09:59

You need to add example code to get a proper answer. If you are dealing with a large object, use TransferManager to upload rather than calling putObject, so the SDK performs a multipart upload instead of buffering the whole object.
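For illustration, a minimal sketch of what that could look like with the v1 SDK's TransferManager. The bucket name, key, file path, and the 16 MB multipart threshold are placeholder assumptions, not values from the question:

```java
import java.io.File;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class LargeFileUpload {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_2)
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")))
                .build();

        // TransferManager splits large objects into parts and uploads them
        // in parallel, so the whole file is never held in the heap at once.
        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .withMultipartUploadThreshold(16L * 1024 * 1024) // assumed threshold
                .build();

        Upload upload = tm.upload("my-bucket", "my-key", new File("/path/to/large-file"));
        upload.waitForCompletion(); // blocks until the transfer finishes
        tm.shutdownNow();           // also shuts down the underlying S3 client
    }
}
```

Uploading a File (rather than an InputStream) also lets the SDK know the content length up front and retry failed parts.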

vavasthi