
I am saving files to my S3 bucket, but I noticed that to do this I use a FileOutputStream, like so:

private UploadedFile file; // This is from PrimeFaces, the file that the client wishes to upload
File uploadedFile = new File(file.getFileName()); // Leaving the file like this creates the file in my IDE folder AFTER executing the next two lines; that's why I thought the next lines were an error.

FileOutputStream fileOutput = new FileOutputStream(uploadedFile);
fileOutput.write(file.getContents());

So these lines of code are responsible for writing the file to my device. I first thought this was an error, or that it wasn't necessary, because I don't know much about file uploading to Amazon. So I removed these two lines, since I noticed my upload method just needed the file and the filename, like so:

businessDelegatorView.uploadPublicRead("mybucketname", fileName, fileToUpload);

So I thought this wasn't necessary and was only duplicating the files:

FileOutputStream fileOutput = new FileOutputStream(uploadedFile);
fileOutput.write(file.getContents());

But I noticed the upload doesn't work if I remove them, because it throws a FileNotFoundException. So I started searching and found this post from BalusC, and I get it: I have to define a path where the files from my clients will be saved, for later upload to, in this case, the Amazon S3 bucket. But I was wondering if, for example, doing it like this will work when the .WAR is generated:

File uploadedFile = new File("C:/xampp/apache/conf", file.getFileName());

FileOutputStream fileOutput = new FileOutputStream(uploadedFile);
fileOutput.write(file.getContents());

I am saving the files there as a test, but I don't know, or am not sure, whether FileOutputStream is the right choice; I don't know another way.
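
In case it helps, the only alternative I can imagine is something like this (just a sketch on my side, writing to a temporary file instead of a fixed path):

// Just an idea: write the upload to a temporary file instead of a hard-coded folder.
// file is the same PrimeFaces UploadedFile as above.
File uploadedFile = File.createTempFile("upload-", "-" + file.getFileName());
try (FileOutputStream fileOutput = new FileOutputStream(uploadedFile)) {
    fileOutput.write(file.getContents());
}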

Also, this is what the method for uploading looks like after the above code has executed; without the FileOutputStream it won't work, because the file is not on my device:

AmazonS3 amazonS3 = buildAmazonS3();
try {
    amazonS3.putObject(new PutObjectRequest(bucketName, key, file)
            .withCannedAcl(CannedAccessControlList.PublicRead));

Just want somebody to clear things up a little bit more for me, like what is the best path to put here?

File uploadedFile = new File("C:/xampp/apache/conf", file.getFileName());

Or does it really not matter, and I just have to keep in mind which machine the .WAR will be deployed on? Thanks.

    Why write the content of your UploadedFile to the disk just to read it back from the disk and send it to S3? Just send the content to S3 directly. See https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#putObject-java.lang.String-java.lang.String-java.io.InputStream-com.amazonaws.services.s3.model.ObjectMetadata- which takes an InputStream as argument, and https://www.primefaces.org/docs/api/6.1/org/primefaces/model/UploadedFile.html#getInputstream--, which allows getting an InputStream from your UploadedFile. – JB Nizet Jul 21 '19 at 07:02

1 Answer


Just want somebody to clear things up a little bit more for me, like what is the best path to put here?

When you upload a file into a system, keep it as a stream of bytes for as long as possible: you receive bytes as input and you want to store those same bytes at the end. Converting bytes -> file -> bytes is time consuming, resource consuming and error prone (encoding conversions and intermediate files on the filesystem are both potential sources of error).

So I thought this wasn't necessary and was only duplicating the files:

FileOutputStream fileOutput = new FileOutputStream(uploadedFile);
fileOutput.write(file.getContents());

You are right: the file was already uploaded by the client's HTTP request, so writing it to disk again only duplicates it.
But at that point you don't have a File, you have an UploadedFile (PrimeFaces).

The PutObjectRequest() constructor from the S3 API has several overloads.
Currently you use this one:

public PutObjectRequest(String bucketName,
                        String key,
                        File file)

The last parameter is a File. Do you see the mismatch? In the first code that bothers you, you solved the problem (passing a File while your source is an UploadedFile) by writing the content of the UploadedFile into a new File, and that is acceptable if you really need a File.
But in fact you don't need a File, because the PutObjectRequest() constructor has another overload that matches your use case better:

public PutObjectRequest(String bucketName,
                        String key,
                        InputStream input,
                        ObjectMetadata metadata)

Constructs a new PutObjectRequest object to upload a stream of data to the specified bucket and key. After constructing the request, users may optionally specify object metadata or a canned ACL as well.

Note that, to avoid hurting performance, providing the content length matters:

Content length for the data stream must be specified in the object metadata parameter; Amazon S3 requires it be passed in before the data is uploaded. Failure to specify a content length will cause the entire contents of the input stream to be buffered locally in memory so that the content length can be calculated, which can result in negative performance problems.

So you could just do this:

UploadedFile file = ...; // uploaded by client
ObjectMetadata metaData = new ObjectMetadata();
metaData.setContentLength(file.getSize());
amazonS3.putObject(new PutObjectRequest(bucketName, key, file.getInputStream(), metaData)
        .withCannedAcl(CannedAccessControlList.PublicRead));
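
For completeness, here is a minimal sketch of what the whole upload method could look like with that overload (the method name, the IOException handling and the PrimeFaces version assumption are mine, not part of the S3 API):

// Sketch: stream the PrimeFaces upload straight to S3, without any intermediate file.
// Assumes PrimeFaces 7+ naming (getInputStream()); older versions expose getInputstream().
public void uploadPublicRead(String bucketName, String key, UploadedFile file) {
    AmazonS3 amazonS3 = buildAmazonS3();
    ObjectMetadata metaData = new ObjectMetadata();
    // Providing the length up front avoids buffering the whole stream in memory.
    metaData.setContentLength(file.getSize());
    try (InputStream input = file.getInputStream()) {
        amazonS3.putObject(new PutObjectRequest(bucketName, key, input, metaData)
                .withCannedAcl(CannedAccessControlList.PublicRead));
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}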