Is there a way to run ImageMagick or some other tool on the S3 servers to resize my images? The only way I know is to first download all the image files to my machine, convert them, and then re-upload them to S3. The problem is that there are more than 10,000 files, and I don't want to download them all to my local machine. Is there a way to convert them on the S3 side itself?
4 Answers
Take a look at https://github.com/Turistforeningen/node-s3-uploader.
It is a library that provides several features for uploading to S3, including resizing images the way you want.

- I have the solution for the new files which are going to be uploaded. I want to convert the files already stored on the server. – sanjeev Nov 19 '15 at 05:53
- You can mount it as a local drive on your PC and edit the files with any bulk image editor. I think that's the easiest way to do it. – Andrey Saleba Nov 19 '15 at 05:58
- Hi sanjeev, what solution do you have? I'm stuck on this issue of resizing and changing the resolution of newly uploaded images to S3: https://stackoverflow.com/questions/44416309/uploading-image-to-s3-using-multer-imager – anna poorani Jun 07 '17 at 16:31
Another option is NOT to change the stored resolution at all, but to use a service that can convert the images on the fly when they are accessed.

Also check out the following article on Amazon's compute blog; I found myself here because I had the same question. I think I'm going to implement this in Lambda so I can just specify the size and see if that helps. My problem is that I have image files on S3 that are 2 MB each. I don't want them at full resolution, because I have an app that retrieves them, and it can take a while for a phone to pull down a 2 MB image. But I don't mind storing them at full resolution if I can get a different size just by specifying it in the URL. Easy!
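A very rough sketch of that idea in Python, assuming a Lambda proxy integration behind API Gateway; the bucket name, the `width` query parameter, and the route layout are all made up for this sketch, and the blog post's actual implementation differs:

```python
# Hypothetical on-the-fly resizer, e.g. GET /images/{key}?width=600
# Assumes a Lambda proxy integration behind API Gateway and a bucket
# called "my-photo-bucket" (both names are placeholders).
import base64
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "my-photo-bucket"  # placeholder name

def handler(event, context):
    # Key comes from the request path, desired width from the query string
    # (both parameter names are invented for this sketch).
    key = event["pathParameters"]["key"]
    params = event.get("queryStringParameters") or {}
    width = int(params.get("width", 300))

    # Pull the full-resolution original from S3.
    original = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()

    # Resize in memory, preserving aspect ratio, and re-encode as JPEG.
    img = Image.open(io.BytesIO(original))
    img.thumbnail((width, width))
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=85)

    # API Gateway proxy integrations return binary bodies base64-encoded.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "image/jpeg"},
        "isBase64Encoded": True,
        "body": base64.b64encode(buf.getvalue()).decode("ascii"),
    }
```

With something like this in place, the app could request `/images/photo.jpg?width=600` and only ever download the smaller rendition, while the 2 MB originals stay untouched in S3.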

- Just trying to help; all the content is in that post, so why would I regurgitate it all here? I hardly ever post on Stack Overflow, but I wanted to help in this case. Sorry. – Todd Staples Aug 01 '17 at 02:59
S3 on its own does not let you run arbitrary compute (such as resizing) on the data it stores.
I would suggest looking into AWS Lambda (available in the AWS console), which lets you set up a small program (which they call a Lambda function) that runs when certain events occur in an S3 bucket. You don't need to set up a VM; you only need to supply a few files with a particular entry point. The function can be written in a few languages, namely Node.js, Python, and Java, and you can do all of this from the console's web GUI.
Usually these functions are set up to process new files as they are uploaded. To trigger the function for files that are already in place on S3, you have to "force" S3 to emit one of the events you can hook into for the files you already have; the list of supported events is here. Forcing an S3 copy may be sufficient (copy A to B, then delete B); an S3 rename (rename A to A.tmp, then rename A.tmp back to A) or the creation of new S3 objects would also work. You essentially just poke your existing files in a way that causes your Lambda to fire; one way to do that in bulk is sketched below. You may also invoke your Lambda manually.
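As a concrete illustration of the "poke" approach, here is a minimal boto3 sketch (the bucket name is a placeholder) that copies every existing object onto itself. S3 only allows a copy onto the same key if something about the object changes, so it replaces the metadata, which is enough to emit an ObjectCreated:Copy event for any Lambda subscribed to the bucket:

```python
# One-off script to re-trigger ObjectCreated events for existing objects
# by copying each object onto itself. Bucket name is a placeholder.
import boto3

BUCKET = "my-photo-bucket"  # placeholder name

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # S3 refuses a copy onto the same key unless something changes,
        # so replace the metadata; the copy emits an ObjectCreated:Copy
        # event, which fires any Lambda subscribed to the bucket.
        s3.copy_object(
            Bucket=BUCKET,
            Key=key,
            CopySource={"Bucket": BUCKET, "Key": key},
            MetadataDirective="REPLACE",
            Metadata={"retriggered": "true"},
        )
        print("poked", key)
```

If your Lambda writes its output back into the same bucket, filter the trigger (for example by key prefix) so the resized copies don't fire it again; depending on your objects, you may also want to re-supply the original Content-Type when replacing metadata.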
This example shows how to automatically generate a thumbnail from an image on S3; you could adapt it to your resizing needs and reuse it to create your Lambda:
Also, here is the walkthrough on how to configure your lambda with certain S3 events:
http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser.html
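For orientation, the core of such a function might look roughly like the Python/Pillow sketch below; the `thumbnails/` output prefix and the 300x300 size are placeholders, and the official AWS example differs in its details:

```python
# Rough sketch of an S3-triggered Lambda that writes a resized copy of
# each uploaded image under a "thumbnails/" prefix (prefix and size are
# placeholders; the official AWS example differs in the details).
import io
import urllib.parse

import boto3
from PIL import Image

s3 = boto3.client("s3")
MAX_SIZE = (300, 300)  # placeholder thumbnail size

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 event notifications are URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Skip our own output so the resized copies don't re-trigger us.
        if key.startswith("thumbnails/"):
            continue

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        img = Image.open(io.BytesIO(body))
        img.thumbnail(MAX_SIZE)

        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=85)
        buf.seek(0)

        s3.put_object(
            Bucket=bucket,
            Key="thumbnails/" + key,
            Body=buf,
            ContentType="image/jpeg",
        )
```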

- He wants to convert the files already stored on the server, NOT the new files which are going to be uploaded. – Ryan Nghiem Sep 04 '17 at 04:15
- S3 blobs can be overwritten, but they cannot be edited in place. Some form of download out of S3 is needed to convert, followed by an upload back into S3. I suggest triggering a Lambda for the files that are already there (which avoids downloading out of the cloud), e.g. triggered via a HEAD request on each object already in the bucket. – init_js Sep 04 '17 at 10:11