
I'm on an EC2 instance and I want to connect my PHP website to my Amazon S3 bucket. I've already looked at the PHP SDK here: http://aws.amazon.com/sdkforphp/ but it's not clear.

This is the code line I need to edit in my controller:

$thisFu['original_img']='/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I need to connect to Amazon S3 and be able to change the code like this:

$thisFu['original_img']='my_s3_bucket/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';

I already configured an IAM user for the purpose, but I don't know all the steps needed to accomplish the job.

How could I connect and interact with Amazon S3 to upload and retrieve public images?

UPDATE

I decided to try using s3fs as suggested, so I installed it as described here (my OS is Ubuntu 14.04).

I ran from the console:

sudo apt-get install build-essential git libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support automake libtool
sudo apt-get install pkg-config libssl-dev
git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
sudo make install

Everything was properly installed but what's next? Where should I declare credentials and how could I use this integration in my project?

2nd UPDATE

I created a file called .passwd-s3fs with a single line containing my IAM credentials: accessKeyId:secretAccessKey.

I placed it in my /home/ubuntu directory and gave it 600 permissions with chmod 600 ~/.passwd-s3fs

Next, from the console I ran /usr/bin/s3fs My_S3bucket /uploads/fufu

Inside /uploads/fufu all my bucket folders now appear. However, when I try this command:

s3fs -o nonempty allow_other My_S3bucket /uploads/fufu

I get this error message:

s3fs: unable to access MOUNTPOINT My_S3bucket : No such file or directory

3rd UPDATE

As suggested I ran fusermount -u /uploads/fufu; after that I checked the fufu folder and it was empty, as expected. Then I tried this command again (with one more -o):

s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

and got this error message:

fusermount: failed to open /etc/fuse.conf: Permission denied
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf

Any other suggestion?

4th UPDATE 18/04/15

As suggested, from the console I ran sudo usermod -a -G fuse ubuntu and sudo vim /etc/fuse.conf, where I uncommented mount_max = 1000 and user_allow_other

Then I ran s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

At first sight there were no errors, so I thought everything was fine, but it's exactly the opposite.

I'm a bit frustrated now, because I don't know what happened, but my folder /uploads/fufu is inaccessible, and using ls -Al I see only this

d????????? ? ?        ?              ?            ? fufu

I cannot sudo rm -r or rm -rf or mv it; it says that /uploads/fufu is a directory.

I tried rebooting, exiting, and mount -a, but nothing changed.

I tried to unmount using fusermount and the error message is fusermount: entry for /uploads/fufu not found in /etc/mtab

But when I opened /etc/mtab with sudo vim, I found this line: s3fs /uploads/fufu fuse.s3fs rw,nosuid,nodev,allow_other 0 0

Could someone tell me how can I unmount and finally remove this folder /uploads/fufu ?
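(For anyone landing here in the same state, the check below is a sketch of how to see whether the path is still registered as a mount before trying to remove it; the fusermount/lazy-umount fallback lines are my assumption and may need sudo.)

```shell
#!/bin/sh
# Sketch: check /proc/mounts before trying to clear a stale FUSE directory.
# The unmount fallbacks are assumptions and may require sudo on your box.
is_mounted() {
    # $2 in /proc/mounts is the mountpoint column
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

MP=/uploads/fufu
if is_mounted "$MP"; then
    fusermount -u "$MP" || umount -l "$MP"   # lazy unmount as a last resort
else
    echo "$MP is not mounted"
fi
```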


4 Answers


To give you a little more clarity, since you are a beginner: download the AWS SDK via this Installation Guide

Then set up your AWS account client on your PHP webserver using this snippet:

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'profile' => '<profile in your aws credentials file>'
));

If you would like more information on how to use AWS credentials files, head here.

Then to upload a file that you have on your own PHP server:

$result = $client->putObject(array(
    'Bucket'     => $bucket,
    'Key'        => 'data_from_file.txt',
    'SourceFile' => $pathToFile,
    'Metadata'   => array(
        'Foo' => 'abc',
        'Baz' => '123'
    )
));

If you are interested in learning how to upload images through a PHP form, I would recommend looking at this W3Schools tutorial. It can help you get off the ground by saving the file to a temporary directory on your own server before it gets uploaded to your S3 bucket.

  • Finally someone! Thanks for the reply, however I cannot install anything like that, since I'm on Amazon EC2... is it that difficult to do manually? I thought it was enough to take all the folders and place them in the root :) – NineCattoRules Apr 15 '15 at 21:56
  • In the first link for installing the SDK, go to the Install from zip instructions and instead of the line `use Aws\S3\S3Client;` use `require '/path/to/aws-autoloader.php';` – Erik Apr 15 '15 at 21:59
  • That is clear, but where am I supposed to place it in my project? I have my own framework; do I place it inside the root folder? Or the admin folder perhaps? What's the difference, if any – NineCattoRules Apr 15 '15 at 22:24
  • Place it wherever you would like inside your directory with the PHP code you use! – Erik Apr 16 '15 at 15:33
  • :D ok that's great...I asked only for security reasons – NineCattoRules Apr 16 '15 at 15:34
  • sorry, why not inject credentials directly like shown here: http://docs.aws.amazon.com/aws-sdk-php/v2/guide/credentials.html#passing-credentials-into-a-client-factory-method – NineCattoRules Apr 16 '15 at 18:06
  • I don't know what you mean – Erik Apr 16 '15 at 18:24
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/75437/discussion-between-erik-and-simone). – Erik Apr 16 '15 at 20:44

I agree the documentation at that link is a bit hard to dig through and leaves a lot of dots to be connected.

However, I found something a lot better here: http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-s3.html

It has sample code and instructions for almost all the S3 operations.


A much easier setup, and one that is transparent to your application, is simply to mount the S3 bucket with s3fs:

https://github.com/s3fs-fuse/s3fs-fuse

(Use the allow_other option.) The s3fs mount then behaves like a normal folder: just move the file into it and s3fs uploads it to S3.

S3fs is very reliable in recent builds.

You can read images this way too, but you lose any benefit of the AWS CDN, though the last time I tried it the difference wasn't huge.

You need an s3fs password file in the format accessKeyId:secretAccessKey.

It can live in any of these places:

- passed via the passwd_file command line option
- the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables
- a .passwd-s3fs file in your home directory
- the system-wide /etc/passwd-s3fs file

The file needs 600 permissions.
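For example, the home-directory variant can be set up like this (the two keys are AWS's documented example values, not real credentials; substitute your own):

```shell
# Create ~/.passwd-s3fs with placeholder IAM keys (AWS's documented
# example values; replace them with your own access key and secret key).
PASSWD_FILE="$HOME/.passwd-s3fs"
printf '%s:%s\n' 'AKIAIOSFODNN7EXAMPLE' 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > "$PASSWD_FILE"
chmod 600 "$PASSWD_FILE"         # s3fs refuses the file with looser permissions
stat -c '%a' "$PASSWD_FILE"      # prints 600
```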

https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon has some info.

When that is complete, the command is s3fs bucket_name mnt_dir

You can find the keys here:

https://console.aws.amazon.com/iam/home?#security_credential

From the example above I would assume your mnt_dir is /uploads/fufu, so:

s3fs bucket /uploads/fufu

As for your second problem,

s3fs -o nonempty allow_other My_S3bucket /uploads/fufu

is wrong; you need to specify -o again:

s3fs -o nonempty -o allow_other My_S3bucket /uploads/fufu

The user you are mounting as needs to be in the fuse group:

sudo usermod -a -G fuse your_user

or sudo addgroup your_user fuse
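A quick way to check whether the group change has taken effect, sketched below; note that usermod group changes only apply to new login sessions, which is a common gotcha:

```shell
# Check whether the current user is already in the "fuse" group.
# usermod changes show up only after logging out and back in.
if id -nG | tr ' ' '\n' | grep -qx fuse; then
    echo "in fuse group"
else
    echo "not in fuse group yet: log out and back in after usermod"
fi
```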

  • Thanks, so basically this way I cannot use CloudFront as a CDN? I activated CloudFront and set my bucket as the origin – NineCattoRules Apr 17 '15 at 09:52
  • If the only change you make is using s3fs then yes. You can write the image to S3 with s3fs and then use the CloudFront links with the API, but that would need code changes – exussum Apr 17 '15 at 10:10
  • OK, I'm at last step, so do I need to add a fixed mount point in `/etc/fstab`? I need to make the bucket public, so is it correct this? `s3fs#mybucket /uploads/fufu fuse allow_other 0 0` – NineCattoRules Apr 17 '15 at 12:00
  • I tried from console the command I posted here above, I got this error: `s3fs#elasticbeanstalk-eu-west-1-(my number): command not found` – NineCattoRules Apr 17 '15 at 12:20
  • no hash, just `s3fs -o allow_other mybucket /uploads/fufu`; make sure it works from the command line before adding it to fstab – exussum Apr 17 '15 at 12:21
  • ok, I got this message: `s3fs: MOUNTPOINT directory /uploads/fufu is not empty. s3fs: if you are sure this is safe, can use the 'nonempty' mount option.` What can I do? – NineCattoRules Apr 17 '15 at 12:25
  • I found this [solution](http://stackoverflow.com/questions/20271101/what-happens-if-you-mount-to-a-non-empty-mount-point-with-fuse#answer-20271228) – NineCattoRules Apr 17 '15 at 12:46
  • I tried with `-o nonempty` and I got another error: `s3fs: unable to access MOUNTPOINT elasticbeanstalk-eu-west-1-(my number): No such file or directory` – NineCattoRules Apr 17 '15 at 13:07
  • are there files in the dir already? try `mv /uploads/fufu /uploads/fufu.old; mkdir /uploads/fufu;` then try the s3fs command again – exussum Apr 17 '15 at 13:41
  • I forgot to check inside the folder...now there are all my bucket folders into /uploads/fufu, but when I run the s3fs command I get that error message again – NineCattoRules Apr 17 '15 at 14:37
  • the nonempty option probably didn't help. Try unmounting the s3fs (fusermount -u /uploads/fufu), confirm it's no longer mounted (it doesn't appear in the output of `df`), and make sure nothing exists in the folder (ls -la). If there is nothing there, run the s3fs command again; everything should work after that. I've edited my answer. With an empty dir you shouldn't need the nonempty option though – exussum Apr 17 '15 at 15:05
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/75531/discussion-between-exussum-and-simone). – exussum Apr 17 '15 at 15:59
  • There seems to be an argument missing (the bucket to mount). Remove all entries that auto-mount it, and mount from the command line until you're happy everything is working. – exussum Apr 18 '15 at 23:25
  • Sorry but what did you mean with "all entry's"? Are you talking about delete the line inside `mtab`? I tried but nothing has changed – NineCattoRules Apr 19 '15 at 09:56

Despite "S3fs is very reliable in recent builds", I can share my own experience with s3fs: we moved write operations from direct s3fs-mounted folder access to the AWS CLI (the SDK API is also an option) after periodic, random system crashes.

It's possible that you won't have any problems with small files like images, but it consistently caused problems when we tried to write mp4 files. The last log message before a system crash was:

kernel: [ 9180.212990] s3fs[29994]: segfault at 0 ip 000000000042b503 sp 00007f09b4abf530 error 4 in s3fs[400000+52000]

These were rare, random cases, but they made the system unstable.

So we decided to keep s3fs mounted, but to use it only for read access.

Below I show how to mount s3fs with IAM credentials (an instance role) instead of a password file:

#!/bin/bash -x
# Assumes $S3_BUCKET is set in the environment before running
S3_MOUNT_DIR=/media/s3
CACHE_DIR=/var/cache/s3cache

wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz
tar xvfz s3fs-1.74.tar.gz
cd s3fs-1.74
./configure
make
make install

mkdir $S3_MOUNT_DIR
mkdir $CACHE_DIR

chmod 0755 $S3_MOUNT_DIR
chmod 0755 $CACHE_DIR

export IAMROLE=`curl http://169.254.169.254/latest/meta-data/iam/security-credentials/`

/usr/local/bin/s3fs $S3_BUCKET $S3_MOUNT_DIR  -o iam_role=$IAMROLE,rw,allow_other,use_cache=$CACHE_DIR,uid=222,gid=500

You will also need to create an IAM role assigned to the instance, with this policy attached:

{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "S3", "Effect": "Allow", "Action": ["s3:*"], "Resource": "*"}
  ]
}
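If you would rather not grant s3:* on every resource, a narrower policy scoped to a single bucket might look like this (the bucket name here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3BucketOnly",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::my_s3_bucket",
        "arn:aws:s3:::my_s3_bucket/*"
      ]
    }
  ]
}
```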

In your case it seems reasonable to use the PHP SDK (another answer already has a usage example), but you can also write images to S3 with the AWS CLI:

aws s3 cp /path_to_image/image.jpg s3://your_bucket/path

If you have an IAM role created and assigned to your instance, you won't need to provide any additional credentials.

Update - answer to your question:

  • I don't need to include the factory method for declare my IAM credentials?

Yes: if you have an IAM role assigned to the EC2 instance, then in code you just need to create the client like this:

$s3Client = S3Client::factory();
$bucket = 'my_s3_bucket';
$keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
$localFilePath = '/local_path/some_image.jpg';

$result = $s3Client->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => $keyname,
    'SourceFile'  => $localFilePath,
    'ACL'         => 'public-read',
    'ContentType' => 'image/jpeg'
));
unlink($localFilePath);

Option 2: if you do not need a local storage stage and will put the file directly from the upload form:

$s3Client = S3Client::factory();
$bucket = 'my_s3_bucket';
$keyname = $_POST['cat'].'/original_'.uniqid('fu_').'.jpg';
$dataFromFile = file_get_contents($_FILES['uploadedfile']['tmp_name']);

$result = $s3Client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => $keyname,
    'Body'   => $dataFromFile,
    'ACL'    => 'public-read',
));

And to get the S3 link if you have public access:

$publicUrl = $s3Client->getObjectUrl($bucket, $keyname);

Or generate a signed URL for private content:

$validTime = '+10 minutes';
$signedUrl = $s3Client->getObjectUrl($bucket, $keyname, $validTime);
  • thanks for the suggestion... I'd like to use the PHP SDK but I'm not sure how; are you saying that I don't need to include the factory method to declare my IAM credentials? Is it possible to make a variable for the bucket URL and use it like: `my_s3_bucket='http://link/to/my/s3` and then `$thisFu['original_img']='my_s3_bucket/uploads/fufu/'.$_POST['cat'].'/original_'.uniqid('fu_').'.jpg';` ? – NineCattoRules Apr 18 '15 at 09:08
  • Sorry for delay but I don't receive notification about new update. Wow, thank you for your great and detailed explanation, that seems so easy...instead days ago I tried with s3fs and now my upload folder is hidden and not accessible in any way (you can read my 4th update above). I wish to try this method but until I have the problem with my upload folder I cannot even try. – NineCattoRules Apr 19 '15 at 10:07
  • I think if I just edit the post then the notification doesn't come... I was glad the info was useful for you :) About your problem in update 4: I'm a little bit lost, what exactly is your problem? Disabling s3fs? Why can't you just go to the source folder and run "sudo make uninstall"? Btw, I see you mentioned a permissions issue; did you run the commands as root or with sudo? – Evgeniy Kuzmin Apr 19 '15 at 10:28
  • I ran the commands with sudo... I entered the s3fs-fuse folder and ran `sudo make uninstall`, but nothing seems to have changed; the folder is there but hidden and I cannot remove it. Indeed when I tried to remove it I got `sudo rm -R rm: cannot remove ‘fufu’: Is a directory`, and when I tried to change ownership of the folder I got `Transport endpoint is not connected` – NineCattoRules Apr 19 '15 at 11:35
  • I tried `sudo service apache2 restart`...do you think should I try to reboot my instance? – NineCattoRules Apr 19 '15 at 12:08
  • apache knows nothing about s3fs, reboot your instance – Evgeniy Kuzmin Apr 19 '15 at 12:18
  • You are simply amazing! Sad that you came here too late for my bounty – NineCattoRules Apr 19 '15 at 14:06