9

I am trying to create an FTP server (on Windows, Linux, or Mac - the OS is not a concern) that uses Amazon S3 as its storage. Note that S3 does not support FTP natively, so this will need some kind of workaround.

I researched the topic and found various solutions, but I am not really convinced by any of them. They are:

  1. Amazon EC2 + TntDrive
  2. Using SME
  3. Creating an EC2 instance and installing FTP server and mounting S3 as local filesystem.

I am trying to find the best solution in terms of security and flexibility/smoothness. Which solution do you think is best, and how do I achieve it?

Edit 1 :

I am very interested in the following solution. Here is what I gather: you can attach an EBS volume to an EC2 instance and run an FTP server on that instance. Point the FTP server at the attached EBS volume, then just FTP up your files - they will be written directly to the EBS volume. You would want to use an FTP server and client that support resuming interrupted transfers - for example, FileZilla. Am I correct in assuming all of the above?

Also, can anyone give a step-by-step procedure on how to achieve this?
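Here is roughly what I think the commands on the EC2 instance would look like - please correct me if this is wrong. The device name /dev/xvdf, the mount point /srv/ftp, and the choice of vsftpd are just my assumptions:

    # Format and mount the attached EBS volume (one-time setup)
    sudo mkfs -t ext4 /dev/xvdf
    sudo mkdir -p /srv/ftp
    sudo mount /dev/xvdf /srv/ftp
    echo '/dev/xvdf /srv/ftp ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab

    # Install an FTP server and give it a user whose home is on the EBS volume
    sudo apt-get install -y vsftpd
    sudo useradd -d /srv/ftp/ftpuser -m ftpuser
    sudo passwd ftpuser
    sudo service vsftpd restart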

Scooby
  • 3,371
  • 8
  • 44
  • 84

3 Answers

10

The answer really depends.

First, let me say FTP is a terrible and insecure protocol. Make sure you have a good reason before going down this route. There are plenty of user-friendly S3 tools.

Second, please note that none of these solutions will scale the way S3 does. Each solution has arbitrary limits on how many files it can support, how large the files can be, and what happens if a file is updated frequently (i.e. it may save the wrong version). S3 filesystems look neat at first, but when they have problems they are hard to troubleshoot (they can only return generic filesystem error messages) and even harder to fix.

Some ideas:

  • If you really just want cloud backup, consider using EBS instead of S3. Either attach an EBS drive to an EC2 box, or run Storage Gateway on your local box.

  • Depending on the read/write patterns, the delays, the size of the files, etc., you might use something like s3sync instead. Have it download all your files, then do a bi-directional re-sync to S3 periodically to pick up any new files or delete any files that have been deleted in S3.

  • If you only need to support uploads, just have a cron job that uploads new files to S3 periodically, then deletes them.
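For the upload-only case, a minimal sketch of such a cron job, assuming the AWS CLI is installed and the FTP server drops files into /srv/ftp/incoming (the bucket name and paths are made up):

    # /etc/cron.d/s3-upload -- every 5 minutes, push newly FTP'd files to S3.
    # 'aws s3 mv' uploads each file and deletes the local copy after a successful upload.
    */5 * * * * root aws s3 mv /srv/ftp/incoming s3://my-ftp-bucket/incoming --recursive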

BraveNewCurrency
  • 12,654
  • 2
  • 42
  • 50
  • Hey! Thanks for your solutions. I am very interested in your first solution. here is what I gather : You can attach the EBS volume to an EC2 instance and run an FTP server on that instance. Point the FTP server to the attached EBS volume, then just FTP up your file - it will be written directly to the EBS volume. You would want to use an FTP server and client that can support resuming interrupted transfers - for example, FileZilla. Am I correct when I assume all of the above ? – Scooby Jun 17 '13 at 20:50
  • Yes, with an EBS drive, it's very much like a "regular" computer. To get started, use an EBS root volume. Advanced users will use a config tool (Chef, Puppet, etc.) to configure FTP on an ephemeral-root instance, and only store the FTP data on an EBS drive. That way you don't need to back up the OS. – BraveNewCurrency Jun 21 '13 at 01:15
  • If you have SSH access to the server, try SFTP with FileZilla; you don't even have to set up anything else, and it's actually more secure than FTP. – boh Dec 27 '13 at 23:34
  • Oh, and I am not sure whether the data on the EBS volume is gone or not if you happen to stop the EC2 instance (to upgrade it, for example). – boh Dec 27 '13 at 23:37
  • Stopping and starting doesn't delete the data, but it will change the IP address. You can use an EIP to keep the IP address constant, or just register the box in DNS (i.e. Route53) to keep the name resolving to your data. – BraveNewCurrency Dec 30 '13 at 01:15
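For example, keeping the address constant with an Elastic IP could look like this with the AWS CLI (the instance ID and allocation ID here are placeholders):

    # Allocate an Elastic IP and attach it to the FTP instance so that
    # stop/start cycles don't change the address clients connect to.
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc123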
5

What you could try: using s3fs, mount your S3 bucket to a directory within your Amazon EC2 instance, with something like: sudo s3fs -o allow_other,uid=12345,gid=12345 my-bucket my-ftp-directory/

Then set up vsftpd or any other FTP program, create a user, and assign their home directory to be my-ftp-directory. Chroot this user to that directory, then try to FTP in using the user's credentials and the IP of the EC2 instance. I haven't tried the FTP part yet, but after mounting a bucket with this technique to my public files directory in Drupal, it has worked fine!
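A rough outline of the full setup, assuming vsftpd and a user called ftpuser (the names and options are examples, not something I have tested end to end):

    # Create the FTP user first so the mount can be owned by it
    sudo useradd -m ftpuser
    sudo passwd ftpuser
    sudo mkdir -p /home/ftpuser/my-ftp-directory

    # Mount the bucket into the user's home
    # (s3fs credentials already configured, e.g. in /etc/passwd-s3fs)
    sudo s3fs my-bucket /home/ftpuser/my-ftp-directory \
        -o allow_other,uid=$(id -u ftpuser),gid=$(id -g ftpuser)

    # vsftpd: allow local logins and chroot users into their home directory
    echo 'local_enable=YES'      | sudo tee -a /etc/vsftpd.conf
    echo 'chroot_local_user=YES' | sudo tee -a /etc/vsftpd.conf
    sudo service vsftpd restart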

williamsowen
  • 477
  • 1
  • 7
  • 22
  • See related post with details on this approach: http://stackoverflow.com/questions/23939179/ftp-sftp-access-to-an-amazon-s3-server – jwadsack Feb 26 '15 at 21:17
2

You can also use: FTP 2 Cloud

While FTP 2 Cloud is in beta:

  • it's free.
  • there are no copy limits.
  • each account has 100MB of storage space.
  • it supports FTP to Amazon S3 copy.
  • it supports FTP to Rackspace copy.
  • you use it at your own risk.
  • it needs your love to get the word out.
warmth
  • 467
  • 6
  • 10