0

Goal: I would like to keep sensitive data in S3 buckets and process it on EC2 instances located in a private cloud. I researched that it is possible to set up an S3 bucket policy by IP address and IAM user ARN, so I consider the data in the S3 bucket to be 'on the safe side'. But I am worried about the following scenario:
1) there is a VPC
2) inside there is an EC2 instance
3) there is a user under a controlled (allowed) account with permissions to connect to and work with the EC2 instance and buckets. The buckets are defined and configured to work only with known (authorized) EC2 instances.
Security leak: the user uploads a malware application to the EC2 instance and, while processing the data, executes it; the malware then transfers the data to other (unauthorized) buckets under a different AWS account.
Disabling uploads to the EC2 instance is not an option in my case.
Question: is it possible to set restrictions on the VPC firewall such that access to some specific S3 buckets is allowed but access to any other buckets is denied? Assume that a user might upload a malware application to the EC2 instance and use it to upload data to other buckets (under a third-party AWS account).
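For reference, the kind of bucket policy I researched might look roughly like the sketch below (the bucket name "mybucket" and the address range are placeholders; restricting by IAM user ARN would work similarly, via the Principal element):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyAccessExceptFromKnownAddresses",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::mybucket",
            "arn:aws:s3:::mybucket/*"
          ],
          "Condition": {
            "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
          }
        }
      ]
    }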

Nazar
  • 79
  • 2
  • 10
  • So the question is "How can I prevent users from executing uploads from my EC2 instance to their own S3 buckets?" – Christopher Aug 20 '12 at 20:11
  • Yes. A user could upload data from the EC2 instance to my S3 buckets (Account1), but uploading data to any other buckets (Account2, AccountN, any others) should be denied. – Nazar Aug 21 '12 at 09:13
  • Are your instances in a private VPC subnet or a public one? I.e. does the subnet routing table contain a 0.0.0.0/0 route to an internet gateway or a NAT instance, or not? – Christopher Aug 21 '12 at 12:15
  • The EC2 instance is in a VPC. It is possible to add a restriction and deny access to certain hosts, but that doesn't solve my problem: I have to allow access to the Amazon S3 endpoint, s3.amazonaws.com. The requirement is: certain S3 buckets are allowed, all others are restricted (the others are located under different accounts and are unknown to me, since I have no ability to set policies on them). URL filtering also isn't an option: the Amazon API uses TLS (HTTPS), so all traffic is encrypted. Currently the only solution I see is to refuse S3 usage and store all the data on EBS. – Nazar Aug 22 '12 at 05:40
  • Anyway Christopher, thanks for trying to help :) – Nazar Aug 22 '12 at 05:41
  • You could use [IAM instance profiles](http://stackoverflow.com/a/11369442/877115) and the instance's temporary credentials to upload to s3 from inside a private subnet, thus avoiding malicious code that uses URL endpoints. That would cover nearly every malicious code snippet I can think of, but obviously not the ones I can't... – Christopher Aug 22 '12 at 08:44

2 Answers

3

There is not really a solution for what you are asking, but then again, you seem to be attempting to solve the wrong problem (if I understand your question correctly).

If you have a situation where untrustworthy users are in a position where they are able to "connect to and work with the EC2 instance and buckets" and upload and execute application code inside your VPC, then all bets are off and the game is already over. Shutting down your application is the only fix available to you. Trying to limit the damage by preventing the malicious code from uploading sensitive data to other buckets in S3 should be the absolute least of your worries. A malicious user has so many other options available beyond putting the data back into S3 in a different bucket.

It's also possible that I am interpreting "connect to and work with the EC2 instance and buckets" more broadly than you intended, and all you mean is that users are able to upload data to your application. Well, okay... but your concern still seems to be focused on the wrong point.

I have applications where users can upload data. They can upload all the malware they want, but there's no way any code -- malicious or benign -- that happens to be contained in the data they upload will ever get executed. My systems will never confuse uploaded data with something to be executed or handle it in a way that this is even remotely possible. If your code will, then you again have a problem that can only be fixed by fixing your code -- not by restricting which buckets your instance can access.

Actually, I lied when I said there wasn't a solution. There is a solution, but it's fairly preposterous:

Set up a reverse web proxy, either in EC2 or somewhere outside, but of course make its configuration inaccessible to the malicious users. Configure the proxy to allow access only to the desired bucket. With Apache, for example, if the bucket were called "mybucket," that might look something like this:

ProxyPass /mybucket http://s3.amazonaws.com/mybucket
ProxyPassReverse /mybucket http://s3.amazonaws.com/mybucket

Additional configuration on the proxy would deny access to it from anywhere other than your instance. Then, instead of allowing your instance to access the S3 endpoints directly, allow only outbound HTTP toward the proxy (via the security group for the compromised instance). Requests for buckets other than yours will not make it through the proxy, which is now the only way "out." Problem solved. At least, the specific problem you were hoping to solve should be solvable by some variation of this approach.
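A minimal sketch of that access restriction, assuming Apache 2.2 and an instance subnet of 10.0.0.0/24 (the subnet is a placeholder for wherever your instances actually live):

    # Not an open forward proxy; only the explicit ProxyPass mappings are served.
    ProxyRequests Off
    # Deny everything, then allow only the authorized instance subnet.
    <Location />
        Order deny,allow
        Deny from all
        Allow from 10.0.0.0/24
    </Location>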

Update to clarify:

To access the bucket called "mybucket" in the normal way, there are two methods:

http://s3.amazonaws.com/mybucket/object_key
http://mybucket.s3.amazonaws.com/object_key

With this configuration, you would block (not allow) all access to all S3 endpoints from your instances via your security group configuration, which would prevent accessing buckets with either method. You would, instead, allow access from your instances to the proxy.

If the proxy, for example, were at 172.31.31.31 then you would access buckets and their objects like this:

http://172.31.31.31/mybucket/object_key

The proxy, being configured to only permit certain patterns in the path to be forwarded -- and any others denied -- would be what controls whether a particular bucket is accessible or not.

Michael - sqlbot
  • 169,571
  • 25
  • 353
  • 427
  • Michael, thanks for the reply! **Problem clarification**: Yes, I am trying to solve a somewhat unusual problem: users can connect to and work on EC2 instances in the VPC but shouldn't be able to pull/push any files to or from the instances. Reason: sensitive data located on the instances should not be copied anywhere else. Thus we would like to permit access only to certain buckets, not the whole S3 service. The idea with the reverse web proxy sounds good, but in this case I have to prohibit the S3 endpoint and allow connections only to a certain set of buckets. For now we prohibit S3 service usage entirely. – Nazar Dec 28 '12 at 09:32
  • This solution should work for your application. I have added some notes to clarify the implementation details. If I have missed anything, let me know and I'll try to address it. – Michael - sqlbot Dec 28 '12 at 15:55
1

Use VPC Endpoints. This allows you to restrict which S3 buckets your EC2 instances in a VPC can access. It also allows you to create a private connection between your VPC and the S3 service, so you don't have to allow wide open outbound internet access. There are sample IAM policies showing how to control access to buckets.
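For illustration, a custom endpoint policy that limits the endpoint to a single bucket might look roughly like this (the bucket name "mybucket" is a placeholder; by default an endpoint allows full access):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowOnlyMyBucket",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::mybucket",
            "arn:aws:s3:::mybucket/*"
          ]
        }
      ]
    }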

An added bonus with VPC Endpoints for S3 is that certain major software repositories, such as Amazon's yum repos and Ubuntu's apt repos, are hosted in S3, so you can also let your EC2 instances get their patches without giving them wide-open internet access. That's a big win.
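You can also approach it from the bucket side: a bucket policy can deny any request that does not arrive through your endpoint. A rough sketch, where the endpoint ID "vpce-1a2b3c4d" and the bucket name are placeholders (note that a blanket deny like this also locks out console access unless you add exceptions):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyAccessExceptViaMyEndpoint",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::mybucket",
            "arn:aws:s3:::mybucket/*"
          ],
          "Condition": {
            "StringNotEquals": { "aws:sourceVpce": "vpce-1a2b3c4d" }
          }
        }
      ]
    }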

jarmod
  • 71,565
  • 16
  • 115
  • 122