
So on my local machine something like this works fine:

library(arrow)
library(aws.s3)

Sys.setenv(
    "AWS_ACCESS_KEY_ID" = Sys.getenv("awsaccesskey"),
    "AWS_SECRET_ACCESS_KEY" = Sys.getenv("awssecret"),
    "AWS_DEFAULT_REGION" = "eu-west-2"
)

feather_data <- s3read_using(read_feather, bucket = "amazingbucket", object = "somefile.feather")

If I wrap this in a Docker image, and I want to avoid hardcoding AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (which here come from Windows environment variables), how does the container running from ECR get this information?

cs0815

1 Answer


When you run this in AWS, there is the notion of an IAM role that you can attach to your execution environment. If you are running your container on, say, ECS, you attach an IAM task role to your task. If you are running your container on EKS, you use IAM roles for service accounts (IRSA).
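For the ECS case, the task role is declared in the task definition. A minimal sketch, where the family, role ARN, image URI, and container name are all hypothetical placeholders:

```json
{
  "family": "feather-reader",
  "taskRoleArn": "arn:aws:iam::123456789012:role/FeatherS3ReadRole",
  "containerDefinitions": [
    {
      "name": "r-app",
      "image": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/r-app:latest",
      "essential": true
    }
  ]
}
```

The role referenced by `taskRoleArn` would carry an IAM policy granting `s3:GetObject` on the bucket, so the container never sees long-lived keys.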

Long story short: AWS injects those values dynamically (and rotates the temporary credentials), and the AWS SDK is able to source that information automatically.
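In practice that means your R code simply drops the `Sys.setenv()` block. A sketch, assuming the task role grants read access to the bucket and that the credential chain used by `aws.s3` (via `aws.signature`) picks up the container credentials endpoint:

```r
library(arrow)
library(aws.s3)

# No Sys.setenv() of access keys here: inside ECS/EKS, temporary
# credentials for the attached IAM role are exposed to the container
# (e.g. via AWS_CONTAINER_CREDENTIALS_RELATIVE_URI) and resolved by
# the SDK's default credential chain. Region can still be set if needed:
Sys.setenv("AWS_DEFAULT_REGION" = "eu-west-2")

feather_data <- s3read_using(read_feather,
                             bucket = "amazingbucket",
                             object = "somefile.feather")
```

The same image then runs unchanged on your laptop (using your local env vars) and in AWS (using the injected role credentials).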

mreferre
  • is AWS secrets manager also an option? – cs0815 Dec 22 '21 at 10:54
    Perhaps you could consider using Secrets Manager to host those credentials but usually we would suggest using it for non-AWS secrets (secrets we don't control the lifecycle of). For AWS secrets the "role" pattern is definitely the most widely used. – mreferre Dec 22 '21 at 12:57
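For that non-AWS-secret case, one option from R is the `paws` SDK; a hedged sketch, where the secret name is a hypothetical placeholder and the task role is assumed to allow `secretsmanager:GetSecretValue` on it:

```r
library(paws)

# Create a Secrets Manager client; credentials and region come from
# the default chain (the task role, when running in AWS).
sm <- secretsmanager()

# Fetch a non-AWS secret, e.g. a third-party API key.
resp <- sm$get_secret_value(SecretId = "my-app/external-api-key")
api_key <- resp$SecretString
```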