
I created an EBS volume, then attached and mounted it on my Container Instance. In the task definition volumes I set the volume Source Path to the mounted directory. The container data is not being created in the mounted directory; all other directories outside the mounted EBS volume work properly.
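
Roughly, the relevant setup looks like this (a sketch; my-task, my-app and my-image are placeholder names, and the EBS volume is mounted at /mnt/data):

#!/bin/bash
# Register a minimal task definition whose volume sourcePath points at the EBS mount.
cat > task-def.json <<'EOF'
{
  "family": "my-task",
  "volumes": [
    { "name": "data", "host": { "sourcePath": "/mnt/data" } }
  ],
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "my-image",
      "memory": 256,
      "mountPoints": [
        { "sourceVolume": "data", "containerPath": "/data" }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://task-def.json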

The purpose is to keep the data outside the container so that it can be backed up with another volume.

Is there a way to use this attached volume with my container, or is there a better way to work with volumes and backups?

EDIT: I tested this with a random Docker image, running it with the volume specified, and faced the same problem. I managed to make it work by restarting the Docker service, but I'm still looking for a solution that does not require restarting Docker.
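
For example, something along these lines reproduces it outside of ECS (a sketch; busybox is just an arbitrary test image, and the EBS volume is mounted at /mnt/data):

# Write a file into the bind-mounted directory from a throwaway container.
docker run --rm -v /mnt/data:/data busybox sh -c 'echo hello > /data/test.txt'
ls /mnt/data    # test.txt does not show up here

# The workaround I would like to avoid:
sudo service docker restart
docker run --rm -v /mnt/data:/data busybox sh -c 'echo hello > /data/test.txt'
ls /mnt/data    # now shows test.txt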

Inspecting a container whose volume directory is on the mounted EBS volume:

"HostConfig": {
  "Binds": [
  "/mnt/data:/data"
],
...
"Mounts": [
  {
    "Source": "/mnt/data",
    "Destination": "/data",
    "Mode": "",
    "RW": true,
    "Propagation": "rprivate"
  }
],

the directory shows:

$ ls /mnt/data/
lost+found

Inspecting a container whose volume directory is not on the mounted EBS volume:

"HostConfig": {
  "Binds": [
    "/home/ec2-user/data:/data"
  ],
...
"Mounts": [
  {
    "Source": "/home/ec2-user/data",
    "Destination": "/data",
    "Mode": "",
    "RW": true,
    "Propagation": "rprivate"
  }
]

the directory shows:

$ ls /home/ec2-user/data
databases dbms
Conrado Fonseca
  • I don't know what you're trying to achieve, but I prefer to take an EBS snapshot. You can schedule it if you want, and if something happens you have all your VMs configured and ready just as before, saving a lot of work in the future; besides, you can transfer the snapshot to other regions. – Fernando Zamperin Jul 14 '16 at 19:32
  • @FernandoZamperin Yes, I want to schedule snapshots for this EBS volume, but first I need the container to use it properly. – Conrado Fonseca Jul 14 '16 at 20:27
  • Maybe this helps: http://stackoverflow.com/questions/28792272/attaching-and-mounting-existing-ebs-volume-to-ec2-instance-filesystem-issue – Fernando Zamperin Jul 15 '16 at 13:53
  • Could you share the task definition for the volume? – Shibashis Jul 15 '16 at 19:48
  • I figure that the problem is not with ECS itself. If I run any Docker container specifying a volume on the newly mounted EBS, it does not work as expected. I managed to make it work by restarting Docker after mounting the volume, but unfortunately that's not an elegant solution. – Conrado Fonseca Jul 18 '16 at 14:51
  • The reason you need to restart is that the mount happens after the Docker daemon starts; you need to mount before the Docker daemon. Using cloud-init and putting the mount in a boothook should fix that (a sketch follows these comments): http://cloudinit.readthedocs.io/en/latest/topics/format.html#mime-multi-part-archive – kriztean Feb 13 '17 at 08:43
  • Probably a better way to do this is to use docker plugins to attach EBS volumes automatically: https://aws.amazon.com/blogs/compute/amazon-ecs-and-docker-volume-drivers-amazon-ebs/ or https://faun.pub/use-ebs-in-aws-ecs-cluster-for-stateful-services-901b7c8b3cb4 – Sarang Jan 06 '22 at 12:24
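
A rough sketch of the boothook approach mentioned in the comments (the device name /dev/xvdb and the mount point /mnt/data are assumptions; a boothook runs early on every boot, before the Docker daemon is started):

#cloud-boothook
#!/bin/sh
# Mount the EBS volume before the Docker daemon starts, so containers see it.
mkdir -p /mnt/data
mountpoint -q /mnt/data || mount /dev/xvdb /mnt/data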

2 Answers


It sounds like what you want to do is make use of AWS EC2 Launch Configurations. Using Launch Configurations, you can specify that EBS volumes be created and attached to your instance at launch. This happens before the Docker daemon, the ECS agent, and any subsequent tasks are started.
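
For illustration, creating such a Launch Configuration from the CLI might look roughly like this (a sketch; the name, AMI ID, instance type, device name and volume size are placeholders):

aws autoscaling create-launch-configuration \
  --launch-configuration-name my-ecs-lc \
  --image-id ami-12345678 \
  --instance-type t2.medium \
  --block-device-mappings '[{"DeviceName":"/dev/xvdb","Ebs":{"VolumeSize":50,"VolumeType":"gp2","DeleteOnTermination":false}}]' \
  --user-data file://user-data.sh

Here user-data.sh would contain something like the script shown next.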

As part of your launch configuration, you'll also want to update the User data under Configure details with something along the lines of:

# /dev/xvdb is assumed to be the device name the EBS volume was attached as; adjust as needed.
mkdir /data;
mkfs -t ext4 /dev/xvdb;    # formats (erases) the volume
mount /dev/xvdb /data;
echo '/dev/xvdb /data ext4 defaults,nofail 0 2' >> /etc/fstab;    # remount on future boots

Then, as long as your container is set up to access /data on the host, everything will just work on the first go.

Bonus: If you're using ECS clusters, I presume you're already making use of Launch Configurations to get your instances joined to the cluster. If not, you can have new instances join the cluster automatically as well, using something like:

#!/bin/bash 
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=/var/log/ecs/:/log \
  --volume=/var/lib/ecs/data:/data \
  --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
  --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \
  --publish=127.0.0.1:51678:51678 \
  --env=ECS_LOGFILE=/log/ecs-agent.log \
  --env=ECS_AVAILABLE_LOGGING_DRIVERS='["json-file","syslog","gelf"]' \
  --env=ECS_LOGLEVEL=info \
  --env=ECS_DATADIR=/data \
  --env=ECS_CLUSTER=your-cluster-here \
  amazon/amazon-ecs-agent:latest

Specifically, you'll want to edit this part: --env=ECS_CLUSTER=your-cluster-here

Hope this helps.

MrDuk

The current documentation on Using Data Volumes in Tasks seems to address this problem:

Prior to the release of the Amazon ECS-optimized AMI version 2017.03.a, only file systems that were available when the Docker daemon was started are available to Docker containers. You can use the latest Amazon ECS-optimized AMI to avoid this limitation, or you can upgrade the docker package to the latest version and restart Docker.
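
For an instance that is still running an older AMI, the upgrade-and-restart route would look roughly like this (a sketch assuming the Amazon Linux 1 ECS-optimized AMI, where the agent runs as the ecs upstart service):

# Upgrade Docker and restart it after the EBS volume has been mounted,
# then bring the ECS agent back up.
sudo yum update -y docker
sudo stop ecs
sudo service docker restart
sudo start ecs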

jfrantzius