
Say I have a containerized batch job in AWS. It needs to mount a snapshot, do some work with it for a few hours, then shut down. The catch is that the specific snapshot isn't known at template/AMI creation time (it changes between runs). The correct one will be tagged, though, and is straightforward to look up at runtime. How do I mount it?

create-volume/attach-volume can be run from within the container, adding the snapshot volume to the host. I can define the job with a mountpoint in advance to access the mounted volume once it becomes available. The problem is that I don't see a way, from within the container, to have the host actually mount the attached volume, so the device shows up in /dev but no further.
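
For reference, the runtime lookup/attach step is something like the following, run from inside the container with the AWS CLI. This is a sketch only: the tag key/value Purpose=batch-data, the device name /dev/sdh, and the required IAM permissions on the instance role are assumptions, and instance metadata is queried via IMDSv1.

# Identify the host instance and its availability zone from instance metadata.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)

# Look up the tagged snapshot (hypothetical tag Purpose=batch-data).
SNAPSHOT_ID=$(aws ec2 describe-snapshots \
  --filters Name=tag:Purpose,Values=batch-data \
  --query 'Snapshots[0].SnapshotId' --output text)

# Create a volume from it in the same AZ and attach it to the host.
VOLUME_ID=$(aws ec2 create-volume --snapshot-id "$SNAPSHOT_ID" \
  --availability-zone "$AZ" --query 'VolumeId' --output text)
aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
aws ec2 attach-volume --volume-id "$VOLUME_ID" \
  --instance-id "$INSTANCE_ID" --device /dev/sdh
# Note: on Nitro instances the device surfaces under an NVMe name (e.g. /dev/nvme1n1).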

1 Answer


Turns out the answer is a named pipe, adapted from this answer: https://stackoverflow.com/a/63719458/2062731

Adding the following to the instance user-data sets up a mount monitor on the host:

# Create a named pipe the container can write "device mountpoint" pairs to.
mkfifo /var/mount_pipe
# Block on the pipe; each line written becomes the arguments to mount.
while true; do sudo mount $(cat /var/mount_pipe); done &

/var/mount_pipe is bind-mounted into the container, and mount commands are sent from within the container like echo "/dev/sdh /mnt/foo" > /var/mount_pipe
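
The pipe itself can be exposed with an ordinary host volume and mountpoint in the job definition, along these lines (a sketch in CloudFormation-style YAML; the volume name is illustrative):

Volumes:
  - Name: mount-pipe
    Host:
      SourcePath: /var/mount_pipe
MountPoints:
  - SourceVolume: mount-pipe
    ContainerPath: /var/mount_pipe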

Bonus: To make sure the newly mounted volume is visible within the container, you'll need bind propagation set to something non-private, which AWS doesn't support directly. When defining your mountpoints in the job definition, use

ContainerPath: "/mnt/foo:rslave" # Add a `:` to push extra args.

Since the arg gets passed to the Docker agent unchanged, the final result is --volume /mnt/foo:/mnt/foo:rslave. This is undocumented, so beware, but I don't know of another way to set bind-propagation options in AWS Batch.
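
Put together with that workaround, the data mountpoint ends up looking roughly like this (again a sketch; the volume name and paths are illustrative):

Volumes:
  - Name: snapshot-data
    Host:
      SourcePath: /mnt/foo
MountPoints:
  - SourceVolume: snapshot-data
    ContainerPath: "/mnt/foo:rslave" # Passed through verbatim, yielding --volume /mnt/foo:/mnt/foo:rslave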