2

I have been trying for a while to copy files via ssh from a remote server (not GitHub) inside the Docker image I want to build, but I can't connect to the host. Here is the Dockerfile up to the critical point:

FROM r-base:latest

### Install libs
RUN apt-get update && apt-get install -y \
    sudo \
    gdebi-core \
    pandoc \
    pandoc-citeproc  \
    openssh-server \
    openssh-client \
    libcurl4-gnutls-dev \
    libcairo2-dev \
    libxt-dev \
    xtail \
    wget \
    libssl-dev \
    libxml2 \
    libxml2-dev \
    libv8-dev \
    curl \
    gnupg \
    git

COPY ./setup setup

RUN mv setup/.ssh ~/.ssh
RUN touch ~/.ssh/known_hosts
RUN chmod -R 400 ~/.ssh
RUN ssh-agent sh -c 'ssh-add /root/.ssh/id_rsa'
#RUN eval "$(ssh-agent -s)"
#RUN ssh-add -K ~/.ssh/id_rsa #This is commented out as it causes an error
RUN ssh-keyscan hostname > ~/.ssh/known_host
RUN ssh-keygen -R hostname

## THIS IS THE COMMAND WE NEED TO RUN...
RUN scp -r user@hostname:/path/to/folder ./

The owner of the folder is user. The id_rsa.pub was added to the authorized_keys file of the user user on the host, and ssh was restarted there. However, I get a "Failed authentication" error. I tried to use my personal id_rsa, which works from the command line, but it also fails inside Docker. Is this a Docker issue, or is it solvable?

alexandra
  • did you manage to run the scp command from inside the container? – gCoh Jun 25 '19 at 13:24
  • @gCoh I can run it inside the container, but I am prompted to enter 'yes' for the host authenticity and then the user password. – alexandra Jun 25 '19 at 13:32

2 Answers


I finally managed to do it by generating a key with the command suggested in this post.

So to reproduce my case, locally:

cd setup/.ssh/
ssh-keygen -q -t rsa -N '' -f id_rsa

Then, on the server, append the contents of id_rsa.pub to the authorized_keys file of the user user. You can copy the contents to the clipboard using xclip: xclip -sel clip < setup/.ssh/id_rsa.pub
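The two steps above (generate a passwordless key locally, then install the public key on the server) can be sketched as follows; user@hostname is a placeholder, and ssh-copy-id is one convenient way to append the key to the remote authorized_keys file:

```shell
# Generate a passwordless RSA key pair inside the build context
mkdir -p setup/.ssh
ssh-keygen -q -t rsa -N '' -f setup/.ssh/id_rsa

# Append the public key to ~/.ssh/authorized_keys on the remote host
# (prompts for the user's password once; user@hostname is a placeholder)
ssh-copy-id -i setup/.ssh/id_rsa.pub user@hostname
```

After this, ssh and scp from the container should authenticate with the key instead of prompting for a password.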

Dockerfile:


FROM r-base:latest

### Install libs
RUN apt-get update && apt-get install -y \
    sudo \
    gdebi-core \
    pandoc \
    pandoc-citeproc  \
    openssh-server \
    openssh-client \
    libcurl4-gnutls-dev \
    libcairo2-dev \
    libxt-dev \
    xtail \
    wget \
    libssl-dev \
    libxml2 \
    libxml2-dev \
    libv8-dev \
    curl \
    gnupg \
    git

COPY ./setup setup

RUN chmod -R 600 ~/.ssh
RUN echo "IdentityFile /root/.ssh/id_rsa" >> /etc/ssh/ssh_config
RUN echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config

## THIS IS THE COMMAND WE NEED TO RUN...
RUN scp -r user@hostname:/path/to/folder ./
alexandra

There’s no specific requirement that you must do everything inside your Dockerfile. Especially things that require remote ssh access are better done outside Docker: consider that anyone who gets your image later on can docker cp a valid ssh key out of it and potentially get access to your internal systems.

For Docker caching reasons, it’s also not a good idea to git clone or otherwise try to remotely retrieve your application from inside the Dockerfile. If you re-run docker build, and nothing else in your Dockerfile has changed, then Docker will skip over the scp step too, even if the remote content has changed.
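If you did keep the scp step in the Dockerfile anyway, a common workaround for this caching pitfall (not part of the answer above) is to thread a changing build argument through the Dockerfile just before the scp line, e.g. a hypothetical `ARG CACHE_BUST`, so the cached layer is invalidated on every build:

```shell
# Assumes the Dockerfile declares `ARG CACHE_BUST` immediately before
# the scp RUN step; passing a fresh value (here, the current Unix time)
# forces Docker to re-run everything from that ARG onward.
docker build --build-arg CACHE_BUST="$(date +%s)" .
```

This trades away caching for freshness, which is another reason the fetch is better done outside the build.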

My general recommendation would be to copy this content from outside the Dockerfile, then build it:

# Using whatever credentials are in your local ssh-agent
scp -r user@hostname:/path/to/stuff dist/

# Then your Dockerfile doesn’t need scp or credentials
docker build .

Your Dockerfile then doesn't need a bunch of extra packages that are only relevant to this path: you should be able to remove sudo, openssh-server, openssh-client, xtail, curl, gnupg, and git without affecting the single main process you're trying to run inside your container.
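Dropping those packages from the original install step would leave something like the following (a sketch; the exact list depends on what your application actually links against):

```dockerfile
### Install libs (trimmed: sudo, openssh-*, xtail, curl, gnupg, git removed)
RUN apt-get update && apt-get install -y \
    gdebi-core \
    pandoc \
    pandoc-citeproc \
    libcurl4-gnutls-dev \
    libcairo2-dev \
    libxt-dev \
    wget \
    libssl-dev \
    libxml2 \
    libxml2-dev \
    libv8-dev
```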

David Maze
  • Thanks for the answer! The idea was also that this image will be deployed on a server (maybe also for other users) and the code could be updated in the running container by pulling. Something like a deploy-key on github. – alexandra Jun 25 '19 at 15:43
  • That’s not how Docker typically works. If you need to update the code, you generally build a new image, delete the existing container(s), and restart with the new image. You don’t patch or update containers, just replace them. – David Maze Jun 25 '19 at 15:47
  • The update of the code would be bug fixes, if any. New releases will be in new images. Docker works for you, you don't work for Docker :)) – alexandra Jun 26 '19 at 07:53
  • Also why install git in the container if you don't get to use it? – alexandra Jun 26 '19 at 08:07