
I have a Dockerized Angular/Node.js app that I'm trying to deploy via GitLab CI.

Using GitLab CI, I build and push the images to the GitLab container registry using a dedicated build VM/server with a Runner on it, and then the images should be pulled and started as containers in another server, i.e. the production server.

This is what my gitlab-ci.yml file looks like right now:

image: docker:latest

#services:
#    - docker:dind

stages:
    - build
    - deploy

build-1:
    stage: build
    only:
        - deploy
    script:
        - docker login -u $GITLAB_USERNAME -p $CI_ACCESS_TOKEN $CI_REGISTRY
        - docker build -t $FRONTEND_IMG .
        - echo Pushing Docker image to GitLab
        - docker push $FRONTEND_IMG
    when: manual
    tags:
        - my-runner

build-2:
  stage: build
  only:
    - deploy
  script:
    - docker login -u $GITLAB_USERNAME -p $CI_ACCESS_TOKEN $CI_REGISTRY
    - docker build -t $BACKEND_IMG .
    - docker push $BACKEND_IMG
  when: manual
  tags:
    - my-runner

deploy-live:
    stage: deploy
    only:
        - deploy
    before_script:
        ## Install ssh-agent if not already installed, it is required by Docker.
        ## (change apt-get to yum if you use an RPM-based image)
        ##
        - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'

        ##
        ## Run ssh-agent (inside the build environment)
        ##
        - eval $(ssh-agent -s)

        ##
        ## Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
        ## We're using tr to fix line endings which makes ed25519 keys work
        ## without extra base64 encoding.
        ## https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556
        ##
        - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -

        ##
        ## Create the SSH directory and give it the right permissions
        ##
        - mkdir -p ~/.ssh
        - chmod 700 ~/.ssh

        # - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
        # - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
        ##
        ## Use ssh-keyscan to scan the keys of your private server. Replace gitlab.com
        ## with your own domain name. You can copy and repeat that command if you have
        ## more than one server to connect to.
        ##
        - ssh-keyscan $SERVER_IP_ADDRESS >> ~/.ssh/known_hosts
        - chmod 644 ~/.ssh/known_hosts
    script:
        - echo SSH to prod server
        - ssh $SERVER_USERNAME@$SERVER_IP_ADDRESS && ip addr show && docker login -u $GITLAB_USERNAME -p $CI_ACCESS_TOKEN $CI_REGISTRY && docker pull $FRONTEND_IMG && docker pull $BACKEND_IMG && docker-compose -f docker-compose.yml up -d
    when: manual
    tags:
        - my-runner

  • The problem is, the docker commands seem to get executed on the build server (instead of the production server where we ssh) and the app is accessible from there but not from the production server.

  • When I run docker images on the production server after the deployment, the list comes up empty. But when I do that in the build server, the images that were built are there.

  • The job seems to complete successfully with no error messages, but I do get these messages:

Pseudo-terminal will not be allocated because stdin is not a terminal.

Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-66-generic x86_64)
  * Documentation:  https://help.ubuntu.com
  * Management:     https://landscape.canonical.com
  * Support:        https://ubuntu.com/advantage
   System information as of Wed Apr 15 00:58:45 UTC 2020
   System load:  0.0               Processes:              110
   Usage of /:   6.0% of 24.06GB   Users logged in:        2
   Memory usage: 26%               IP address for eth0:    x.x.x.x
   Swap usage:   0%                IP address for docker0: x.x.x.x
 121 packages can be updated.
 73 updates are security updates.
 mesg: ttyname failed: Inappropriate ioctl for device

What am I missing or doing wrong?

herondale
    `ssh myserver && do_something` will connect with ssh and then run do_something on the current host (once ssh exits). `ssh myserver do_something && otherthing` will execute do_something via ssh and otherthing on the current host. To run your full set of commands over ssh you have to quote the full command => `ssh myserver "command1 && command2 &&....."` – Zeitounator Apr 17 '20 at 22:15
  • @Zeitounator hi, thanks for this, it helped. But now I think I need to figure out how to use `docker-compose` if this is going to be my approach. Thank you! – herondale Apr 20 '20 at 13:12
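The distinction Zeitounator describes can be reproduced locally without any ssh server, using `sh -c '…'` as a stand-in for `ssh myserver '…'` (an assumption purely for illustration; the binding of `&&` works the same way):

```shell
#!/bin/sh
# Unquoted chain: the inner shell only runs `true`; the echo is a separate
# command chained by the OUTER shell, i.e. it runs "locally".
sh -c 'true' && echo ran-locally

# Quoted chain: the entire string is the inner shell's command line, so both
# echos run "remotely" (inside the inner shell).
sh -c 'echo ran-remotely && echo also-remotely'
```

With `ssh` in place of `sh -c`, the unquoted form waits for the interactive session to end and then runs the rest of the chain on the runner, which is exactly the symptom in the question.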

2 Answers


After taking a look at your CI code: you should use Ansible when you want to run your containers on the production server.

Ansible is better than

ssh myserver "command1 && command2 &&....."
fei yang

I can share with you my Ansible playbook, deploy.yml:

# https://stackoverflow.com/questions/59384708/ansible-returns-with-failed-to-import-the-required-python-library-docker-sdk-f/65495769#65495769
---
- name: Build
  hosts: local
  connection: local
  tags:
    - build
  tasks:
    - name: Build Image
      community.general.docker_image:
        build:
          path: .
          pull: no
        name: registry.digitalocean.com/main/example-com
        push: true
        source: build
        force_source: yes
      environment:
        DOCKER_BUILDKIT: 1
- name: Deploy
  hosts: remote
  tags:
    - deploy
  vars:
    path: /root/example.com
  tasks:
    - name: Creates directory
      file:
        path: "{{ path }}"
        state: directory
    - name: Copy Docker Compose
      copy:
        src: docker-compose.yml
        dest: "{{ path }}/docker-compose.yml"
    - name: Reload Compose
      community.general.docker_compose:
        pull: yes
        project_src: "{{ path }}"

And the GitLab CI file, .gitlab-ci.yml:

variables:
  DOCKER_REGISTRY_DOMAIN: "registry.digitalocean.com"
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  DOCKER_DRIVER: overlay2

image: docker:latest

services:
  - docker:dind

.deploy:
  image: archlinux:latest
  stage: deploy
  before_script:
    - pacman -Sy make ansible python python-pip openssh docker --noconfirm
    - docker login -u ${DOCKER_TOKEN} -p ${DOCKER_TOKEN} ${DOCKER_REGISTRY_DOMAIN}
    - pip3 install docker docker-compose
    - eval $(ssh-agent -s)
    - ssh-add <(echo $SSH_PRIVATE_KEY_BASE64 | base64 -d)
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - ansible-playbook -i hosts deploy.yml

deploy:
  extends: .deploy
  environment:
    name: production
    url: https://example.com
  only:
    refs:
      - prod

Finally, the hosts inventory:

[local]
127.0.0.1 env=prod

[remote]
xxx.xxx.xxx ansible_user=root env=prod
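For reference, a possible way to drive the playbook outside of CI (assuming Ansible, the community.general collection, and the Docker Python SDK are installed, and that the `hosts` inventory above sits next to deploy.yml; the `--tags` values match the play tags in the playbook):

```shell
# Build and push the image from the [local] host only:
ansible-playbook -i hosts deploy.yml --tags build

# Copy docker-compose.yml to the [remote] host and reload the stack:
ansible-playbook -i hosts deploy.yml --tags deploy
```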

Like fei yang, I too recommend replacing the ssh command with Ansible.


I think the problem is here:

ssh $SERVER_USERNAME@$SERVER_IP_ADDRESS && ip addr show

On my computer I can run:

curl ipinfo.io

and get:

{
  "ip": "193.118.225.242"
...

Then I typed:

ssh root@104.248.40.145 && curl ipinfo.io

I see:

Last login: Thu Jun 17 07:53:38 2021 from 193.118.225.242

I am logged in to the server, and I can't see the results of ipinfo.

When I type

exit

to log out from the remote server, I see:

logout
Connection to 104.248.40.145 closed.
{
  "ip": "193.118.225.242",

To execute a command remotely via ssh you should not use &&; instead, quote the command, e.g.

ssh root@104.248.40.145 "curl ipinfo.io"
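Applied to the deploy-live job from the question, the fix is to pass the whole remote command chain as a single quoted argument to ssh. This is only a sketch; the variables are the ones defined in the question's CI config, and the double quotes mean they are expanded on the runner before the command is sent to the server:

```shell
# Everything inside the quotes runs on the production server in one session.
ssh "$SERVER_USERNAME@$SERVER_IP_ADDRESS" "
  docker login -u $GITLAB_USERNAME -p $CI_ACCESS_TOKEN $CI_REGISTRY &&
  docker pull $FRONTEND_IMG &&
  docker pull $BACKEND_IMG &&
  docker-compose -f docker-compose.yml up -d
"
```

Note this assumes docker-compose.yml is already present in the remote user's home directory, since that is where the session starts.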
Daniel