
The synchronize module of Ansible (v1.6.5) prompts for the SSH key passphrase ("Enter passphrase for key") even though I already entered it at the beginning of the playbook run.

Any idea why?

I run my playbook with the following options:

-u myuser --ask-sudo-pass --private-key=/path/to/id_rsa

Here is my synchronize task:

- name: synchronize source files in src location
  sudo: yes
  synchronize: src={{local_src}} dest={{project_dirs.src}} archive=yes delete=yes rsync_opts=["--compress"]
  when: synchronize_src_files

UPDATE with ssh-agent

Following Lekensteyn's advice, I tried ssh-agent. I no longer get a prompt, but the task fails. What am I missing?

eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa

The error:

TASK: [rolebooks/project | synchronize source files in src location] **********
failed: [10.0.0.101] => {"cmd": "rsync --delay-updates -FF --compress --delete-after --archive --rsh 'ssh -i /home/vagrant/.ssh/id_rsa -o StrictHostKeyChecking=no' --rsync-path=\"sudo rsync\" [--compress] --out-format='<<CHANGED>>%i %n%L' /projects/webapp mike@10.0.0.101:/var/local/sites/project1/src", "failed": true, "rc": 12}
msg: sudo: no tty present and no askpass program specified
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.0]
– Michael

5 Answers


The synchronize module (up to at least Ansible 1.6.6) seems to ignore the SSH control socket that Ansible normally opens, which is why you are asked to authenticate again. Your task could expand to the following:

{
    "cmd": "rsync --delay-updates -FF --compress --archive
        --rsh 'ssh  -o StrictHostKeyChecking=no'
        --out-format='<<CHANGED>>%i %n%L'
        /home/me/src/ user@host:/dest/",
    "failed": true,
    "rc": 23
}

To see these details, run your playbook with the -v option. As a workaround, you can start ssh-agent and cache your SSH key with ssh-add. Refer to their manual pages for details.

Extra caveats with the synchronize module:

  • When run with sudo: yes, Ansible invokes rsync with --rsh 'sudo ssh', which breaks if the remote sudo configuration requires a password and/or a TTY. Solution: set sudo: no in your task definition.
  • The user that logs into the remote machine is your SSH user (ansible_ssh_user), not the sudo user. I have not found a supported way to override this user (apart from an untested hack that smuggles an -o User option in via one of the other options, e.g. dest_port="22 -o User=your_user", in combination with set_remote_user=yes).

This is taken from my tasks file:

- name: sync app files
  sudo: no
  synchronize: src={{app_srcdir}}/ dest={{appdir}}/
               recursive=yes
               rsync_opts=--exclude=.hg
# and of course Ubuntu 12.04 does not support --usermap..
#,--chown={{deployuser}}:www-data
# the above goes bad because ansible_ssh_user=user has no privileges
#  local_action: command rsync -av --chown=:www-data
#                 {{app_srcdir}}
#                 {{deployuser}}@{{inventory_hostname}}:{{appdir}}/
#  when: app_srcdir is defined
# The above still goes bad because {{inventory_hostname}} is not ssh host...
– Lekensteyn
  • Ok, so what I am experiencing is the "normal" behaviour; it's not a problem with my case only. Does using ssh-agent work, or is it a suggestion to try? – Michael Jul 18 '14 at 01:31
  • 1
    It is "normal" behavior, but it is undersirable that rsync does not use the control socket of Ansible which should avoid the need to reauthenticate. ssh-agent works for sure, it is what I (and probably most other people) do. It is also mentioned on http://docs.ansible.com/intro_getting_started.html – Lekensteyn Jul 18 '14 at 08:26
  • 1
    @YAmikep The error comes from `sudo` which means that the SSH command was successful. Have I already mentioned that I find the `synchronize` the worst, unintegrated part of ansible? – Lekensteyn Jul 21 '14 at 08:24
  • Yes, I agree, this module is not the best right now. I cannot use the copy module, it takes too much time, so what I have done that works is to change the ownership and permissions of the remote folder before using synchronize. I replied to my question with my solution. What do you think? – Michael Jul 21 '14 at 17:01

I think by default synchronize explicitly sets a username on the rsync command; you can prevent this and let rsync pick up the remote user from your SSH config file.

http://docs.ansible.com/synchronize_module.html

set_remote_user — put user@ for the remote paths. If you have a custom ssh config to define the remote user for a host that does not match the inventory user, you should set this parameter to "no".

I have a remote user configured in my ssh config and needed to add set_remote_user=no to get synchronize to work, otherwise it tried to use the wrong username and neither ssh key nor password would work.
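A sketch of what that looks like (the host alias and user name below are hypothetical; adapt them to your own setup). In ~/.ssh/config on the control machine:

```
Host webserver
    HostName 10.0.0.101
    User deploy
    IdentityFile ~/.ssh/id_rsa
```

and in the task, tell synchronize not to prepend user@ to the remote path:

```
- name: sync app files via ssh config user
  synchronize: src={{app_srcdir}}/ dest={{appdir}}/ set_remote_user=no
```

With set_remote_user=no, rsync connects as whatever user ssh resolves for that host — here, deploy.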

– Sean Burlington

I tried using the copy module, but it takes way too much time. So to make the synchronize module work, I did the following. It is not perfect, but at least it works.

  1. Change the ownership and permissions of the remote destination folder to the SSH user I connect as.

  2. Use synchronize without sudo.

  3. Set the ownership and permissions of the remote destination back to what they should be.
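The three steps can be sketched as a task sequence (a sketch only, in Ansible 1.6 syntax; the variables are the ones used elsewhere in this question, and {{deployuser}}/www-data are placeholders for the intended owner; recurse=yes can be slow on large trees):

```
# 1. Hand the destination over to the SSH user so rsync can write without sudo.
- name: give destination to the ssh user
  sudo: yes
  file: path={{project_dirs.src}} state=directory
        owner={{ansible_ssh_user}} recurse=yes

# 2. Synchronize without sudo, authenticating as the SSH user.
- name: synchronize source files in src location
  sudo: no
  synchronize: src={{local_src}} dest={{project_dirs.src}} archive=yes delete=yes

# 3. Restore the ownership and permissions the application expects.
- name: restore destination ownership
  sudo: yes
  file: path={{project_dirs.src}} state=directory
        owner={{deployuser}} group=www-data recurse=yes
```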

– Michael
  • If synchronize fails, (3) won't be executed, which may leave your deployment in a broken state. This could be worked around by adding `ignore_errors: True` and an additional check after `register: sync_result`, but this is also non-ideal. Perhaps someone has reported this already at the Ansible bug tracker? – Lekensteyn Jul 21 '14 at 20:53
  • @Lekensteyn There is a bug report about "Synchronize module asking for a password during playbook run" https://github.com/ansible/ansible/issues/7071 – Michael Jul 22 '14 at 01:06

Disabling tty_tickets in /etc/sudoers on the remote machine fixes this problem (at the cost of slightly reduced security). E.g.,

#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset,!tty_tickets
# ...
– nwk

The best way to approach this is to install your key into the root user's SSH authorized_keys on the remote server.

– podarok
  • 2
    This was probably prematurely downvoted, considering that many remote container services (such as EC2) _technically_ already support root logins (via `ssh ec2-user@node; sudo su`). So it isn't an actual security risk to copy over the keys to root. If anything, it makes life simpler. – robert Mar 10 '16 at 20:12
  • Removing the key from the remote root user after `synchronize` completes would constrain the security risk to a narrow time window. – Derek Mahar Dec 15 '16 at 17:33