
I'm trying to automate some of my manual tasks on a VM. The VM doesn't allow direct root login, so I have to connect as a different user and then escalate to root. When I switch to root, the password prompt differs from the default prompt. The prompt I see is shown below:

==================

[user1@vm-1 tmp]$ su - root
Enter login password:

I wrote a playbook to test connectivity. The play looks like this:

=====================================

- hosts: vm-1
  any_errors_fatal: true
  become: true
  become_method: su
  become_user: root
  gather_facts: no
  vars:
    ansible_become_pass: "r00t"
  tasks:
    - name: Test me
      command: 'echo works'

=====================================

My inventory file looks like this:

localhost ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
vm-1 ansible_ssh_host=1.2.3.4 ansible_connection=ssh ansible_ssh_user=user1 ansible_ssh_pass=password ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
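As a side note (likely unrelated to the timeout itself), the `ansible_ssh_*` variable names were deprecated in Ansible 2.0 in favor of shorter names. A sketch of the same inventory with the newer names, which should be behaviorally equivalent:

```ini
# Inventory sketch using the post-2.0 variable names
localhost ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
vm-1 ansible_host=1.2.3.4 ansible_connection=ssh ansible_user=user1 ansible_password=password ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
```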

=====================================

With this config, when I try to run the play, I get the error below:

fatal: [vm-1]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}

The same playbook works on a different VM, but there the prompt when switching to root is simply "Password".

Appreciate your help on this.

By the way, I tried this with Ansible 2.4 and 2.5. Both releases gave the same error.

Thanks in advance. Ramu

  • I think this bug is mentioned on GitHub for some Ansible 2.x versions. You can try increasing the timeout by adding timeout=30 in the ansible.cfg – AHT Jul 30 '18 at 16:30
  • 1
  • I was getting this exact error with 2.8.2, and it turned out that my remote VM had its DNS settings wrong. After correcting `/etc/resolv.conf` I no longer got the error. – Server Fault Jul 16 '19 at 14:12
  • @ServerFault can you tell us how did you correct 'resolv.conf' file, and what was wrong with it? A link to gist would be much appreciated. Thanks! – Lukasz Dynowski Aug 24 '19 at 20:31
  • @LukaszDynowski - I had migrated my VM from a testing environment, into production. The DNS resolvers in testing have a different IP address than production. Once I reconfigured `/etc/resolv.conf` to point to the correct host, the problem went away. I would surmise since the user initiating `sudo` was an LDAP account, ansible was unable to find the LDAP server via DNS in order to authenticate the elevated session. Pretty basic resolver (NOTE: I do not use the `resolvconf` Ubuntu package on my servers) but here it is: https://pastebin.com/dL38PujC – Server Fault Aug 26 '19 at 13:52
  • @ServerFault My use case is slightly different: I know the IP addresses of my production machines, and I work in an LDAP-less setup. But many thanks for your input, much appreciated :) – Lukasz Dynowski Aug 26 '19 at 14:16

1 Answer


I had difficulties tracking down an open ticket but here is one that is closed and has some workarounds and some solutions that may or may not work for you: https://github.com/ansible/ansible/issues/14426

I have had at least two machines where none of the listed solutions worked. On those machines even a direct SSH login without Ansible was slow, and a reboot did not help. I was unable to figure out the issue, so in the end I just rebuilt the machines.

As @AHT said, you could increase the timeout to 30 seconds in ansible.cfg; however, I think this should only be a temporary measure, since it masks the bigger issue.
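For reference, a minimal sketch of the ansible.cfg changes. The timeout bump is the workaround mentioned above; the `prompt_l10n` option of the `su` become plugin (available from Ansible 2.8 onward, so not in the 2.4/2.5 versions you tested) lets you declare non-standard prompts such as "Enter login password" so the plugin can recognize them. Both values here are illustrative, not tested against your environment:

```ini
# ansible.cfg (sketch; adjust to your setup)
[defaults]
# Raise the connection/escalation timeout from the default 10s
timeout = 30

# Ansible 2.8+ only: extra prompt strings the 'su' become plugin
# should match when waiting for the password prompt
[su_become_plugin]
prompt_l10n = Enter login password, Password
```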