240

Is there a way to ignore the SSH authenticity checking made by Ansible? For example, when I've just set up a new server I have to answer yes to this question:

GATHERING FACTS ***************************************************************
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
RSA key fingerprint is xx:yy:zz:....
Are you sure you want to continue connecting (yes/no)?

I know that this is generally a bad idea but I'm incorporating this in a script that first creates a new virtual server at my cloud provider and then automatically calls my ansible playbook to configure it. I want to avoid any human intervention in the middle of the script execution.

Johan

14 Answers

344

Two options - the first, as you said in your own answer, is setting the environment variable ANSIBLE_HOST_KEY_CHECKING to False.

The second way to set it is to put it in an ansible.cfg file, and that's a really useful option because you can either set it globally (at system or user level, in /etc/ansible/ansible.cfg or ~/.ansible.cfg), or in a config file in the same directory as the playbook you are running.

To do that, make an ansible.cfg file in one of those locations, and include this:

[defaults]
host_key_checking = False

You can also set a lot of other handy defaults there, like whether or not to gather facts at the start of a play, whether to merge hashes declared in multiple places or replace one with another, and so on. There's a whole big list of options here in the Ansible docs.
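
For illustration, here is a minimal ansible.cfg sketch combining the host key setting with a couple of the other defaults mentioned above (the two extra options are just examples of what the config supports, not something this answer requires):

[defaults]
host_key_checking = False
# only gather facts when a play explicitly runs the setup module
gathering = explicit
# merge dictionaries declared in multiple places instead of replacing them
hash_behaviour = merge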


Edit: a note on security.

SSH host key validation is a meaningful security layer for persistent hosts - if you are connecting to the same machine many times, it's valuable to accept the host key locally.

For longer-lived EC2 instances, it would make sense to accept the host key with a task run only once on initial creation of the instance:

- name: Write the new ec2 instance host key to known hosts
  connection: local
  shell: "ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts"

There's no security value in checking host keys on instances that you stand up dynamically and remove right after playbook execution, but there is security value in checking host keys for persistent machines. So you should manage host key checking differently per logical environment.

  • Leave checking enabled by default (in ~/.ansible.cfg)
  • Disable host key checking in the working directory for playbooks you run against ephemeral instances (./ansible.cfg alongside the playbook for unit tests against vagrant VMs, automation for short-lived ec2 instances)
Geert Smelt
nikobelia
  • 7
    Anybody know what's the best practice here? For instance, you could periodically run a script to reset your known hosts, which would be more secure (unless subjected to a MITM attack during that window). Ignoring authenticity by default eliminates one of SSH's primary security mechanisms – TonyH Mar 29 '17 at 19:12
  • 3
    I like the pattern my team use: we put ansible.cfg files that disable host key checking in the working directories for playbooks we run against ephemeral instances (unit tests which run on vagrant VMs, AWS ec2 instances, etc) and leave checking enabled at system level. – nikobelia Mar 30 '17 at 19:24
  • 1
    That way, you can manage host key checking *per logical environment*. There's no security value for checking host keys on instances that you stand up dynamically and remove right after playbook execution, but there is security value in checking host keys for persistent machines. So you should have different defaults for those different use cases. – nikobelia Mar 30 '17 at 19:26
  • 2
    If some mechanism is used to provision new machines, permanent or temporary, that mechanism should provide you with the SSH public key of this machine. You can then store it in your various local `known_hosts` files for SSH and Ansible to recognize the machine. Failing to do so, especially by disabling host key checking, degrades the security of SSH to almost zero, and allows MITM attacks. A lot of machines felt to be in an "internal network" are actually connected to the Internet, where a single faster DNS response lets you talk to the attacker instead of your target. – aef Sep 18 '17 at 12:15
  • 2
    @TonyH when setting up many hosts via AWS Cloudformation and Ansible, I ran `ssh-keyscan ` on a trusted machine (for me, it's a bastion/jump host) inside the same network, and put the results in `known_hosts` For setting up that trusted host, AWS exposes the host key in the instance's startup logs, so hunting down that key was one manual step I never cut out if I was doing a complete recreation of my environment. But that host usually didn't need to be deleted. [This](https://unix.stackexchange.com/a/276007) may help. – dcc310 Jan 17 '18 at 02:54
  • Any way to prevent adding thousands of repeated lines to your `known_hosts` this way? – xjcl Jul 23 '20 at 12:25
  • If you're spinning up short lived instances, I would suggest you disable host key checking for the playbook using one of the suggestions above (either disable with an env var or your local ansible.cfg). If you're running Ansible against persistent instances, use @dcc310's suggestion to add the key to your known_hosts file. – nikobelia Jul 26 '20 at 18:21
58

I found the answer, you need to set the environment variable ANSIBLE_HOST_KEY_CHECKING to False. For example:

ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ...
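
In the provisioning script from the question, the variable only needs to be set for that single ansible-playbook invocation. A rough sketch of how it can fit together (my-cloud-cli and the file names are placeholders, not real commands):

#!/bin/bash
set -e
# placeholder: create the VM with your provider's CLI and capture its address
NEW_IP=$(my-cloud-cli create-server --name web01 --output ip)
# disable the interactive host key prompt for this one run only
ANSIBLE_HOST_KEY_CHECKING=False \
  ansible-playbook -i "${NEW_IP}," configure.yml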
Johan
  • 2
    Yes, but you said you're using this for a new server you've just setup. This avoids having to deal with the host key this time, but what about subsequent SSH connections? Your setup script runs, configures the server, and it's done. Now you have other playbooks you run, say, or you have scripts that use SSH. Now _they're_ broken because the host key is still not in known_hosts. You've only delayed your problem. In short, what you've written here doesn't sound like a good answer to the question you asked. – Todd Walton Jun 21 '18 at 15:40
  • This is used in a bash script when creating _new_ servers, it's not used for anything else. – Johan Jan 11 '20 at 06:08
20

Changing host_key_checking to false for all hosts is a very bad idea.

The only time you want to ignore it is on "first contact", which this playbook will accomplish:

---
- name: Bootstrap playbook
  hosts: all  # adjust to your target hosts or group
  # Don't gather facts automatically because that will trigger
  # a connection, which needs to check the remote host key
  gather_facts: false

  tasks:
    - name: Check known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -F {{ inventory_hostname }}
      register: has_entry_in_known_hosts_file
      changed_when: false
      ignore_errors: true
    - name: Ignore host key for {{ inventory_hostname }} on first run
      when: has_entry_in_known_hosts_file.rc == 1
      set_fact:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
    # Now that we have resolved the issue with the host key
    # we can "gather facts" without issue
    - name: Delayed gathering of facts
      setup:

So we only turn off host key checking if we don't have the host key in our known_hosts file.

davidolrik
  • 1
    This is a much better solution than setting `ANSIBLE_HOST_KEY_CHECKING=false`, thanks. Like you say, you only want to add _new_ host keys to be added, not to ignore the host checking altogether. – Per Lundberg Apr 21 '23 at 08:12
  • One caveat though: `ssh-keygen -F ` regretfully no longer works on Debian/Ubuntu, where `HashKnownHosts yes` is enabled by default. You must do `ssh-keygen -F ` instead, which may or may not (depending on your setup/SSH config) be much more complex to achieve unfortunately. :/ – Per Lundberg Apr 21 '23 at 09:30
  • In my case, I am provisioning a new host and needed to import the host key at first interaction. Instead of using `set_fact`, set `ansible_ssh_common_args: "-o StrictHostKeyChecking=no"` in the `vars:` section of the first task that opens an SSH connection. Doing so also registers the host key, allowing host key checking to happen normally in later tasks. – drewburr Jul 02 '23 at 04:08
12

You can pass it as command line argument while running the playbook:

ansible-playbook play.yml --ssh-common-args='-o StrictHostKeyChecking=no'

paresh patil
10

If you don't want to modify ansible.cfg or the playbook.yml then you can just set an environment variable:

export ANSIBLE_HOST_KEY_CHECKING=False
Rene B.
9

Following up on nikobelia's answer:

For those using Jenkins to run the playbook, I just added the environment variable ANSIBLE_HOST_KEY_CHECKING=False to my Jenkins job before running ansible-playbook. For instance:

export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook 'playbook.yml' \
--extra-vars="some vars..." \
--tags="tags_name..." -vv
dsaydon
7

Ignoring checking is a bad idea as it makes you susceptible to Man-in-the-middle attacks.

I took the liberty of improving nikobelia's answer by only adding each machine's key once and reporting an accurate ok/changed status in Ansible:

- name: Accept EC2 SSH host keys
  connection: local
  become: false
  shell: |
    ssh-keygen -F {{ inventory_hostname }} || 
      ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
  register: known_hosts_script
  changed_when: "'found' not in known_hosts_script.stdout"

However, Ansible starts gathering facts before this task runs, which itself requires an SSH connection, so we have to either disable automatic fact gathering or move it to a later step:

- name: Example play
  hosts: all
  gather_facts: no  # gather facts AFTER the host key has been accepted instead

  tasks:

  # https://stackoverflow.com/questions/32297456/
  - name: Accept EC2 SSH host keys
    connection: local
    become: false
    shell: |
      ssh-keygen -F {{ inventory_hostname }} ||
        ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
    register: known_hosts_script
    changed_when: "'found' not in known_hosts_script.stdout"
  
  - name: Gathering Facts
    setup:

One kink I haven't been able to work out is that it marks all hosts as changed even if it only adds a single key. If anyone could contribute a fix that would be great!

xjcl
  • It's possible running it with `serial: 1` for the task would get the changed tally to be accurate? Or get the key, then use lineinfile to template into known_hosts. (IMO, there are good use cases for disabling host key checking - my project does when standing up instances for unit tests, or in playbooks that will always rebuild cloud hosts) – nikobelia Jul 26 '20 at 18:36
  • @nikobelia No idea but I could try out `serial: 1` – xjcl Jul 26 '20 at 18:51
  • using `delegate_to: localhost` instead of `connection: local` seems to work better for me, tally looks accurate – Sam Brinck May 16 '23 at 18:56
5

You can simply tell SSH to automatically accept fingerprints for new hosts. Just add

StrictHostKeyChecking=accept-new

to your ~/.ssh/config. It does not disable host key checking entirely; it merely suppresses the annoying question of whether you want to add a new fingerprint to your list of known hosts. In case the fingerprint for a known machine changes, you will still get the error.

This policy also works with ANSIBLE_HOST_KEY_CHECKING and other ways of passing this param to SSH.
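
If you prefer not to apply this to every host, the option can be scoped to a pattern in ~/.ssh/config; for example (the 10.0.* pattern is purely illustrative):

Host 10.0.*
    StrictHostKeyChecking accept-new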

anemyte
2

Host key checking is an important security measure, so I would not just skip it everywhere. Yes, it can be annoying if you keep reinstalling the same testing host (without backing up its SSH host keys), or if you have stable hosts but run your playbook from Jenkins without a simple way to accept the host key when connecting to a host for the first time. So:

This is what we use for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key when connecting to the host for the first time), in the inventory file:

[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'

And this is what we have for temporary hosts (in the end this ignores the host key entirely):

[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

There is also an environment variable, or you can add it to a group/host variables file. It doesn't need to be in the inventory - that was just convenient in our case.
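
For example, the same setting could live in a group variables file instead (a sketch assuming the conventional group_vars layout next to your inventory or playbook):

# group_vars/all.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'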

I used some of the other responses here and a co-worker's solution, thank you!

jhutar
1

I know the question has already been answered correctly, but I just wanted to link the Ansible doc that clearly explains when and why host key checking should be enabled or disabled: host-key-checking

justjais
  • Here is the [updated documentation link](https://docs.ansible.com/ansible/latest/inventory_guide/connection_details.html#managing-host-key-checking) – drewburr Jul 02 '23 at 04:15
1

Most of the problems appear when you want to add a new host to a dynamic inventory (via the add_host module) in a playbook. I don't want to disable host key checking permanently, so solutions like disabling it in a global config file are not OK for me. Exporting a variable like ANSIBLE_HOST_KEY_CHECKING before running the playbook is yet another thing that has to be remembered.

It's better to add a local config file in the same directory as the playbook. Create a file named ansible.cfg and paste the following:

[defaults]
host_key_checking = False

There's no need to remember to set environment variables or pass extra options to ansible-playbook, and it's easy to keep this file in the Ansible git repo.

QkiZ
1

This is the working version I use in my environment. I took the idea from this ticket: https://github.com/mitogen-hq/mitogen/issues/753

- name: Example play
  gather_facts: no
  hosts: all
  tasks:
    - name: Check SSH known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -l -F {{ inventory_hostname }}
      register: checkForKnownHostsEntry
      failed_when: false
      changed_when: false
      ignore_errors: yes
    - name: Add {{ inventory_hostname }} to SSH known hosts automatically
      when: checkForKnownHostsEntry.rc == 1
      changed_when: checkForKnownHostsEntry.rc == 1
      local_action:
        module: shell
        cmd: ssh-keyscan -H "{{ inventory_hostname }}" >> $HOME/.ssh/known_hosts

Jack Liu Shurui
0

In case you are trying to solve this for git:

There is a special git module in Ansible.

It has the parameter accept_newhostkey.

Working example:

- name: Example clone of a single branch
  ansible.builtin.git:
    repo: git@bitbucket.org:hohoho/auparser.git
    dest: /var/www/auparser
    single_branch: yes
    version: master
    accept_newhostkey: true
Tebe
-1

Use the parameter validate_certs to ignore the SSH validation:

- ec2_ami:
    instance_id: i-0661fa8b45a7531a7
    wait: yes
    name: ansible
    validate_certs: false
    tags:
      Name: ansible
      Service: TestService

By doing this, it ignores the SSH validation process.

Nitesh Jain
  • 1
    The `validate_certs` parameter simply tells boto to not validate the AWS API HTTPS cert. It doesn't affect the SSH key verification. – Matthew Dutton Oct 17 '19 at 13:15