
On an isolated network (without internet access to do public IP address lookups), I want to run a playbook from a controller against a number of target hosts where one of the tasks is to download a file via HTTP/HTTPS from the controller without hard-coding the controller IP as part of the task. E.g.

Controller: 192.168.0.5
Target 1: 192.168.0.10
Target 2: 192.168.0.11
Target 3: 192.168.0.12

The controller can have different IPs configured via DHCP, and there could be multiple network interfaces listed in ansible_all_ipv4_addresses (some of which may not be reachable from the target hosts), so it may not be straightforward to determine from ansible_facts on localhost which interface the target hosts should use, short of looping through them with a timeout until the file downloads. It seems the most robust way to determine the public IP of the controller (assuming the web server is listening on 0.0.0.0) would be to determine the originating IP of the established connection (192.168.0.5) from the target host. Is there a way to do this?

The motivation for downloading the file from the controller rather than pushing it to the remote hosts is that some of the target hosts run Windows, and the win_copy module is incredibly slow over WinRM; the Ansible documentation includes the following note:

Because win_copy runs over WinRM, it is not a very efficient transfer mechanism. If sending large files consider hosting them on a web service and using ansible.windows.win_get_url instead.
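For context, a download task on the Windows hosts could then look something like this sketch (the URL, port, destination path, and the `controller_ip` variable are hypothetical; determining `controller_ip` is exactly what this question is about):

```yaml
- name: Download file from the controller over HTTP
  ansible.windows.win_get_url:
    url: "http://{{ controller_ip }}:8080/files/app.zip"  # controller_ip still to be determined
    dest: 'C:\Temp\app.zip'
```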

gratz
  • I understand that you'd like to forward the IP address of the Control Node to the Managed Nodes in order to make the Managed Nodes capable of downloading files from the Control Node. – U880D Jun 01 '23 at 16:25
  • Can you share more information, for example your inventory or your playbook, or show a minimal example of what you are actually doing? – U880D Jun 01 '23 at 16:25

2 Answers


This was only tested on my machine, which has a single IP, against a single target, but I don't see why it would not work in your scenario.

Given the following inventories/default/hosts.yml

all:
  hosts:
    target1:
      ansible_host: 192.168.0.10
    target2:
      ansible_host: 192.168.0.11
    target3:
      ansible_host: 192.168.0.12

The following test playbook should do what you expect. Replace the dummy debug task with get_url/uri to initiate the download.

Notes:

  • This playbook assumes you have access to the `ip` command-line tool on the controller.
  • I took for granted that the controller IP used to connect to a target is the one the target can reach in the other direction. If that isn't the case, the below will not work in your situation.
---
- hosts: all
  gather_facts: false

  tasks:
    - name: Check route on controller for each target destination
      ansible.builtin.command: ip route get {{ ansible_host }}
      register: route_cmd
      delegate_to: localhost

    - name: Register the controller outgoing ip for each target
      ansible.builtin.set_fact:
        controller_ip: "{{ route_cmd.stdout_lines[0] | regex_replace('^.* src (\\d*(\\.\\d*){3}).*$', '\\1') }}"

    - name: Show result
      ansible.builtin.debug:
        msg: "I would connect from target {{ inventory_hostname }} ({{ ansible_host }}) 
          to controller using ip {{ controller_ip }}"
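For illustration, the `regex_replace` above pulls the `src` address out of the first line of `ip route get` output; a plain-shell equivalent using `sed`, with a hypothetical output line:

```shell
# Hypothetical first line of `ip route get 192.168.0.10` on the controller:
line='192.168.0.10 dev eth0 src 192.168.0.5 uid 1000'
# Extract the src address, mirroring the regex_replace filter in the playbook:
echo "$line" | sed -E 's/^.* src ([0-9]*(\.[0-9]*){3}).*$/\1/'
# → 192.168.0.5
```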
Zeitounator
  • Thanks for the reply. The first task errors with `Error: any valid prefix is expected rather than "localhost".`; it seems Ubuntu doesn't have a localhost entry there... you can get a list of addresses with `ip route show table local`, but these are the same addresses accessible from `ansible_all_ipv4_addresses` and don't help determine which NIC/address is used for the outbound connection to the target hosts – gratz Jun 02 '23 at 10:37
  • Why do you have localhost in your inventory? This is what is causing the issue. You do not want localhost matching the `all` group, at least not in the above case. For my example to work you need only remote targets, and each needs an `ansible_host` entry which is an IP (which is your current setup if I understood correctly). – Zeitounator Jun 02 '23 at 11:41
  • Ahh I see what you mean. I don't have localhost in my inventory but failed to spot that you had specified ansible_host for each target host, so the idea is that it determines the route (outgoing address) for each host regardless of which network they're on. I'll give this a try! – gratz Jun 02 '23 at 12:30
  • Note that if you are sure that all targets "see" the controller with the same ip, you can add `run_once: true` on the first two tasks. – Zeitounator Jun 02 '23 at 12:52

It seems as though the most robust way to determine the public IP of the Control Node ... would be to determine the originating IP of the established connection ... on the Manged Node - Is there a way to do this?

Yes, of course, at least if the Managed Nodes are Linux only. See, for example, Find the IP address of the client in an SSH session; a minimal example playbook:

---
- hosts: test
  become: false
  gather_facts: false

  tasks:

  - name: Gather Control Node Connection IP
    shell:
      cmd:  who | cut -d "(" -f2 | cut -d ")" -f1
    changed_when: false
    check_mode: false
    register: connected_from

  - name: Show Connected From
    debug:
      msg: "{{ connected_from.stdout }}"
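The `who | cut ...` pipeline above extracts the address between the parentheses in the `who` output; a plain-shell sketch with a hypothetical session line:

```shell
# Hypothetical `who` output line for an SSH session:
line='admin    pts/0        2023-06-01 16:25 (192.168.0.5)'
# Take everything between the parentheses, as in the task above:
echo "$line" | cut -d "(" -f2 | cut -d ")" -f1
# → 192.168.0.5
```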

or, even simpler, if Ansible facts are gathered

---
- hosts: test
  become: false
  gather_facts: true
  gather_subset:
    - "env"
    - "!all"
    - "!min"

  tasks:

  - name: Show Connected From
    debug:
      msg: "{{ ansible_env.SSH_CONNECTION.split(' ') | first }}"
    # when: ansible_os_family != 'Windows' # and if more facts were gathered

would result in the requested information.

TASK [Show Connected From] ******
ok: [test.example.com] =>
  msg: 192.0.2.1
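The `split` filter relies on the format of `SSH_CONNECTION`, which holds the client IP as its first space-separated field; a plain-shell sketch with hypothetical values:

```shell
# SSH_CONNECTION has the form "<client-ip> <client-port> <server-ip> <server-port>".
# Hypothetical values for illustration:
SSH_CONNECTION='192.168.0.5 52514 192.168.0.10 22'
# The first field is the IP the Control Node connected from:
echo "$SSH_CONNECTION" | cut -d ' ' -f1
# → 192.168.0.5
```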

Background Information

Note that this approach will not work for a Managed Node running a Windows OS with a WinRM connection.

U880D