
I've written a small playbook to run the sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product command and write the output to a result file, using the following code as a .yml:

# Check if server is vmware
---
- name: Check if server is vmware
  hosts: all
  become: yes
  #ignore_errors: yes
  gather_facts: False
  serial: 50
  #become_flags: -i
  tasks:
    - name: Run dmidecode command
      #become: yes
      shell: "sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product"
      register: upcmd

    - debug:
        msg: "{{ upcmd.stdout }}"

    - name: write to file
      lineinfile:
        path: /home/myuser/ansible/mine/vmware.out
        create: yes
        line: "{{ inventory_hostname }};{{ upcmd.stdout }}"  
      delegate_to: localhost
      #when: upcmd.stdout != ""

When running the playbook against a list of hosts I get odd results: even though the debug task shows the correct output for every host, when I check the /home/myuser/ansible/mine/vmware.out file I see only part of the entries present. Even weirder, if I run the playbook again it correctly populates the whole list, but only after running it twice. I have repeated this several times with minor tweaks without getting the expected result. Running with -v or -vv shows nothing unusual.

The output of the sudo dmidecode -t1 command:

# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.4 present.

Handle 0x0001, DMI type 1, 27 bytes
System Information
        Manufacturer: VMware, Inc.
        Product Name: VMware Virtual Platform
        Version: None
        Serial Number: VMware-42 15 27 29 d2 1d 0b 1e-ec 62 1a 6b a1 f2 af 11
        UUID: 12165229-d21d-0b0e-ec63-1b6aa2e7nf92
        Wake-up Type: Power Switch
        SKU Number: Not Specified
        Family: Not Specified
Cat Hariss

2 Answers


You are writing to the same file in parallel on localhost. I suspect you're hitting a write concurrency issue. Try the following and see if it fixes your problem:

    - name: write to file
      lineinfile:
        path: /home/myuser/ansible/mine/vmware.out
        create: yes
        line: "{{ host }};{{ hostvars[host].upcmd.stdout }}"  
      delegate_to: localhost
      run_once: true
      loop: "{{ ansible_play_hosts }}"
      loop_control:
        loop_var: host
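
If the looped lineinfile task still misbehaves, an alternative sketch (not part of the original answer) is to render the whole file in one task with the copy module and a Jinja2 loop, which sidesteps concurrent writes to the file entirely; the path and field separator are taken from the question:

```yaml
    - name: write all results in a single task
      copy:
        dest: /home/myuser/ansible/mine/vmware.out
        content: |
          {% for host in ansible_play_hosts %}
          {{ host }};{{ hostvars[host].upcmd.stdout }}
          {% endfor %}
      delegate_to: localhost
      run_once: true
```

Because the file is written exactly once, there is no per-host contention and re-running the play simply rewrites the same complete file.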
Zeitounator
  • This solution wrote the same results to vmware.out several times – Cat Hariss Nov 09 '22 at 21:04
  • Please edit your question and add a complete minimal reproducible example based on this proposition, containing the example data you have as an entry, a complete playbook to reproduce your problem, the command you used to execute that playbook, the exact result you get, and the one you expect instead. – Zeitounator Nov 10 '22 at 08:30
  • The reproducible example is the one presented here: https://github.com/ansible/ansible/issues/30413 and I've found this is a known issue: https://stackoverflow.com/questions/45716225/lineinfile-module-of-ansible-with-delegate-to-localhost-doesnt-write-all-data-t – Cat Hariss Feb 25 '23 at 16:52

From your description I understand that you'd like to find out how to check whether a server is virtual.

That information is already collected by the setup module.

---
- hosts: linux_host
  become: false
  gather_facts: true

  tasks:

  - name: Show Gathered Facts
    debug:
      msg: "{{ ansible_facts }}"

For a Linux system virtualized under MS Hyper-V, the output could contain

...
    bios_version: Hyper-V UEFI Release v1.0
...
    system_vendor: Microsoft Corporation
    uptime_seconds: 2908494
...
    userspace_architecture: x86_64
    userspace_bits: '64'
    virtualization_role: guest
    virtualization_type: VirtualPC

and the uptime in seconds is already included

uptime
... up 33 days ...

For a virtualization check only, one could restrict fact collection via gather_subset, resulting in a full output of

    gather_subset:
    - '!all'
    - '!min'
    - virtual
    module_setup: true
    virtualization_role: guest
    virtualization_type: VirtualPC
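
As a sketch of that approach (host pattern and message are placeholders, not from the original answer), a minimal playbook restricting fact gathering to the virtual subset could look like:

```yaml
---
- hosts: all
  become: false
  gather_facts: true
  gather_subset:
    - '!all'
    - '!min'
    - virtual

  tasks:
    - name: Show virtualization facts only
      debug:
        msg: "{{ ansible_facts.virtualization_type | default('unknown') }} / {{ ansible_facts.virtualization_role | default('unknown') }}"
```

On a VMware guest, virtualization_type would report VMware with a virtualization_role of guest, so the dmidecode/grep pipeline from the question becomes unnecessary.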

By Caching facts

... you have access to variables and information about all hosts even when you are only managing a small number of servers

on your Ansible Control Node. In ansible.cfg you can configure where and how they are stored and for how long.

[defaults]
fact_caching            = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout    = 86400 # seconds

This would be a minimal and simple solution without re-implementing functionality which is already there.


U880D
  • Thank you for the well-documented info, I will definitely use it, but for now I just need to find out why this method of writing to a file does not work as intended. – Cat Hariss Nov 09 '22 at 21:03