
I'm using the community.vmware.vmware_guest_powerstate module from the community.vmware collection for Ansible to start VMs.

The problem is that starting one VM can take 2-5 seconds, which makes it very inefficient when I want to start 50 VMs ...

Is there any way to make it in parallel?

The playbook:

- hosts: localhost
  gather_facts: false
  collections:
    - community.vmware
  vars:
    certvalidate: "no"
    server_url: "vc01.x.com"
    username: "{{ lookup('ansible.builtin.env', 'API_USER', default=Undefined) }}"
    password: "{{ lookup('ansible.builtin.env', 'API_PASS', default=Undefined) }}"
  tasks:
    - name: "setting state={{ requested_state }} in vcenter"
      community.vmware.vmware_guest_powerstate:
        username: "{{ lookup('ansible.builtin.env', 'API_USER', default=Undefined) }}"
        password: "{{ lookup('ansible.builtin.env', 'API_PASS', default=Undefined) }}"
        hostname: "{{ server_url }}"
        datacenter: "DC1"
        validate_certs: no
        name: "{{ item }}"
        state: "powered-on"
      loop: "{{ hostlist }}"

This is Ansible's output: (every line can take 2-5 sec ...)

TASK [setting state=powered-on in vcenter] ************************************************************************************************************
Monday 19 September 2022  11:17:59 +0000 (0:00:00.029)       0:00:08.157 ****** 
changed: [localhost] => (item=x1.com)
changed: [localhost] => (item=x2.com)
changed: [localhost] => (item=x3.com)
changed: [localhost] => (item=x4.com)
changed: [localhost] => (item=x5.com)
changed: [localhost] => (item=x6.com)
changed: [localhost] => (item=x7.com)
U880D
chenchuk
    Try [async](https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html#asynchronous-actions-and-polling). – Vladimir Botka Sep 19 '22 at 13:14
  • @VladimirBotka async will just keep ansible running without waiting, but the process will still start them one by one ... (so the last VM in a huge list still suffers ...) – chenchuk Oct 04 '22 at 12:14

2 Answers


... starting one VM can take 2-5 seconds, which makes it very inefficient when I want to start 50 VMs ...

Right, this is the usual behavior.

Is there any way to make it in parallel?

As Vladimir Botka already mentioned in the comments, asynchronous actions and polling are worth a try, since:

By default Ansible runs tasks synchronously, holding the connection to the remote node open until the action is completed. This means within a playbook, each task blocks the next task by default, meaning subsequent tasks will not run until the current task completes. This behavior can create challenges.

In your case this happens both for the task itself and for each iteration of the loop.
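For reference, a minimal sketch of the fire-and-forget pattern applied to the question's playbook (the async/retry values are illustrative only): each loop iteration is dispatched without waiting, and a second task then collects the results via async_status. Note that, per the timing test further down in this answer, each dispatch still pays the per-task startup overhead:

- name: "Power on VMs without waiting for each result"
  community.vmware.vmware_guest_powerstate:
    hostname: "{{ server_url }}"
    username: "{{ username }}"
    password: "{{ password }}"
    datacenter: "DC1"
    validate_certs: no
    name: "{{ item }}"
    state: "powered-on"
  loop: "{{ hostlist }}"
  async: 120      # allow up to 120s per VM (illustrative)
  poll: 0         # fire and forget, do not wait here
  register: power_jobs

- name: "Wait for all power-on jobs to finish"
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ power_jobs.results }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 5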

The best way to address the use case and eliminate the cause would probably be to enhance the module code.

According to the documentation, vmware_guest_powerstate module – Manages power states of virtual machines in vCenter, and the source, ansible-collections/community.vmware/blob/main/plugins/modules/vmware_guest_powerstate.py, the parameter name: takes one name for one VM only. If it were possible to provide a list of VM names "{{ hostlist }}" to the module directly, there would be only one connection attempt, and the loop would happen on the Remote Node instead of the Controller Node (even if both are localhost in this case).

To do so, one would need to change the argument spec to name=dict(type='list') instead of str and implement all the remaining logic, error handling and responses.
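To illustrate the idea, this is what the task could look like if the module accepted a list — purely hypothetical, since the current module's name parameter is a string and this would only work after the module change described above:

    # Hypothetical: requires name=dict(type='list') in the module's argument spec
    - name: "Power on all VMs in one call"
      community.vmware.vmware_guest_powerstate:
        hostname: "{{ server_url }}"
        username: "{{ username }}"
        password: "{{ password }}"
        datacenter: "DC1"
        validate_certs: no
        name: "{{ hostlist }}"    # whole list, one vCenter connection
        state: "powered-on"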

Further Documentation

Since the community vmware_guest_powerstate module imports and utilizes additional libraries, their documentation is also worth a look.

Further Q&A and Tests

Meanwhile, I've set up another short performance test to simulate the behavior you are observing

---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:

  - name: Gather subdirectories
    shell:
      cmd: "ls -d /home/{{ ansible_user }}/*/"
      warn: false
    register: subdirs

  - name: Gather stats (loop) async
    shell: "stat {{ item }}"
    loop: "{{ subdirs.stdout_lines }}"
    loop_control:
      label: "{{ item }}"
    async: 5
    poll: 0

  - name: Gather stats (loop) serial
    shell: "stat {{ item }}"
    loop: "{{ subdirs.stdout_lines }}"
    loop_control:
      label: "{{ item }}"

  - name: Gather stats (list)
    shell: "stat {% raw %}{{% endraw %}{{ subdirs.stdout_lines | join(',') }}{% raw %}}{% endraw %}"
    register: result

  - name: Show result
    debug:
      var: result.stdout

and found that adding async introduces some additional overhead, resulting in an even longer execution time.

Gather subdirectories ------------------------ 0.57s
Gather stats (loop) async -------------------- 3.99s
Gather stats (loop) serial ------------------- 3.79s
Gather stats (list) -------------------------- 0.45s
Show result ---------------------------------- 0.07s

This is because of the "short" runtime of the executed task in comparison to the "long" time spent establishing a connection. As the documentation points out:

For example, a task may take longer to complete than the SSH session allows for, causing a timeout. Or you may want a long-running process to execute in the background while you perform other tasks concurrently. Asynchronous mode lets you control how long-running tasks execute.

one may take advantage of async in the case of long-running processes and tasks.

With respect to the answer given by @Sonclay, I've performed another test with

---
- hosts: all
  become: false
  gather_facts: false

  tasks:

  - name: Gather subdirectories
    shell:
      cmd: "ls -d /home/{{ ansible_user }}/*/"
      warn: false
    register: subdirs
    delegate_to: localhost

  - name: Gather stats (loop) serial
    shell: "stat {{ item }}"
    loop: "{{ subdirs.stdout_lines }}"
    loop_control:
      label: "{{ item }}"
    delegate_to: localhost

whereby a call with

ansible-playbook -i "test1.example.com,test2.example.com,test3.example.com" --forks 3 test.yml

will result in an execution time of

Gather subdirectories ------------------------ 0.72s
Gather stats (loop) -------------------------- 0.39s

so it seems to be worth a try.

U880D

try this instead...

- hosts: all
  gather_facts: false
  collections:
    - community.vmware
  vars:
    certvalidate: "no"
    server_url: "vc01.x.com"
    username: "{{ lookup('ansible.builtin.env', 'API_USER', default=Undefined) }}"
    password: "{{ lookup('ansible.builtin.env', 'API_PASS', default=Undefined) }}"
  tasks:
    - name: "setting state={{ requested_state }} in vcenter"
      community.vmware.vmware_guest_powerstate:
        username: "{{ username }}"
        password: "{{ password }}"
        hostname: "{{ server_url }}"
        datacenter: "DC1"
        validate_certs: no
        name: "{{ inventory_hostname }}"
        state: "powered-on"
      delegate_to: localhost

Then run it with your hostlist as the inventory and use forks:

ansible-playbook -i x1.com,x2.com,x3.com,... --forks 10 play.yml
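If the hostlist lives in a file (a hypothetical hosts.txt with one VM name per line), the comma-separated inventory string can be built inline instead of typing it out:

```shell
# hosts.txt is assumed to contain one VM name per line, e.g. x1.com
# paste -s joins all lines into one; -d, uses a comma as the delimiter
# the trailing comma tells Ansible this is an inline host list even for a single host
ansible-playbook -i "$(paste -sd, hosts.txt)," --forks 10 play.yml
```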

Sonclay