
I can do each step separately, but I cannot combine them, since I don't know the disk device name.

My configuration:

- name: Create Virtual Machine
  azure_rm_virtualmachine:
    resource_group: "{{ resource_group }}"
    name: "{{ item }}"
    vm_size: "{{ flavor }}"
    managed_disk_type: "{{ disks.disk_type }}"
    network_interface_names: "NIC-{{ item }}"
    ssh_password_enabled: false
    admin_username: "{{ cloud_config.admin_username }}"
    image:
      offer: "{{ image.offer }}"
      publisher: "{{ image.publisher }}"
      sku: "{{ image.sku }}"
      version: "{{ image.version }}"
    tags:
      Node: "{{ tags.Node }}"
    ssh_public_keys:
      - path: "/home/{{ cloud_config.admin_username }}/.ssh/authorized_keys"
        key_data: "{{ cloud_config.ssh.publickey }}"
    data_disks:
      - lun: 0
        disk_size_gb: "{{ disks.disk_size }}"
        caching: "{{ disks.caching }}"
        managed_disk_type: "{{ disks.disk_type }}"

The other part formats and mounts the disk:

- name: partition new disk
  shell: 'echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdc'
  args:
    executable: /bin/bash

- name: Makes file system on block device
  filesystem:
    fstype: xfs
    dev: /dev/sdc1

- name: new dir to mount
  file: path=/hadoop state=directory

- name: mount the dir
  mount:
    path: /hadoop
    src: /dev/sdc1
    fstype: xfs
    state: mounted

My question: the device name cannot be configured; it may come up as /dev/sdc or /dev/sdb. For AWS EC2 I can set volumes[device_name], but I can't find such a field for Azure. How can I fix this?

Roy Zeng

4 Answers


/dev/sdb is used for the temporary disk by default, but sometimes it was taken by my data disk instead. I found a workaround that checks the device name before formatting. I know it's not an elegant way.

- name: check device name which should be parted
  shell: parted -l
  register: device_name

- name: Show middle device name 
  debug:
    msg: "{{ device_name.stderr.split(':')[1] }}"
  register: mid_device

- name: Display real device name 
  debug: 
    msg: "{{ mid_device.msg.split()[0] }}"
  register: real_device

- name: partition new disk
  shell: 'echo -e "n\np\n1\n\n\nw" | fdisk {{ real_device.msg }}'
  args:
    executable: /bin/bash

- name: Makes file system on block device
  filesystem:
    fstype: xfs
    dev: "{{ real_device.msg }}1"

- name: new dir to mount
  file: path=/hadoop state=directory

- name: mount the dir
  mount:
    path: /hadoop
    src: "{{ real_device.msg }}1"
    fstype: xfs
    state: mounted
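The two debug tasks above just slice the stderr text from parted -l. The same extraction can be sketched in plain shell against a sample error line (the sample message below is illustrative; the exact wording depends on your parted version):

```shell
# Hypothetical stderr line printed by `parted -l` for a disk with no label
sample='Error: /dev/sdc: unrecognised disk label'

# Mirror the Jinja2 pipeline: split(':')[1], then split()[0]
mid=$(printf '%s' "$sample" | cut -d: -f2)     # -> " /dev/sdc"
real=$(printf '%s' "$mid" | awk '{print $1}')  # -> "/dev/sdc"
echo "$real"
```

Note this only works when exactly one disk is unpartitioned; with several unlabeled disks, parted -l emits one error line per disk and the split picks up only the first.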
Roy Zeng

We can use the symlinks under /dev/disk/azure instead of /dev/sdb to format the data disk.

You can run "tree /dev/disk/azure" to see the detailed structure.

Here is my example for formatting one data disk; if there are more disks, change the symlink to /dev/disk/azure/scsi1/lun1, /dev/disk/azure/scsi1/lun2, /dev/disk/azure/scsi1/lun3, and so on.

- name: use parted to make label
  shell: "parted /dev/disk/azure/scsi1/lun0 mklabel msdos"
  args:
    executable: /bin/bash

- name: partition new disk
  shell: "parted /dev/disk/azure/scsi1/lun0 mkpart primary 1 100%"
  args:
    executable: /bin/bash

- name: inform the OS of partition table changes (partprobe)
  command: partprobe

- name: Makes file system on block device with xfs file system
  filesystem:
    fstype: xfs
    dev: /dev/disk/azure/scsi1/lun0-part1

- name: create data dir for mounting
  file: path=/data state=directory

- name: Get UUID of the new filesystem
  shell: |
    blkid -s UUID -o value $(readlink -f /dev/disk/azure/scsi1/lun0-part1)
  register: uuid

- name: show real uuid
  debug:
    msg: "{{ uuid.stdout }}"

- name: mount the dir
  mount:
    path: /data
    src: "UUID={{ uuid.stdout }}"
    fstype: xfs
    state: mounted

- name: check disk status
  shell: df -h | grep /dev/sd
  register: df2_status

- debug: var=df2_status.stdout_lines
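The multi-disk case mentioned above can be sketched as a loop over LUN numbers. This is only a sketch: the lun_mounts variable is hypothetical, and parted's --script flags are combined into one call.

```yaml
# Hypothetical variable, e.g. in group_vars:
# lun_mounts:
#   - { lun: 0, path: /data0 }
#   - { lun: 1, path: /data1 }

- name: label and partition each data disk
  shell: "parted --script /dev/disk/azure/scsi1/lun{{ item.lun }} mklabel msdos mkpart primary 1 100%"
  args:
    executable: /bin/bash
  loop: "{{ lun_mounts }}"

- name: make a filesystem on each partition
  filesystem:
    fstype: xfs
    dev: "/dev/disk/azure/scsi1/lun{{ item.lun }}-part1"
  loop: "{{ lun_mounts }}"

- name: mount each partition
  mount:
    path: "{{ item.path }}"
    src: "/dev/disk/azure/scsi1/lun{{ item.lun }}-part1"
    fstype: xfs
    state: mounted
  loop: "{{ lun_mounts }}"
```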
Roy Zeng

Maybe you can try the azure_rm_managed_disk module to create the disk and then attach it to the VM. Then you have all the properties of the disk.
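A minimal sketch of that approach (in recent azure.azcollection releases the module is named azure_rm_manageddisk, and managed_by attaches the disk to an existing VM; verify the parameter names against your collection's docs):

```yaml
# Sketch: create a managed data disk and attach it to an existing VM.
- name: Create and attach a managed data disk
  azure_rm_manageddisk:
    resource_group: "{{ resource_group }}"
    name: "data-disk-{{ item }}"
    disk_size_gb: "{{ disks.disk_size }}"
    storage_account_type: "{{ disks.disk_type }}"
    managed_by: "{{ item }}"   # name of the VM to attach the disk to
```

Note this still doesn't let you pick the Linux device name; inside the guest you would still address the disk by its LUN symlink or by UUID.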

yuizhou

If you need LVM...

- name: Mount disks with logical volume management
  block:
    - name: Add disks to logical volume group
      community.general.lvg:
        vg: "{{ my_volume_group }}"
        pvs: "{{ my_physical_devices }}"

    - name: Manage logical volume
      community.general.lvol:
        vg: "{{ my_volume_group }}"
        lv: "{{ my_logical_volume }}"
        size: "{{ my_volume_size }}"

    - name: Manage mount point
      ansible.builtin.file:
        path: "{{ my_path }}"
        state: directory
        mode: '0755'

    - name: Manage file system
      community.general.filesystem:
        dev: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
        fstype: "{{ my_fstype }}"

    - name: Mount volume
      ansible.posix.mount:
        path: "{{ my_path }}"
        state: mounted
        src: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
        fstype: "{{ my_fstype }}"
        opts: defaults,nodev
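Example values for the variables above, using the Azure LUN symlinks so the device letter doesn't matter (all names here are hypothetical):

```yaml
my_volume_group: vg_data
my_physical_devices:
  - /dev/disk/azure/scsi1/lun0
  - /dev/disk/azure/scsi1/lun1
my_logical_volume: lv_data
my_volume_size: 100%VG
my_path: /data
my_fstype: xfs
```

An advantage of LVM here is that device names are irrelevant after creation: the volume group is assembled from on-disk metadata, and more LUNs can later be added to grow the volume.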
bbaassssiiee