I can't use nmcli inside a Kubernetes cluster, while the same container works locally through podman.
For this I use a gitlab-runner deployed in a Kubernetes cluster.
$ id
uid=0(root) gid=0(root) groups=0(root)
$ ks-libvirt --help
Can't exec "nmcli": No such file or directory at (eval 61) line 149.
Can't open 'nmcli' with mode '-|': 'No such file or directory' at /usr/bin/ks-libvirt line 537
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
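The error suggests that nmcli is simply not on the PATH inside the job container. As a first diagnostic, a minimal sketch of a check (assuming a dnf-based job image such as rockylinux:8.5, which does not ship NetworkManager by default) would be:

```shell
# Sketch: check whether nmcli exists in the job image. If it is missing,
# the ks-libvirt failure is an image-content problem, not a runner problem.
if command -v nmcli >/dev/null 2>&1; then
  echo "nmcli present at $(command -v nmcli)"
else
  echo "nmcli missing: dnf install -y NetworkManager would provide it"
fi
```

Running this as a first line of the job script distinguishes a missing package from a broken D-Bus mount.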
The gitlab-runner is configured as privileged, and the /var/run/dbus volume is mounted.
Below is part of the configuration:
## Configure integrated Prometheus metrics exporter
## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
metrics:
  enabled: true

service:
  enabled: true

serviceMonitor:
  enabled: true

## Configuration for the Pods that the runner launches for each new job
##
runners:
  ## Default container image to use for builds when none is specified
  ##
  image: rockylinux:8.5
  privileged: true
  tags: "privileged,large"
  runUntagged: false

  ## Configure environment variables that will be injected into the pods that are
  ## created while the build is running. These variables are passed as parameters
  ## to the command.
  ##
  ## Note that (see below) are only present in the runner pod, not the pods that
  ## are created for each build.
  ##
  ## ref: https://docs.gitlab.com/runner/commands/#gitlab-runner-register
  ##
  env:
    HOME: /tmp

  config: |
    [[runners]]
      [runners.kubernetes]
        hostNetwork = true
        privileged = true
        # build container
        cpu_limit = "2"
        memory_limit = "5Gi"
        # service containers
        service_cpu_limit = "1"
        service_memory_limit = "1Gi"
        # helper container
        helper_cpu_limit = "1"
        helper_memory_limit = "1Gi"
        [runners.kubernetes.volumes]
          [[runners.kubernetes.volumes.host_path]]
            name = "var-dbus"
            host_path = "/var/run/dbus"
            mount_path = "/var/run/dbus"
            read_only = false
          [[runners.kubernetes.volumes.host_path]]
            name = "run-dbus"
            host_path = "/run/dbus"
            mount_path = "/run/dbus"
            read_only = false
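To confirm that the host_path mounts above actually reach the job container, a sketch of an in-job check (paths taken from the mount_path entries in the config; nmcli talks to NetworkManager over this socket) might be:

```shell
# Sketch: verify that the host D-Bus socket is visible inside the job pod
# at either of the two mount paths configured above.
found=0
for p in /run/dbus/system_bus_socket /var/run/dbus/system_bus_socket; do
  if [ -S "$p" ]; then
    echo "dbus socket visible at $p"
    found=1
  fi
done
[ "$found" -eq 1 ] || echo "no dbus socket mounted; nmcli cannot reach NetworkManager"
```

Note that even with the socket mounted, nmcli still has to be installed in the image itself.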
While with podman:
+ root
+ privileged
+ --net host
it works:
# podman run --rm --name qcow --net host --privileged -v /var/run/dbus:/var/run/dbus -it localhost/fedora_qcow /bin/bash
...
[root@container /]# ks-libvirt --help
Usage:
ks-libvirt [options] kickstart-file
At the end of install, if the VM is not shut down with --off and the
guest agent is not excluded with --noaddga, the script waits until the
VM is up and an IPv4 address is configured; it will clean any previous
SSH host keys for that IP and then print the IP, so if you have an SSH
key defined, you can do:
ssh -l root $(ks-libvirt kickstart-file)
Options:
--addga | -a
Add qemu-guest-agent to %packages; default is to do this, use
--noaddga to disable. Without the agent, the hypervisor cannot
get the IP of the VM (or do other VM management).
--anaconda | -A arguments
Additional anaconda boot arguments
--config | -C config
Config file for defaults; default is $HOME/.virtinst.cf
--cpu | -c count
VM CPU cores; default is 1
--disk | -d GB
VM disk size in gigabytes; default is 6
--disk2 GB
VM second disk size in gigabytes; default is to not use a second
disk (this is mostly just useful for testing kickstart RAID
handling)
--dns DNS-IPs
Set the DNS server(s) (can be specified more than once for
multiple servers); default: copy host DNS config when IPv4
address is set
--dumpks | -D
Generate a modified kickstart file and dump to standard out
(don't build VM)
--gw IPv4-gateway
Set the IPv4 gateway
--hostname | -h FQDN
Set the hostname; default is to not set unless network is set,
then use the VM name
--ip IPv4-address/mask
Set the IPv4 address and netmask (in bits, e.g. 10.0.0.1/24);
default is to try DHCP (if network needed)
--iso | -i ISO
ISO to boot from; default is pulled from KS or to use URL
instead. Handles a local ISO file (will be uploaded to same pool
as VM storage if needed), or pool/volume for an ISO already in a
storage pool.
--libvirt | -l URL
Connection to libvirt; default is $VIRTSH_DEFAULT_CONNECT_UID or
qemu:///system
--mapfile | -m file
URL map file to use different source repos. The format of the
file is one entry per line with a pair of URLs separated by a
space. The first URL is the original (which can be a mirrorlist
or metalink) followed by a target URL to replace it with
(mirrorlist/metalink are turned into direct url entries). The
default is $HOME/.virtinst-map
--name | -n name
VM name; default is KS file name minus any leading "ks-"
--net | -N interface
Bridge network interface to attach to; default is interface with
default route
--off | -O
Leave the VM off at the end of install
--pool | -p pool
Storage pool name; use pool default by default
--os | -o OS
OS name, used to set VM hardware config; default is autodetect
--quiet | -q
Be very quiet - only show errors and IP at end
--ram | -r MB
VM RAM size in megabytes; default is 2048 unless specified in
the KS
--screen | -s
Open the VM console screen during install
--secureboot | -B
Enable Secure Boot (implies UEFI).
--securepath [path]
Specify the path to the Secure Boot loader/NVRAM files (default
is /usr/share/edk2/ovmf)
--serial | -S
Add a serial console; default is to do this, use --noserial to
disable
--ssh Add found SSH key(s) to the installed system; default is to do
this, use --nossh to disable
--tpm Add TPM device
--uefi | -u
Use UEFI boot instead of BIOS
--vdelete
Delete an existing VM with the same name before creating new
(NOTE: will not ask for confirmation!)
--verbose | -v
Be more verbose
--virtinst | -V arguments
Additional virt-install arguments (can be used more than once)