I have an on-prem Kubernetes v1.24.6 cluster with SELinux enabled on the worker nodes (set to permissive mode), which are running containerd version 1.6.15.
[root@master-1 ~]# kubelet --version
Kubernetes v1.24.6
[root@master-1 ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master-1   Ready    control-plane   2d4h   v1.24.6
vm1        Ready    <none>          2d4h   v1.24.6
vm2        Ready    <none>          2d4h   v1.24.6
vm3        Ready    <none>          2d4h   v1.24.6
[root@master-1 dummy]# crictl --version
crictl version v1.24.0
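For completeness: since SELinux is permissive on the workers, denials are only logged, not enforced, so the custom label should still be assigned to the process. This is how I confirm the mode on a worker node:
[root@vm1 ~]# getenforce
Permissive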
I am trying to launch a pod with a container that has a custom SELinux type label, but the label is not applied to the container after the pod is deployed.
The manifest for my pod:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  nodeName: vm1
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: 172.27.45.76:5000/busybox:1.28
    command: [ "sh", "-c", "sleep 10h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      runAsUser: 4000
      allowPrivilegeEscalation: false
      seLinuxOptions:
        type: dummy_container.process
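After loading the policy module (workflow described next), I apply the manifest and check the context of the container's main process from inside the pod; reading /proc/1/attr/current should print the SELinux context. The manifest filename is just what I happen to use:
[root@master-1 dummy]# kubectl apply -f security-context-demo.yaml
[root@master-1 dummy]# kubectl exec security-context-demo -- cat /proc/1/attr/current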
My workflow: I use crictl inspect to get the JSON for my running container(s), run the udica tool on that JSON to generate the corresponding SELinux policy .cil file, and load the custom policy module with "semodule -i". Then I relaunch the container with the label that udica suggests.
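Concretely, the steps on the worker look roughly like this; <container-id> is a placeholder for the ID that crictl ps reports for the sec-ctx-demo container, the policy name dummy_container is my choice, and the exact template list for semodule comes from udica's own output:
[root@vm1 ~]# crictl ps | grep sec-ctx-demo
[root@vm1 ~]# crictl inspect <container-id> > dummy_container.json
[root@vm1 ~]# udica -j dummy_container.json dummy_container
[root@vm1 ~]# semodule -i dummy_container.cil /usr/share/udica/templates/base_container.cil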
The container ends up running with a generic SELinux label (spc_t or container_t) instead of the specified one, i.e. "dummy_container.process" in this case.
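To see which label containerd actually requested for the container, I look at the OCI runtime spec it generated; the jq path below reflects my understanding of crictl's inspect output layout and may differ between versions:
[root@vm1 ~]# crictl inspect <container-id> | jq '.info.runtimeSpec.process.selinuxLabel'
[root@vm1 ~]# ps -efZ | grep 'sleep 10h'   # the context appears in the first (Z) column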
To rule out a problem with the policy itself, I created a test container using podman, used podman inspect to get the JSON file for the container, and used the udica tool to generate the corresponding SELinux policy .cil file.
[root@master-1 dummy]# podman run --env container=podman -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -d -it ubi8 bash
[root@master-1 dummy]# podman inspect 1765497df297 > test_container.json
[root@master-1 dummy]# udica -j test_container.json test_container
Policy test_container created!
Please load these modules using:
# semodule -i test_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}
Restart the container with: "--security-opt label=type:test_container.process" parameter
[root@master-1 dummy]# ls test_container.*
test_container.cil test_container.json
Then I stopped the container, loaded the custom policy module, and relaunched the container with the suggested label, and I can observe that the process is running with the expected label.
[root@master-1 dummy]# semodule -i test_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}
[root@master-1 dummy]# podman run --security-opt label=type:test_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -d -it ubi8 bash
[root@master-1 dummy]# ps -efZ | grep -i test
system_u:system_r:test_container.process:s0:c275,c294 root 46403 46391 0 Mar04 pts/0 00:00:00 bash
So is the problem with the container runtime or something else?
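One thing I have not yet verified: as far as I understand, containerd's CRI plugin only honors seLinuxOptions when enable_selinux is set to true under [plugins."io.containerd.grpc.v1.cri"] in /etc/containerd/config.toml, and it defaults to false. I don't know whether this is the cause here, but this is how I would check on a worker:
[root@vm1 ~]# grep -n 'enable_selinux' /etc/containerd/config.toml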