I want to execute a Popper workflow on a Linux HPC (high-performance computing) cluster where I don't have admin/sudo rights. I know that I should use Singularity instead of Docker because Singularity is designed to run without sudo.

However, singularity build needs sudo privileges unless it is executed in fakeroot/rootless mode.
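For illustration, a minimal sketch of the two build modes (my_image.def is a hypothetical definition file, not part of my workflow):

sudo singularity build my_image.sif my_image.def          # classic build, needs sudo
singularity build --fakeroot my_image.sif my_image.def    # rootless, needs a subuid/subgid mapping for your user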


This is what I have done on the HPC login node:

  1. I installed Spack (0.15.4) and Singularity (3.6.1):
git clone --depth=1 https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh
spack install singularity
spack load singularity
  2. I installed Popper (2.7.0) in a virtual environment:
python3 -m venv ~/popper
~/popper/bin/pip install popper
  3. I created an example workflow in ~/test/wf.yml:
steps:
  - uses: "docker://alpine:3.11"
    args: ["echo", "Hello world!"]
  - uses: "./my_image/"
    args: ["Hello number two!"]

With ~/test/my_image/Dockerfile:

FROM alpine:3.11
ENTRYPOINT ["echo"]
  4. I tried to run the two steps of the Popper workflow on the login node:
$ cd ~/test
$ ~/popper/bin/popper run --engine singularity --file wf.yml 1
[1] singularity pull popper_1_4093d631.sif docker://alpine:3.11
[1] singularity run popper_1_4093d631.sif ['echo', 'Hello world!']
ERROR  : Failed to create user namespace: user namespace disabled
ERROR: Step '1' failed ('1') !

$ ~/popper/bin/popper run --engine singularity --file wf.yml 2
[2] singularity build popper_2_4093d631.sif /home/bikfh/traylor/test/./my_image/
[sudo] password for traylor:

So both steps fail.


My questions:

  • For an image from Docker Hub: How do I enable “user namespace”?
  • For a custom image: How do I build an image without sudo and run the container?

1 Answer


For an image from Docker Hub: How do I enable “user namespace”?

I found that user namespaces must already be enabled on the host machine; a way to check this is shown below.
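A quick sketch of such a check (the exact kernel settings vary by distribution, so treat this as an approximation rather than a definitive test):

cat /proc/sys/user/max_user_namespaces           # should exist and be greater than 0
unshare --user --map-root-user true && echo "user namespaces work"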

In the case of the cluster computer I am using (Frankfurt Goethe HLR), user namespaces are only enabled on the compute nodes, not on the login node. That's why it didn't work for me.

So I need to submit the job through SLURM (here only the first step, which uses a container from Docker Hub):

 ~/popper/bin/popper run --engine singularity --file wf.yml --config popper_config.yml 1

popper_config.yml defines the options for SLURM's sbatch (see the Popper docs). They depend on your cluster; in my case it looks like this:

resource_manager:
  name: slurm
  options:
    "1": # The default step ID is a number and needs quotes here.
      nodes: 1
      mem-per-cpu: 10 # MB
      ntasks: 1
      partition: test
      time: "00:01:00"
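As far as I understand, these options are passed through to sbatch, so the submission corresponds roughly to the following (a sketch, not actual Popper output; the job script placeholder is hypothetical):

sbatch --nodes=1 --mem-per-cpu=10 --ntasks=1 --partition=test --time=00:01:00 <popper-generated job script>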

For a custom image: How do I build an image without sudo and run the container?

Trying to apply the same procedure to step 2, which has a custom Dockerfile, fails with this message:

FATAL:   could not use fakeroot: no mapping entry found in /etc/subuid
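Apparently --fakeroot needs a subordinate UID/GID range for the user, and only an administrator can add one. A quick sketch for checking whether such a mapping exists:

grep "^$USER:" /etc/subuid /etc/subgid || echo "no mapping, so --fakeroot will fail"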

I tried to create the .sif file (Singularity image) with Popper on another computer and copy it from ~/.cache/popper/singularity/... over to the cluster machine. Unfortunately, Popper seems to clear that cache folder, so the .sif image doesn’t persist.
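For reference, the attempted copy looked roughly like this (the cache subdirectory and hash are placeholders, not verified paths):

# on the machine where the build succeeded
scp ~/.cache/popper/singularity/<subdir>/popper_2_<hash>.sif \
    cluster:~/.cache/popper/singularity/<subdir>/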
