
I am trying to get xeus-cling to work in an OCI image; currently I am using buildah + podman. I run into two problems:

  1. I try to create an environment with mamba/conda, but it requires conda init bash to run and then the shell to be restarted. It's hard to restart the shell while the image is building; I have tried multi-stage builds, exit, and running /bin/bash. I noticed conda checks whether certain files are configured in a certain way, including /home/jovyan/.bashrc, so I cat'd out the modified .bashrc and COPY'd it into the image -- no dice. activate still tells me I need to run init.
  2. I have tried installing it without the environment, but I keep getting the error:
Encountered problems while solving:
 - nothing provides system needed by clangdev-5.0.0-default_0

I don't know which channel clangdev-5.0.0-default_0 specifically lives in, or what provides the missing "system" dependency (hence the bunch of commented-out C++-related packages in the Dockerfile).
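For reference, the activation attempt that fails during the build looks like this (these are the lines now commented out in the Dockerfile below):

```
# fails at build time: conda activate complains that conda init has not been run
RUN conda init bash
RUN conda create -n cling
RUN conda activate cling
```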

I have even got the notebook to run a couple of times (can't remember what I did), but did not see the option to create a C++ notebook. I am wondering what the cause of that may be.

I have put my Dockerfile in its current state below (I went a little crazy with Ctrl+Z)

Thank you for reading

  • TFB :)
FROM docker.io/jupyter/scipy-notebook:latest

SHELL [ "/bin/bash", "-c" ]

RUN mamba install -y xeus -c conda-forge
RUN mamba install -y jupyterlab -c conda-forge

# RUN conda install gcc7 -c conda-forge
# RUN mamba install -y -c conda-forge clangdev
# RUN mamba install -y -c conda-forge/label/llvm_rc clangdev
# RUN mamba install -y -c conda-forge/label/cf202003 clangdev
# RUN mamba install -y -c conda-forge/label/gcc7 clangdev
# RUN mamba install -y -c conda-forge/label/broken clangdev
# RUN mamba install -y -c conda-forge/label/cf201901 clangdev


RUN mamba install -y -c conda-forge jupyter_contrib_nbextensions 
# RUN conda init bash
# RUN conda create -n cling
# RUN conda activate cling
RUN mamba install -y xeus-cling -c conda-forge
BMitch

2 Answers


Starting from the same image, a minimal working example of Jupyter with xeus-cling kernel capabilities is:

Dockerfile

FROM docker.io/jupyter/scipy-notebook:latest

RUN mamba install -yn base nb_conda_kernels \
    && mamba create -yn xeus-cling xeus-cling \
    && mamba clean -qafy

Build and run

docker build -t jupyter-xeus:latest .
docker run -p 8888:8888 jupyter-xeus:latest

Then, from Jupyter I can create C++11, C++14, and C++17 kernels. Here is a C++14 one after running some trivial cells:

[screenshot: a C++14 notebook with a few executed cells]

Additional Notes

These images have Jupyter installed in the base environment. If you want alternative kernels installed in other environments to be picked up automatically, add nb_conda_kernels to the base environment.

xeus-cling is then installed into a separate Conda environment.

One should almost always chain Docker RUN commands, since this avoids generating unnecessary intermediate layers.
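As an illustration (package names here are just examples), compare the separate installs from the question with a single chained layer:

```
# separate layers: each RUN snapshots the filesystem, package caches included
# RUN mamba install -y xeus -c conda-forge
# RUN mamba install -y jupyterlab -c conda-forge

# one chained layer: install and clean up in the same layer that created the caches
RUN mamba install -y -c conda-forge xeus jupyterlab \
    && mamba clean -qafy
```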

If you want additional software available in an environment, say boost in the xeus-cling environment, then include it at creation time - do not run multiple conda install commands! E.g.,

RUN mamba install -yn base nb_conda_kernels \
    && mamba create -yn xeus-cling xeus-cling boost \
    && mamba clean -qafy

The mamba clean -qafy helps minimize the size of the image.

merv

There is a proper answer above, but I just wanted to post a "hack" as well. You may have noticed that if you enter the Jupyter interface there is a New -> Terminal option. Installing xeus-cling worked from that terminal, so I just used it to install xeus-cling, and it wound up working. It can probably (and preferably) be done with mamba, but some of my other hacks at the time worked with conda.

FROM docker.io/jupyter/scipy-notebook:latest

RUN pip install jupyter-console
RUN conda create -y -n xeus-cling
RUN jupyter console source activate xeus-cling
RUN conda install -y -c conda-forge xeus-cling
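(I have not verified this, but passing the environment name explicitly should make the install land in the named environment rather than base, with no activation needed:)

```
RUN conda create -y -n xeus-cling
# -n targets the named environment directly, so no activate is required
RUN conda install -y -n xeus-cling -c conda-forge xeus-cling
```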
  • This doesn’t do what you think. Every Docker RUN command is a new shell, so this is still installing into **base** – merv Feb 04 '22 at 03:12