
I have jumped around for some time now trying to solve this, and I cannot seem to get it working. I have a Docker container where I set up an NVIDIA image for machine learning. I install all Python dependencies and then start with the pip package installations. I get the first error:

requests.exceptions.SSLError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))

Simple enough: I have a certificate to deal with Cisco Umbrella, and with it I can install all packages nice and easy. However, to be able to install the newest packages I need to upgrade pip, and the upgrade itself works fine. After pip is upgraded to 20.2.3, I suddenly get an error again:

Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),)) - skipping
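One plausible explanation for the difference: the Ubuntu-packaged pip 9.0.1 is patched by the distro to use the system CA store, whereas a pip upgraded from PyPI verifies against its own vendored certifi bundle, so a certificate installed via update-ca-certificates may no longer be consulted. A minimal sketch of how to inspect this and point an upgraded pip back at the system bundle (the bundle path is the standard Debian/Ubuntu one and is an assumption about this image):

```python
# Sketch: inspect the system OpenSSL trust locations and redirect an
# upgraded pip (and requests) to the system CA bundle that
# update-ca-certificates maintains.
import os
import ssl

# The system OpenSSL defaults that the distro-patched pip relies on:
paths = ssl.get_default_verify_paths()
print("OpenSSL cafile:", paths.openssl_cafile)
print("OpenSSL capath:", paths.openssl_capath)

# A pip installed from PyPI ships its own certifi bundle instead.
# These environment variables point it at the system store:
system_bundle = "/etc/ssl/certs/ca-certificates.crt"
os.environ["PIP_CERT"] = system_bundle            # equivalent to pip --cert
os.environ["REQUESTS_CA_BUNDLE"] = system_bundle  # for requests-based tools
```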

I have then googled around and tried the suggestions I stumbled upon:

Timing

I found that the system time was wrong - oddly, it had worked for the initial pip version anyway. However, correcting the time did not help the issue.

pip.conf

I added a pip.conf file with global entries for trusted hosts and for certificates. Still the same error persists.
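Roughly along these lines (the exact bundle path shown here is the usual Ubuntu one and is an assumption):

```ini
# /etc/pip.conf (sketch)
[global]
cert = /etc/ssl/certs/ca-certificates.crt
trusted-host = pypi.org
               files.pythonhosted.org
               pypi.python.org
```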

pip install

I have tried different trusted-host flags and also the cert flag, which should already be picked up from the conf file, if I understand it correctly. Nevertheless, neither method worked.

What to do

I am kind of at a loss right now. Installing the certificate in the container allows me to install packages with pip 9.0.1 (the system default), but after upgrading to pip 20.2.3 I cannot get it to work with any package. I have tried multiple pip versions, but as soon as I upgrade I lose the certificate, despite installing it with

ADD Cisco_Umbrella_Root_CA.cer /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt
RUN chmod 644 /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt
RUN update-ca-certificates --fresh

Does anybody have an idea how this can happen?

UPDATE

Curl

 RUN curl -v -k -H"Host; files.pythonhosted.org" https://files.pythonhosted.org/packages/8a/fd/bbbc569f98f47813c50a116b539d97b3b17a86ac7a309f83b2022d26caf2/Pillow-6.2.2-cp36-cp36m-manylinux1_x86_64.whl
  ---> Running in ac095828b9ec
   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                  Dload  Upload   Total   Spent    Left  Speed
   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
 * Trying ::ffff:146.112.56.166...
 * TCP_NODELAY set
 * Connected to files.pythonhosted.org (::ffff:146.112.56.166) port 443 (#0)
 * ALPN, offering h2
 * ALPN, offering http/1.1
 * successfully set certificate verify locations:
 *   CAfile: /etc/ssl/certs/ca-certificates.crt
   CApath: /etc/ssl/certs
 } [5 bytes data]
 * TLSv1.3 (OUT), TLS handshake, Client hello (1):
 } [512 bytes data]
 * TLSv1.3 (IN), TLS handshake, Server hello (2):
 { [85 bytes data]
 * TLSv1.2 (IN), TLS handshake, Certificate (11):
 { [3177 bytes data]
 * TLSv1.2 (IN), TLS handshake, Server finished (14):
 { [4 bytes data]
 * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
 } [262 bytes data]
 * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
 } [1 bytes data]
 * TLSv1.2 (OUT), TLS handshake, Finished (20):
 } [16 bytes data]
 * TLSv1.2 (IN), TLS handshake, Finished (20):
 { [16 bytes data]
 * SSL connection using TLSv1.2 / AES256-GCM-SHA384
 * ALPN, server did not agree to a protocol

From the last line it can be seen that they do not agree on a protocol, and the communication fails.

JTIM
  • Would you be able to provide a minimal reproduction repository? – concision Oct 15 '20 at 08:36
  • @concision, I don't seem to be able to do it. I have tried, and tested it on my private PC without any issues. It seems like some network issue in my view, but the IT department says it is not :) So that is why I wanted to rule out the other options, in case someone had an idea. – JTIM Oct 15 '20 at 09:33
  • What is the base image that you are using (i.e. the one in the `FROM` command)? Perhaps try installing the `ca-certificates` Linux package with whatever the base image's Linux package manager is? – concision Oct 15 '20 at 09:57
  • @concision This is my `FROM "nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04"` so it must be apt-get right? As far as I know the steps to install it is done here in this post. Or do you have a specific method to install a certificate? – JTIM Oct 15 '20 at 10:23
  • Was thinking `apt-get -y update && apt-get install -y ca-certificates`, but it looks like that base image already has `ca-certificates` installed, so that is ruled out. Not sure if it is related or if you are even able to upgrade the base image, but it does seem like the tag you have selected is [now marked as unsupported](https://gitlab.com/nvidia/container-images/cuda/blob/master/doc/unsupported-tags.md). I'm not seeing `pip` or `python` in this image, are you manually installing an older version and then upgrading it? – concision Oct 15 '20 at 10:41
  • Since the SSL verification for `pypi.org` is failing, check the SSL certificate from the network that is experiencing the issue by running the following command inside the image: `openssl s_client -showcerts -servername pypi.org -connect pypi.org:443 2>/dev/null | openssl x509 -noout -text`. The organization issuer should be `O = DigiCert Inc` with several other references to `digicert`. If you are seeing otherwise, that could be the underlying problem. – concision Oct 15 '20 at 11:07
  • @concision You are correct - I am installing python and pip after the image. I am installing python3 and it gives me the old pip version. But, as I said, I can install packages with that and upgrade pip. After the upgrade the SSL error starts. Hmm, unsupported? I cannot see that in the link - only the Ubuntu 14.04 variant of my version is unsupported. Am I misreading it? Regarding the command you suggested: I get digicert as the O and it is mentioned 9 times in the output, so I would say that is acceptable, right? – JTIM Oct 15 '20 at 11:54
  • My apologies - looks like I misread it since the tag names are similar. Try invoking this following command: `apt-get -y update && apt-get install -y curl && curl https://pypi.org/simple/pip/`. If there is an SSL issue in the entire container, `curl` might dump more verbose output that can help narrow down the issue. If `curl` succeeds, then it is likely some issue with just `pip`. I noticed there was a warning on `pip`'s install page about using a Python install managed by the OS, https://pip.pypa.io/en/stable/installing/. Might possibly be of some significance? – concision Oct 15 '20 at 19:20
  • @concision I had done one Curl test before. But I added the output here now. – JTIM Oct 16 '20 at 08:29
  • 2
    *From the last line it can be seen that they do not agree on protocol and the communication fails* – not really. Looking at https://github.com/curl/curl/issues/2749 this is not an error but merely an information. It seems the server did not accept h2 and http/1.1 which were offered. In this case probably http/1.0 or even http/0.9 might have been negotiated in the end. – Piotr Dobrogost Oct 16 '20 at 12:14

4 Answers


Some time ago I ran into a similar problem. The solution for me was to add the cert and install the dependencies in one Docker layer.

I don't know how your Dockerfile looks exactly, but I'd try something like this:

ADD Cisco_Umbrella_Root_CA.cer /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt
RUN chmod 644 /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt && \
    update-ca-certificates --fresh && \
    pip install --upgrade pip setuptools && \
    pip install -r production.txt && \
    rm /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt  # for extra safety

For reference, here is what I do:

RUN mkdir -p -m 0600 ~/.ssh/ && \
    ssh-keyscan <my host> >> ~/.ssh/known_hosts && \
    eval `ssh-agent -s` && \
    ssh-add <ssh key> && \
    echo "Installing packages from pip. It might take a few minutes..." && \
    pip install --upgrade pip setuptools && \
    pip install -r production.txt && \
    rm <ssh key>

Where `<ssh key>` has already been given `chmod 400 <ssh key>` in another layer.

Also, make sure to

  • `apt update` AND
  • `apt install -y ca-certificates` OR
  • `apt upgrade`
pygeek
Tom Wojcik
  • Thank you so much for the input, however I keep getting the same issue. I have done the following -> apt-get installations and creating the image and upgrading pip package is done in my `baselayer`. I then use an amended version of your first suggestion in the `secondlayer` to install my requirements file, nevertheless the same https error is emitted `WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),)': /package/....` – JTIM Oct 16 '20 at 12:26

The steps suggested in the answer and in my question are definitely what one should try. If someone cannot get it working, like me: in this specific instance it was the IT organisation that had configured the traffic to be proxied through Umbrella, and it did not support the SSL scanning/decryption.

JTIM

For Docker build questions you REALLY need to show most of the Dockerfile.

The detail above seems to indicate the Dockerfile would contain

FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04
RUN set -ex \
  && apt update \
  && apt upgrade -y \
  && apt install -y curl python-pip \
  && pip install --upgrade pip setuptools

Without the Dockerfile there isn't a starting point, and the only answer that can be given is "you seem to have a network problem". When I tried the above, everything worked fine.

Using curl within the container, the SSL cert I received was:

* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=Fastly, Inc; CN=r.ssl.fastly.net
*  start date: Jul 20 18:19:08 2020 GMT
*  expire date: Apr 28 19:20:25 2021 GMT
*  issuer: C=BE; O=GlobalSign nv-sa; CN=GlobalSign CloudSSL CA - SHA256 - G3

That cert is a stock one that most systems should have. You can use openssl to interpret the results.

As you're adding Cisco_Umbrella_Root_CA.cer, you ARE proxying through a corporate proxy (see Cisco Umbrella Root Certificate); otherwise there would be no need to add that cert. The "tested it on my private PC without any issues" tells you that it's environmental.

You can always run `docker run -it nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04` to get a shell in the container and then run the commands from the Dockerfile by hand. When things break, fall back to Linux troubleshooting - you're in an Ubuntu-like environment, after all.

Timothy c
  • Dear Timothy, you are right, but this was not known originally, since adding the certificate actually helped. But as soon as I upgraded pip it stopped working, despite the certificate. – JTIM Oct 22 '20 at 08:03

This seems to be a problem with either your certificate (old or invalid) or your (probably outdated) pip version. There's a link below to a conversation about the same (or a similar) problem. I hope I could help...

https://community.onion.io/topic/4014/problem-installing-packages-through-pip3-omega2/3

  • Dear Sven, as mentioned in the question, the certificate is working, but it failed when I upgraded pip. I see no points in the link that differ from what I have stated. – JTIM Oct 22 '20 at 08:00