
I got a new Windows 10 machine and installed all my dev tools on it, including Docker Desktop. I noticed that the containers I spun up couldn't access any HTTPS address. A .NET 6-based application that I ran via docker-compose on my old machine without any issues threw this error on the new one:

error NU1301: Unable to load the service index for source https://api.nuget.org/v3/index.json

To verify this, I ran an Ubuntu container and shelled into it:

docker run -it ubuntu /bin/bash

then I installed curl:

apt update
apt install curl

then I tried curl with the nuget address:

curl https://api.nuget.org/v3/index.json

which failed with the following error message:

curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
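
For what it's worth, the certificate chain the container is actually being handed can be inspected directly with openssl (a diagnostic sketch only; it assumes openssl is available in the container, so run apt install openssl first if it isn't):

openssl s_client -connect api.nuget.org:443 -servername api.nuget.org -showcerts </dev/null

If something on the host were intercepting TLS, the issuer printed here would not be the public CA that api.nuget.org normally presents.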

I ran into similar issues with other applications I tried to run (applications that needed to access HTTPS addresses), so this isn't about one specific address: containers seem to fail on any HTTPS address but have no issues connecting to HTTP ones. I'm not sure what setting enabled HTTPS access on my old machine. I installed Docker Desktop the same way I did before and didn't tweak any settings or create any bridge network. There are no proxies or anti-viruses on either machine. Any idea what might have caused this?

Mar Chal

2 Answers


You need to install the CA certificate bundle in the Ubuntu container:

apt-get update
apt-get install ca-certificates
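
If you'd rather not do this in every container you start, you can bake it into an image once and run that instead of the base image (a minimal sketch; the base image and tag name are just examples, adjust them to whatever you actually use):

FROM ubuntu:22.04
# Install the CA certificate bundle so HTTPS servers can be verified,
# then clean up the apt cache to keep the image small
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*

Build it with docker build -t ubuntu-with-certs . and spin up containers from ubuntu-with-certs instead of plain ubuntu.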
Louay GOURRIDA
  • The problem is not an individual container. I didn't have to install anything to enable a container to access an HTTPS address before and I shouldn't have to do that now for every single container I spin up. It just doesn't make sense. – Mar Chal Dec 29 '22 at 11:15
  • Create your own image, with the certs installed, and spin that up instead of the base image. Solving issues like this is what docker is for. – Software Engineer Dec 29 '22 at 11:55
  • I don't know, man. I remember I had this problem and solved it that way; I'm not sure how to do it for every container. Check https://stackoverflow.com/questions/58338266/curl-certificate-fail-in-docker-container, it might help. – Louay GOURRIDA Dec 29 '22 at 12:21
  • @SoftwareEngineer I would do that if there was a cert problem with a specific container/image. As I said, on my old machine with almost the same specs and tools, I didn't have to do anything to enable this while on the new one, **any** container I spin up would fail to access HTTPS addresses. It just doesn't make sense that from now on I have to create an image for things that used to work before without any issues. I was hoping there was some sort of Docker settings I could tweak to fix this issue. – Mar Chal Dec 30 '22 at 04:10
  • Could it simply be that the image you're using is old (wrt ca-certs) compared to the server you're accessing? And, is it really 'any' containers, or just ones based on the same image? – Software Engineer Dec 30 '22 at 05:09
  • No I tried a couple of different repos that used to work perfectly on my old machine. They all happen to need to access some HTTPS addresses and they are all failing now. I was initially wondering what they had in common then I narrowed it down to this HTTPS issue. Some of them try to access the nuget address I shared, some others try to fetch something for Elasticsearch from a different HTTPS address and so on. – Mar Chal Dec 30 '22 at 06:35

I ended up downgrading Docker Desktop from 4.15.0 to 4.9.1, and now everything works.
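
To confirm it, I reran the same check from a throwaway container (curlimages/curl is just a convenient image that ships with curl and CA certificates preinstalled; any image with curl would do):

docker run --rm curlimages/curl -s https://api.nuget.org/v3/index.json

It now returns the service index JSON instead of the certificate error.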

Mar Chal