88

I am running a virtual machine (Ubuntu 13.10) with Docker (version 0.8.1, build a1598d1). I am trying to build an image with a Dockerfile. First, I want to update the packages (using the code below; the proxy is obfuscated), but apt-get times out with the error: Could not resolve 'archive.ubuntu.com'.

FROM ubuntu:13.10
ENV HTTP_PROXY <HTTP_PROXY>
ENV HTTPS_PROXY <HTTPS_PROXY>
RUN export http_proxy=$HTTP_PROXY
RUN export https_proxy=$HTTPS_PROXY
RUN apt-get update && apt-get upgrade

I have also run the following in the host system:

sudo HTTP_PROXY=http://<PROXY_DETAILS>/ docker -d &

The host is able to run apt-get without issue.

How can I change the dockerfile to allow it to reach the ubuntu servers from within the container?

Update

I ran the code in CentOS (changing the FROM ubuntu:13.10 to FROM centos) and it worked fine. It seems to be a problem with Ubuntu.

Christopher Louden
  • I have just tested the HTTP proxy in centos (yum update) and in ubuntu:13.10 (apt-get update). Both images work for me; I even tried to remove DNS settings from the container to test the proxy (it works fine without DNS, as it should). I am using http_proxy only (no https). – Jiri Mar 05 '14 at 06:54
  • Do you actually have "" in your Dockerfile or is this a placeholder? – Behe Jul 09 '14 at 16:57
  • I have the actual proxy in the file. I'm just not able to share the address. – Christopher Louden Jul 10 '14 at 13:49
  • See also [**How to build Docker Images with Dockerfile behind HTTP_PROXY?**](http://stackoverflow.com/a/31987595/6309) – VonC Aug 13 '15 at 11:59
  • Have a look at this article : [Using Docker Behind a Proxy](https://blog.codeship.com/using-docker-behind-a-proxy/) – Guillaume Husta Sep 13 '17 at 09:11

12 Answers

117

UPDATE:

You have the wrong capitalization of the environment variables in ENV. The correct one is http_proxy. Your example should be:

FROM ubuntu:13.10
ENV http_proxy <HTTP_PROXY>
ENV https_proxy <HTTPS_PROXY>
RUN apt-get update && apt-get upgrade

or

FROM centos
ENV http_proxy <HTTP_PROXY>
ENV https_proxy <HTTPS_PROXY>
RUN yum update 

All variables specified in ENV are prepended to every RUN command. Every RUN command is executed in its own container/environment, so it does not inherit variables from previous RUN commands!

Note: There is no need to start the docker daemon with a proxy for this to work, although if you want to pull images etc. you need to set the proxy for the docker daemon too. On Ubuntu you can set the proxy for the daemon in /etc/default/docker (this does not affect the containers' settings).
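
For reference, a minimal sketch of what that daemon configuration might look like on Ubuntu (the proxy host and port are placeholders, not values from this question):

# /etc/default/docker -- read when the Docker service starts on Ubuntu
export http_proxy="http://<PROXY_HOST>:<PORT>/"
export https_proxy="http://<PROXY_HOST>:<PORT>/"
export no_proxy="localhost,127.0.0.1"

Restart the daemon afterwards (e.g. sudo service docker restart) so the new environment takes effect.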


Also, this can happen if you run your proxy on the host (i.e. on localhost, 127.0.0.1). Localhost on the host differs from localhost in the container. In that case, bind your proxy to another IP (such as 172.17.42.1, the docker bridge address), or, if you bind to 0.0.0.0, use 172.17.42.1 instead of 127.0.0.1 when connecting from the container during docker build.
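
As an illustration, a sketch assuming the proxy listens on port 3128 on the docker0 bridge address (172.17.42.1 was the default at the time; check yours with ip addr show docker0):

FROM ubuntu:13.10
# proxy reachable on the host's docker0 bridge address, not on 127.0.0.1
ENV http_proxy http://172.17.42.1:3128
ENV https_proxy http://172.17.42.1:3128
RUN apt-get update && apt-get upgrade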

You can also look for an example here: How to rebuild dockerfile quick by using cache?

Jiri
  • I don't have a proxy on the host. It is a corporate proxy that I am behind. The same code works on CentOS (changing the `FROM` only) with the same proxy settings. – Christopher Louden Mar 04 '14 at 23:59
  • 3
    With the current state of Docker it looks like you must have `ENV http_proxy corporate-proxy.com` in your Dockerfile. That's pretty disgusting as it means you can't share your Dockerfile with anyone outside your company. I've just verified this by running polipo in a container while trying to cache `apt-get install` commands for developing Dockerfiles. Perhaps a future version of Docker will have some magic iptables configuration that allows transparent proxying of HTTP requests? – Tim Potter Mar 16 '14 at 10:17
  • @TimPotter I think it is not that bad (no advanced magic necessary). You need to set up polipo at the host with a parent proxy (`polipo parentProxy=corporate-proxy.com`) and then you need to set up a transparent proxy using iptables: `iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner --dport 80 -j REDIRECT --to-port 8123`. – Jiri Mar 16 '14 at 12:36
  • 1
    I've just put together a project that provides squid running in a container with appropriate iptables / ip rules commands to redirect all other container traffic through the squid container for transparent proxying. This may be useful for what you're describing? http://silarsis.blogspot.com.au/2014/03/proxy-all-containers.html – KevinL Mar 21 '14 at 20:52
  • do you have a way to get the http_proxy setting of the docker daemon (/etc/default/docker) from a docker container? so I can add a script to inject it into the Dockerfile – Larry Cai Jul 03 '14 at 00:45
  • @LarryCai I do not know how to get the /etc/default/docker setting, but in case a corporate proxy is configured with WPAD+PAC (web proxy auto discovery) you can use a tool like [pacparser](https://code.google.com/p/pacparser/) to discover the proxy from a shell script and set the appropriate variables. – Jiri Jul 03 '14 at 06:11
  • @Jiri is there a way to unset ENV variables? I need to set a proxy to download & install wget but need to remove the proxy when I use wget to download a jar from our local artifactory repo – Adrian Jul 22 '15 at 20:55
58

Updated on 02/10/2018

With the newer docker client option --config, you no longer need to set the proxy in the Dockerfile. The same Dockerfile can be used inside and outside a corporate environment, with or without a proxy.

docker command option:

--config string      Location of client config files (default "~/.docker")

or the environment variable DOCKER_CONFIG:

`DOCKER_CONFIG` The location of your client configuration files.

$ export DOCKER_CONFIG=~/.docker

https://docs.docker.com/engine/reference/commandline/cli/

https://docs.docker.com/network/proxy/

I recommend setting the proxy with httpProxy, httpsProxy, ftpProxy and noProxy (the official documentation omits the variable ftpProxy, which is sometimes useful):

{
 "proxies":
 {
   "default":
   {
     "httpProxy": "http://192.168.1.12:3128",
     "httpsProxy": "http://192.168.1.12:3128",
     "ftpProxy": "http://192.168.1.12:3128",
     "noProxy": "*.test.example.com,.example2.com,127.0.0.0/8"
   }
 }
}

Adjust the proxy IP and port for your proxy environment and save the file to ~/.docker/config.json.

Once it is set up properly, you can run docker build and docker run as normal.

$ cat Dockerfile

FROM alpine

$ docker build -t demo . 
    
$ docker run -ti --rm demo env|grep -ri proxy
(standard input):HTTP_PROXY=http://192.168.1.12:3128
(standard input):http_proxy=http://192.168.1.12:3128
(standard input):HTTPS_PROXY=http://192.168.1.12:3128
(standard input):https_proxy=http://192.168.1.12:3128
(standard input):NO_PROXY=*.test.example.com,.example2.com,127.0.0.0/8
(standard input):no_proxy=*.test.example.com,.example2.com,127.0.0.0/8
(standard input):FTP_PROXY=http://192.168.1.12:3128
(standard input):ftp_proxy=http://192.168.1.12:3128

Old answer (Decommissioned)

The settings below in the Dockerfile work for me. I tested them in CoreOS, Vagrant and boot2docker. Suppose the proxy port is 3128.

In CentOS:

ENV http_proxy=ip:3128 
ENV https_proxy=ip:3128

In Ubuntu:

ENV http_proxy 'http://ip:3128'
ENV https_proxy 'http://ip:3128'

Be careful with the format: some have http:// in them, some don't, and some use single quotes. If the IP address is 192.168.0.193, then the settings will be:

In CentOS:

ENV http_proxy=192.168.0.193:3128 
ENV https_proxy=192.168.0.193:3128

In Ubuntu:

ENV http_proxy 'http://192.168.0.193:3128'
ENV https_proxy 'http://192.168.0.193:3128'

If you need to set the proxy in CoreOS, for example to pull the image:

cat /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=http://192.168.0.193:3128"
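
If you use this drop-in, reload systemd and restart docker afterwards so it takes effect (commands assume a systemd-based host such as CoreOS):

sudo systemctl daemon-reload
sudo systemctl restart docker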
BMW
  • 1
    You may also need to add the https proxy setting, as apt-get will use secure connections. [Service] Environment="HTTP_PROXY=http://your-proxy-server:port/" "HTTPS_PROXY=https://your-proxy-server:port/" "NO_PROXY=localhost,127.0.0.0/8" – pharsfalvi Jun 30 '15 at 09:44
  • Note that for Centos, you explicitly have to add the port number, even if it's default port `80`: https://stackoverflow.com/a/46949277/1654763 – Munchkin Oct 26 '17 at 08:31
  • 1
    This worked. Thanks for the example. Docker docs isn't clear about it. – ToTenMilan Jan 03 '18 at 11:37
  • 2
    `$DOCKER_CONFIG` should point to a directory and not directly to the `config.json` file – Jakub Bochenski Jan 08 '19 at 14:40
45

You can use the --build-arg option when you want to build using a Dockerfile.

From a link on https://github.com/docker/docker/issues/14634, see the section "Build with --build-arg with multiple HTTP_PROXY":

[root@pppdc9prda2y java]# docker build \
  --build-arg https_proxy=$HTTP_PROXY --build-arg http_proxy=$HTTP_PROXY \
  --build-arg HTTP_PROXY=$HTTP_PROXY --build-arg HTTPS_PROXY=$HTTP_PROXY \
  --build-arg NO_PROXY=$NO_PROXY --build-arg no_proxy=$NO_PROXY -t java .

NOTE: On your own system, make sure you have set the HTTP_PROXY and NO_PROXY environment variables.
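
For example, on the host shell (the values are placeholders, not taken from the question):

export HTTP_PROXY=http://<PROXY_HOST>:<PORT>
export NO_PROXY=localhost,127.0.0.1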

zhanxw
  • This is a nasty workaround; for a persistent setup you should use the Dockerfile, like the example @Reza Farshi gave – Paweł Smołka Aug 03 '16 at 11:31
  • 20
    Nasty workaround? In some (most) cases it's preferable to not have your own proxy baked into the image, if your image is (for example) to be used by people in other parts of your company, who might be behind a different proxy. – ventolin May 24 '17 at 12:46
13

Before any apt-get command in your Dockerfile you should put this line:

COPY apt.conf /etc/apt/apt.conf

Don't forget to create apt.conf in the same folder as the Dockerfile; the content of the apt.conf file should look like this:

Acquire::socks::proxy "socks://YOUR-PROXY-IP:PORT/";
Acquire::http::proxy "http://YOUR-PROXY-IP:PORT/";
Acquire::https::proxy "http://YOUR-PROXY-IP:PORT/";

If you use a username and password to connect to your proxy, then apt.conf should look like this:

Acquire::socks::proxy "socks://USERNAME:PASSWORD@YOUR-PROXY-IP:PORT/";
Acquire::http::proxy "http://USERNAME:PASSWORD@YOUR-PROXY-IP:PORT/";
Acquire::https::proxy "http://USERNAME:PASSWORD@YOUR-PROXY-IP:PORT/";

For example:

Acquire::https::proxy "http://foo:bar@127.0.0.1:8080/";

where foo is the username and bar is the password.
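
Putting it together, a minimal sketch of the Dockerfile side (the base image and upgrade step are just examples):

FROM ubuntu:13.10
# apt.conf sits next to the Dockerfile and carries the proxy settings above
COPY apt.conf /etc/apt/apt.conf
RUN apt-get update && apt-get upgrade -y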

Reza Farshi
  • apt does not support SOCKS proxies at all. `Acquire::socks::proxy` means set the proxy for all URLs starting with a `socks` scheme. Since your `sources.list` does not have any `socks://` URLs, that line is entirely ignored. I'll submit an edit to correct this. – Hans-Christoph Steiner Dec 16 '15 at 09:11
  • You have `/etc/apt/apt.conf.d/` on `jessie`, so the Dockerfile `COPY` directive needs an update here. – Ain Tohvri Dec 09 '17 at 22:11
  • This was the only answer that worked for me (running Docker for Windows behind corporate proxy). – Connor Goddard Apr 25 '18 at 10:08
  • I wonder why `--build-arg http_proxy=http://` does not work in this case and I must modify /etc/apt/apt.conf – Jackiexiao Sep 01 '22 at 03:35
8

Use --build-arg with lower-case environment variables:

docker build --build-arg http_proxy=http://proxy:port/ --build-arg https_proxy=http://proxy:port/ --build-arg ftp_proxy=http://proxy:port --build-arg no_proxy=localhost,127.0.0.1,company.com -q=false .
Gaetan
6

A slight alternative to the answer provided by @Reza Farshi (which works better in my case) is to write the proxy settings out to /etc/apt/apt.conf using echo in the Dockerfile, e.g.:

FROM ubuntu:16.04

RUN echo "Acquire::http::proxy \"$HTTP_PROXY\";\nAcquire::https::proxy \"$HTTPS_PROXY\";" > /etc/apt/apt.conf

# Test that we can now retrieve packages via 'apt-get'
RUN apt-get update

The advantage of this approach is that the proxy addresses can be passed in dynamically at image build time, rather than having to copy the settings file over from the host.

e.g.

docker build --build-arg HTTP_PROXY=http://<host>:<port> --build-arg HTTPS_PROXY=http://<host>:<port> .

as per docker build docs.

Connor Goddard
5

As suggested by other answers, --build-arg may be the solution. In my case, I had to add --network=host in addition to the --build-arg options.

docker build -t <TARGET> --build-arg http_proxy=http://<IP:PORT> --build-arg https_proxy=http://<IP:PORT> --network=host .
user4780495
  • If it only works when using --network=host, it could be an issue with dnsmasq. See wisbucky's answer below. After getting the proper DNS in /etc/resolv.conf on the host, you shouldn't need --network=host – user2707671 Mar 14 '18 at 17:40
3

I had the same problem and found another little workaround: I have a provisioner script that is added from the Docker build context. In the script I set the environment variable depending on a ping check:

Dockerfile:

ADD script.sh /tmp/script.sh
RUN /tmp/script.sh

script.sh:

if ping -c 1 ix.de ; then
    echo "direct internet doing nothing"
else
    echo "proxy environment detected setting proxy"
    export http_proxy=<proxy address>
fi

This is still somewhat crude, but it worked for me.
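
Note that the exported variable only exists inside the shell of that single RUN step (see the comment below). A sketch of one way around that is to source the script and run the package command in the same RUN:

# source the script so its exports apply to the commands that follow in this RUN
RUN . /tmp/script.sh && apt-get update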

  • This worked for you? I can get the script to run just fine, but the environment variables are only available to the script. Things like apt-get, cURL, and wget don't end up seeing the environment variables. – Ryan J. McDonough Mar 29 '15 at 12:52
3

If you have the proxies set up correctly and still cannot reach the internet, it could be DNS resolution. Check /etc/resolv.conf on the host Ubuntu VM. If it contains nameserver 127.0.1.1, that is wrong.

Run these commands on the host Ubuntu VM to fix it:

sudo vi /etc/NetworkManager/NetworkManager.conf
# Comment out the line `dns=dnsmasq` with a `#`

# restart the network manager service
sudo systemctl restart network-manager

cat /etc/resolv.conf

Now /etc/resolv.conf should have a valid value for nameserver, which Docker will copy into the containers.
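
For example, after the restart you would expect something along these lines (the nameserver value is illustrative; it will be whatever your network provides):

$ cat /etc/resolv.conf
nameserver 192.168.1.1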

wisbucky
  • Thank you! That answer was extremely helpful and finally helped me getting `pip install` working with docker behind a proxy. That's not really what the OP wanted, but helped me a lot! :) – colidyre Aug 29 '17 at 20:25
  • Same here, thank you! Here is more info about this issue: https://superuser.com/questions/681993/using-dnsmasq-with-networkmanager – user2707671 Mar 14 '18 at 17:37
1

We are doing ...

ENV http_proxy http://9.9.9.9:9999
ENV https_proxy http://9.9.9.9:9999

and at the end of the Dockerfile ...

ENV http_proxy ""
ENV https_proxy ""

This, for now (until docker introduces build env vars), allows the proxy env vars to be used for the build ONLY, without exposing them.

An alternative solution is NOT to build your images locally behind a proxy, but to let Docker build your images for you using Docker "automated builds". Since Docker is not building the images behind your proxy, the problem is solved. An example of an automated build is available at ...

https://github.com/danday74/docker-nginx-lua (GITHUB repo)

https://registry.hub.docker.com/u/danday74/nginx-lua (Docker Hub repo which watches the GitHub repo using an automated build and runs a docker build on a push to the GitHub master branch)

danday74
  • Just wanted to mention that since these ENV vars are exposed in EACH intermediate container, unsetting them at the end will not hide them, and they can still be accessed pretty easily. From what I've seen, the secure options for dealing with sensitive data are covered in this [github issue](https://github.com/docker/docker/issues/13490) – Squadrons Dec 17 '16 at 19:51
1

And if you want to set the proxy for wget, you should put these lines in your Dockerfile:

ENV http_proxy YOUR-PROXY-IP:PORT/
ENV https_proxy YOUR-PROXY-IP:PORT/
ENV all_proxy YOUR-PROXY-IP:PORT/
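
With those ENV lines in place, a later step such as the following sketch (the URL is only a placeholder) would go through the proxy:

RUN wget -O /tmp/index.html http://example.com/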
Reza Farshi
1

As Tim Potter pointed out, setting the proxy in the Dockerfile is horrible. When building the image, you add the proxy for your corporate network, but you may be deploying to the cloud or a DMZ where there is no need for a proxy, or the proxy server is different.

Also, you cannot share your image with others outside your corporate network.

Dhawal