
I know it is possible to pass the http_proxy and https_proxy environment variables to a container, as shown e.g. in this SO answer. However, this only works for proxy-aware commands like wget and curl, which read and honor these environment variables.

I need to connect everything through the proxy, so that all internet access is routed via the proxy. Essentially, the proxy should be transformed into a kind of VPN.

I am thinking about something similar to the --net=container option where the container gets its network from another container.

How do I configure a container to run everything through the proxy?

marlar
  • Maybe you can change the default route like in the answer to this question: https://stackoverflow.com/questions/36882945/change-default-route-in-docker-container – Hans Kilian Feb 03 '22 at 12:33
  • @HansKilian How would I change the default route to use a proxy like this http://username:password@proxy2.domain.com? Any ideas appreciated :) – marlar Feb 03 '22 at 18:01
  • https://medium.datadriveninvestor.com/how-to-transparently-use-a-proxy-with-any-application-docker-using-iptables-and-redsocks-b8301ddc4e1e – Jan Garaj Feb 06 '22 at 20:20
  • If I understand correctly you want to put all your containers behind a proxy? If that's the case I would put them all on their own virtual network along with an nginx container that exposes each of your containers on its own endpoint. – Brandon Piña Feb 11 '22 at 14:28
  • @JanGaraj Your link actually provides valuable info. I think I can get along from there. – marlar Feb 12 '22 at 19:54
  • @BrandonPina No, just a single container. I think the keyword is transparent proxy. – marlar Feb 12 '22 at 19:55

2 Answers


Jan Garaj's comment actually pointed me in the right direction.

As noted in my question, not all programs and commands use the proxy environment variables, so simply passing the http_proxy and https_proxy env vars to Docker is not a solution. I needed a setup where the whole container directs every network request (on certain ports) through the proxy, no matter which program or command makes it.

The Medium article demonstrates how to build and set up a Docker container that, with the help of redsocks, redirects all FTP requests to another running Docker container acting as a proxy. The two containers communicate over a Docker network.

In my case I already have a running proxy, so I don't need a Docker network or a proxy container. Also, I need to proxy HTTP and HTTPS, not FTP.

By changing the configuration files I got it working. In this example I simply call wget ipecho.net/plain to retrieve my outside IP. If everything works, this should return the IP of the proxy, not my real IP.

Configuration

Dockerfile:

FROM debian:latest
LABEL maintainer="marlar"
WORKDIR /app
ADD . /app
# Install redsocks, iptables and a few tools for testing the proxy
RUN apt-get update
RUN apt-get upgrade -qy
RUN apt-get install iptables redsocks curl wget lynx -qy
# Template config; the placeholders are filled in by run.sh at container start
COPY redsocks.conf /etc/redsocks.conf
ENTRYPOINT /bin/bash run.sh

setup script (run.sh):

#!/bin/bash
echo "Configuration:"
echo "PROXY_SERVER=$PROXY_SERVER"
echo "PROXY_PORT=$PROXY_PORT"
echo "Setting config variables"
# Fill in the placeholders in redsocks.conf with the values passed via -e
sed -i "s/vPROXY-SERVER/$PROXY_SERVER/g" /etc/redsocks.conf
sed -i "s/vPROXY-PORT/$PROXY_PORT/g" /etc/redsocks.conf
echo "Restarting redsocks and redirecting traffic via iptables"
/etc/init.d/redsocks restart
# Redirect all outgoing HTTP/HTTPS traffic to the local redsocks listener
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 12345
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-port 12345
echo "Getting IP ..."
wget -q -O- https://ipecho.net/plain
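Note that the two REDIRECT rules above also match redsocks' own outgoing connection to the proxy if the proxy itself listens on port 80 or 443, which would create a loop. In that case an exclusion rule like this (an untested sketch) could be added before the redirects:

# Let traffic destined for the proxy itself bypass the redirect
iptables -t nat -I OUTPUT -p tcp -d "$PROXY_SERVER" -j RETURN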

redsocks.conf:

base {
 log_debug = off;
 log_info = on;
 log = "file:/var/log/redsocks.log";
 daemon = on;
 user = redsocks;
 group = redsocks;
 redirector = iptables;
}
redsocks {
 local_ip = 127.0.0.1;
 local_port = 12345;
 ip = vPROXY-SERVER;
 port = vPROXY-PORT;
 type = http-connect;

}
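If the proxy requires a username and password (as in the URL from my comment above), redsocks can also send credentials. A sketch of the redsocks block with login fields (untested in my setup; the values could be substituted via sed just like the server and port) could look like this:

redsocks {
 local_ip = 127.0.0.1;
 local_port = 12345;
 ip = vPROXY-SERVER;
 port = vPROXY-PORT;
 type = http-connect;
 login = "username";
 password = "password";
}

For a SOCKS5 proxy, type = socks5 would be used instead of http-connect.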

Building the container

docker build -t proxy-via-iptables .

Running the container

docker run -it --privileged -e PROXY_SERVER=x.x.x.x -e PROXY_PORT=xxxx proxy-via-iptables

Replace the proxy server and port with your proxy's address and port.

If the container works and uses the external proxy, wget should print the IP of the proxy even though the wget command uses neither the -e use_proxy=yes option nor the proxy environment variables. If it doesn't work, it will return your own IP, or perhaps no IP at all, depending on how it fails.
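A simple way to double-check is to compare the IP reported on the host with the IP reported by the container (placeholder values below, assuming wget is available on the host):

# On the host: prints your real public IP
wget -q -O- https://ipecho.net/plain

# In the container: run.sh ends with the same wget call, so this should print
# the proxy's IP instead
docker run -it --privileged -e PROXY_SERVER=x.x.x.x -e PROXY_PORT=xxxx proxy-via-iptables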

marlar
  • I've tried following these steps, but I keep getting 502 response errors on every request... My setup seems similar to yours where I'm running a mitmproxy proxy server on my local machine while trying to capture network traffic made from a docker container. – utpamas Jul 18 '22 at 21:32

You can use the proxy env vars:

docker container run \
  -e HTTP_PROXY=http://username:password@proxy2.domain.com \
  -e HTTPS_PROXY=http://username:password@proxy2.domain.com \
  yourimage

If you want the proxy server to be used automatically when starting a container, you can configure default proxy servers in the Docker CLI configuration file (~/.docker/config.json). You can find instructions for this in the networking section of the user guide.

For example:

{
  "proxies": {
    "default": {
      "httpProxy": "http://username:password@proxy2.domain.com",
      "httpsProxy": "http://username:password@proxy2.domain.com"
    }
  }
}

To verify if the ~/.docker/config.json configuration is working, start a container and print its env:

docker container run --rm busybox env

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=220e4df13604
HTTP_PROXY=http://username:password@proxy2.domain.com
http_proxy=http://username:password@proxy2.domain.com
HTTPS_PROXY=http://username:password@proxy2.domain.com
https_proxy=http://username:password@proxy2.domain.com
HOME=/root
Astronaute
  • Unfortunately, using the proxy environment variables doesn't work as stated in my question. Not all programs use the vars. I need the whole container to redirect everything to the proxy. – marlar Feb 13 '22 at 10:10