
In my Linux network I am unable to reach my Docker containers from the host they are running on, over a dedicated macvlan network. All other connections to this macvlan network work fine.

So basically the setup is:

DOCKER1       eth0  172.0.0.1 (default)
  |           eth1  10.0.0.1  (macvlan)
  CONTAINER1        10.0.0.11 (macvlan)

DOCKER2       eth0  172.0.0.2 (default)
  |           eth1  10.0.0.2  (macvlan)
  CONTAINER2        10.0.0.12 (macvlan)

  • Host DOCKER1 cannot reach CONTAINER1
  • Host DOCKER2 cannot reach CONTAINER2
  • Host DOCKER1 can reach DOCKER2
  • Host DOCKER1 can reach CONTAINER2
  • Host DOCKER2 can reach DOCKER1
  • Host DOCKER2 can reach CONTAINER1
  • All containers can reach each other
  • All other devices on the physical network can reach all hosts and containers
  • All can reach the gateway/internet

How can I make the host reach its own containers over the macvlan network?

I need specific applications to interact over this network, so using docker exec won't solve my problem ;).
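For reference, the macvlan network on each host was created with something like the following (the subnet and parent interface match the diagram above; the network name, gateway address and container image are only illustrative):

# On DOCKER1: create a macvlan network on top of the dedicated NIC
docker network create -d macvlan \
  --subnet=10.0.0.0/24 \
  --gateway=10.0.0.254 \
  -o parent=eth1 \
  pub_net

# Attach a container with a fixed address on that network
docker run -d --name CONTAINER1 --network pub_net --ip 10.0.0.11 nginx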


2 Answers


You can do this as follows:

# Create an extra macvlan interface on the host, bridged to the same parent NIC
ip link add foobar link enp7s0 type macvlan mode bridge
# Give it its own, otherwise unused, IP address and bring it up
ip addr add 192.168.9.252/32 dev foobar
ip link set foobar up
# Send traffic for the container's IP via the new interface instead of the parent
ip route add 192.168.9.228/32 dev foobar

Where:

  • enp7s0 - the name of your physical adapter
  • 192.168.9.252/32 - a genuinely new, unused IP address on your network
  • 192.168.9.228/32 - the IP address of the container using macvlan

Please be aware that this will not survive reboots, so you will need to script it to run at each boot, or use another method to make it persistent.
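One way to make it persistent, as a minimal sketch assuming a systemd host (the unit name is made up, the interface and addresses are the same examples as above, and you may need to adjust the path to the ip binary for your distribution):

# /etc/systemd/system/macvlan-shim.service (hypothetical unit name)
[Unit]
Description=Host-side macvlan interface so the host can reach its own macvlan containers
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/ip link add foobar link enp7s0 type macvlan mode bridge
ExecStart=/sbin/ip addr add 192.168.9.252/32 dev foobar
ExecStart=/sbin/ip link set foobar up
ExecStart=/sbin/ip route add 192.168.9.228/32 dev foobar

[Install]
WantedBy=multi-user.target

Then enable it once with `systemctl daemon-reload && systemctl enable --now macvlan-shim.service`.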

  • Could you elaborate a little more? Does this have to be repeated per docker container? How does this work? Is there any reference material? – Levi Roberts May 18 '20 at 03:12
  • 2
    @LeviRoberts you can do this once but it needs to be done on each host, for example on host DOCKER1: #1 Create new macvlan: `ip link add macvlan2 link eth1 type macvlan mode bridge`. #2 Assign IP within same subnet as other macvlan on macvlan2: `ip addr add 10.0.0.201/32 dev macvlan2`. #3 Enable new macvlan: `ip link set macvlan2 up`. #4 Route macvlan subnet over new macvlan device: `ip route add 10.0.0.0/24 dev macvlan2`. Then all your containers can talk with the host and vise versa. Con: you lose one IP per docker host, pro: you gain connectivity. Ping to 10.0.0.11 will work via 10.0.0.201 – Robbert Mar 28 '21 at 19:16
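Putting that comment together for host DOCKER1 (interface name and addresses are taken from the question's diagram and the comment above; 10.0.0.201 is the extra host-side address it proposes):

# 1. Create a second macvlan interface on the host, bridged to the same parent NIC
ip link add macvlan2 link eth1 type macvlan mode bridge
# 2. Give it an unused address from the same subnet as the containers
ip addr add 10.0.0.201/32 dev macvlan2
# 3. Bring it up
ip link set macvlan2 up
# 4. Route the whole macvlan subnet over the new interface
ip route add 10.0.0.0/24 dev macvlan2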

The host cannot communicate with local macvlan devices without special support from an external switch. See e.g. this Red Hat documentation which discusses the use of macvlan devices for virtual machines:

However, when a guest virtual machine is configured to use a type='direct' network interface such as macvtap, despite having the ability to communicate with other guests and other external hosts on the network, the guest cannot communicate with its own host.

This situation is actually not an error — it is the defined behavior of macvtap. Due to the way in which the host's physical Ethernet is attached to the macvtap bridge, traffic into that bridge from the guests that is forwarded to the physical interface cannot be bounced back up to the host's IP stack. Additionally, traffic from the host's IP stack that is sent to the physical interface cannot be bounced back up to the macvtap bridge for forwarding to the guests.
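If you want to check that this is what is happening in the Docker case, you can confirm that the network uses the macvlan driver and that the container interface is a macvlan device in bridge mode (the network name below is a placeholder, the container name follows the question, and the second command assumes iproute2 is installed inside the container):

# Show the driver ("macvlan") and the parent NIC of the Docker network
docker network inspect my_macvlan_net

# Inside the container, "ip -d link" reports the interface type and mode,
# e.g. "macvlan mode bridge"
docker exec CONTAINER1 ip -d link show eth0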
