I've run into a problem with Docker's network configuration. The scenario is this:
1. I have two network interfaces on my server, eth0 and eth1:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.198.172 netmask 255.255.255.0 broadcast 172.17.198.255
inet6 fe80::250:56ff:fea8:233f prefixlen 64 scopeid 0x20<link>
ether 00:50:56:a8:23:3f txqueuelen 1000 (Ethernet)
RX packets 5415657 bytes 2659904664 (2.4 GiB)
RX errors 0 dropped 78 overruns 0 frame 0
TX packets 935762 bytes 1824232555 (1.6 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.251.6.68 netmask 255.255.255.0 broadcast 10.251.6.255
inet6 fe80::250:56ff:fea8:1778 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:a8:17:78 txqueuelen 1000 (Ethernet)
RX packets 4954017171 bytes 349830337818 (325.8 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 66380998 bytes 4647495138 (4.3 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
2. eth0 is the default network interface, and eth1 is an interface that supports multicast; it sends and receives messages in multicast mode.
3. Now I create a container with the following command: docker run -it -p 8181:8181 centos bash
4. My understanding is that both eth0 and eth1 will be bridged to veth0 in the container. So my questions are:
<1> If both eth0 and eth1 are bridged to veth0, how does veth0 receive IP packets from the two interfaces (eth0 and eth1)?
<2> If my understanding is wrong and Docker only bridges eth0 to veth0, how can I bridge eth1 to a veth1 inside the container, so that the container can receive multicast packets from the physical network through eth1?
Thanks so much! This problem has had me stuck for a long time. If you have any ideas or questions, please leave a comment here. Thanks!
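For reference, here is a sketch of one direction I've been considering for question <2> (untested on my setup): a macvlan network whose parent is eth1, so the container gets an interface directly on eth1's network segment. The subnet matches eth1 above, but the gateway address and the network name "multicast-net" are assumptions for illustration:

```shell
# Sketch only: create a macvlan network with eth1 as the parent, so a
# container attached to it sits directly on eth1's L2 segment.
# 10.251.6.0/24 matches eth1 above; the gateway 10.251.6.1 is assumed.
docker network create -d macvlan \
  --subnet=10.251.6.0/24 \
  --gateway=10.251.6.1 \
  -o parent=eth1 multicast-net

# Run the container on that network instead of the default bridge.
# (Note: -p port publishing does not apply on macvlan networks.)
docker run -it --network multicast-net centos bash
```

I am not sure whether this would actually deliver multicast traffic into the container, or whether it conflicts with keeping eth0 as the default route, so any correction is welcome.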