
I added a macvlan Docker network on my Ubuntu instance at GCP. However, I cannot pass traffic to or from the instance over it. I suspect there may be a restriction that enforces exactly one MAC address per instance.

Does anyone know about this, or is there a workaround? Is there a way to see what the ARP/MAC table looks like on the gateway side at GCP?

My intention was to use a macvlan network for a Docker container that uses a secondary IP address from the instance's primary network.

More details: I assigned a secondary IP range to a VM instance, e.g. VM internal IP (primary): 10.10.10.2/24, VM secondary IP range: 10.10.11.0/24.

GCP routes the VM secondary range to the VM IP address. I tested this by creating a test loopback with IP 10.10.11.2 and accessing this IP from a different VM in the same VPC. It worked.
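For reference, the loopback test described above can be reproduced with `ip`; the address follows the example, and the commands assume root on the VM:

```shell
# On the VM that owns the secondary range: add a /32 from that
# range as a loopback address.
sudo ip addr add 10.10.11.2/32 dev lo

# From another VM in the same VPC, verify that GCP routes the
# secondary range to this instance:
ping -c 3 10.10.11.2

# Clean up afterwards:
sudo ip addr del 10.10.11.2/32 dev lo
```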

In the next step I removed that bogus loopback and started a standalone Docker container on a macvlan network with IP address 10.10.11.2.
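A macvlan setup of this kind is typically created along the following lines; the network name `macnet` and the image `nginx` are placeholders, while the subnet and parent interface `ens4` follow the question:

```shell
# Create a macvlan network whose parent is the VM's primary NIC.
docker network create -d macvlan \
  --subnet=10.10.11.0/24 \
  --gateway=10.10.11.1 \
  -o parent=ens4 macnet

# Run a container pinned to an address from the secondary range.
docker run -d --name web --network macnet --ip 10.10.11.2 nginx
```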

I was expecting that a new container attached to this macvlan network would be reachable through the VM's ens4 interface with the container's MAC and IP address (10.10.11.2).

According to the documentation, this is what a macvlan network does: it completely isolates the macvlan network from the host network by using a new MAC address for each container in the macvlan network.

The only difference between an IP from the secondary range on the host VM and one on a Docker container residing in the macvlan network is that the container uses a different MAC address than the host VM.

gonuladami

1 Answer


The solution will depend on what kind of functionality you are trying to achieve.

1 - If you require two separately reachable IPs on a single VM, then you will need to recreate the VM with two vNICs (virtual network interfaces). Additional vNICs can only be added during VM creation, and the second vNIC must be on a different VPC.
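Creating such a VM might look like the following `gcloud` sketch; the instance, VPC, and subnet names are hypothetical, and the two vNICs must sit in different VPC networks:

```shell
# vpc-a/subnet-a and vpc-b/subnet-b are placeholder names for two
# distinct VPC networks; multiple vNICs can only be set at creation.
gcloud compute instances create multi-nic-vm \
  --zone=us-central1-a \
  --network-interface network=vpc-a,subnet=subnet-a \
  --network-interface network=vpc-b,subnet=subnet-b
```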

2 - If you want to assign an IP (as a secondary subnet range in a single VPC) to a container and have traffic be routable to and from that container, then this is very similar in concept to Kubernetes IP masquerading. While IP masquerading is typically used in Google Kubernetes Engine (GKE), the ip-masq-agent can perform the same task for your VM's containers. IP masquerade essentially causes the source IP of the container to be 'source-NAT'ed at the vNIC of the GCP VM. This means that all traffic being routed from your containers on the GCP network will appear to have your GCP VM's vNIC IP as the source IP.

For option 2, masquerading is a requirement, as by default a VM cannot forward a packet that it did not originate. As a first step, IP forwarding must be enabled on the vNIC during VM creation (it can only be enabled at creation time). Then, to enable IP masquerading for your containers, perform steps 7 and 8 from configuring a VM as a NAT gateway.
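Those steps amount, roughly, to enabling kernel IP forwarding and adding a masquerade rule for traffic leaving the primary NIC; this is a sketch, with `ens4` taken from the question and the rest standard Linux tooling:

```shell
# The VM itself must be created with IP forwarding enabled, e.g.:
#   gcloud compute instances create ... --can-ip-forward

# Enable IPv4 forwarding in the kernel, persistently.
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf

# Masquerade container traffic leaving via the primary NIC, so its
# source IP becomes the vNIC's IP.
sudo iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
```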

TheRovinRogue
  • Option 2 is similar to the Docker network type "bridge". However, this is not what I need. I want direct access from the macvlan network to the physical network via the host NIC. In a non-cloud environment this is possible. I am trying to achieve the same in GCP. – gonuladami Jan 06 '20 at 16:44
  • Do your requirements pertain to having the source IP be the macvlan container's IP? – TheRovinRogue Jan 06 '20 at 17:53
  • 1
    Something which may work for you would involve using a GKE cluster. When creating the cluster, it would need to be set as VPC native. WIth VPC native: “[Container] IP addresses are natively routable within the cluster's VPC network and other VPC networks connected to it by VPC Network Peering.” This means that you could preserve the source IP while the traffic remains attached to the VPC. If you also require the container creation to be done by macvlan+Docker, then this may not work. There is currently no other method where a container’s source IP can be preserved and be routable. – TheRovinRogue Jan 06 '20 at 19:09
  • I was reading the technical docs of GKE networking but did not find anything about direct routing of container IPs. In the GKE design there are Pods, each of which may have multiple containers in it. The services in the containers are exposed via the Pod IP using DNAT. However, there is no way to reach the IP address of an individual container in a Pod; e.g. to access a service offered by a particular container in a Pod, I have to access PodIP:ServicePort, which will forward me to the right container. What I want is to reach the container behind a Pod on a Node directly by its IP address. – gonuladami Jan 15 '20 at 10:08
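For completeness, the VPC-native cluster mentioned in the comments would be created along these lines; the cluster name and zone are placeholders:

```shell
# VPC-native (alias IP) clusters give Pods addresses that are
# routable within the VPC; --enable-ip-alias turns this on.
gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --enable-ip-alias
```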