
I need to access, from another GCP region, an internal application running behind an Nginx Ingress on GKE that is exposed through an Internal Load Balancer.

I am fully aware that this is not possible with direct Google networking and that it is a significant limitation (there is a GCP feature request for it).

The Internal Load Balancer can be accessed perfectly well via a VPN tunnel from AWS, but I am not sure that creating such a tunnel between GCP regions within the same network is a good idea.

Workarounds are welcomed!

Meir Tseitlin
  • Btw, I hit the same limitation of the Internal LB and found no better solution than exposing multi-region services as NodePort services. An even better idea is to proxy such services through an Nginx Ingress, which is also NodePort. I know this is not what you want, but maybe it can help. – Vasili Angapov Apr 21 '19 at 04:01
  • This is exactly what I am doing for part of the services - the problem with this approach is that internal node IP addresses change periodically (mostly when nodes are upgraded). What I currently did was create a Cloud DNS entry with the node IPs, but I am still thinking about how to automatically update the IPs when nodes are upgraded or replaced. Ideas? – Meir Tseitlin Apr 21 '19 at 10:18
  • What I did was write a Python script that gets all node IP addresses using the Kube API and creates a DNS record in Google Cloud with all those IPs. The script runs every 5 minutes as a cron job. It does require access to Kube via a service account and to Google Cloud via another service account. – Vasili Angapov Apr 24 '19 at 17:46
  • @VasilyAngapov - good job! Can you post your script somewhere? – Meir Tseitlin Apr 29 '19 at 20:36
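For reference, a minimal sketch of the cron-driven sync script described in the comments above. This is illustrative only: the managed-zone name and record name are placeholders, and it assumes the official `kubernetes` and `google-cloud-dns` Python packages plus service accounts for both APIs.

```python
def internal_ips(nodes):
    """Extract the InternalIP of every node.

    Expects node objects converted to dicts (e.g. via the kubernetes
    client's ``to_dict()``), each with status.addresses entries.
    """
    ips = []
    for node in nodes:
        for addr in node["status"]["addresses"]:
            if addr["type"] == "InternalIP":
                ips.append(addr["address"])
    return sorted(ips)


def sync_record(zone, name, ips, ttl=300):
    """Replace the A record `name` in `zone` with the given IPs."""
    changes = zone.changes()
    # Delete any existing A record for this name, then add the new set.
    for record in zone.list_resource_record_sets():
        if record.name == name and record.record_type == "A":
            changes.delete_record_set(record)
    changes.add_record_set(zone.resource_record_set(name, "A", ttl, ips))
    changes.create()  # applies delete + add as one atomic change


def main():
    """Intended to run as a Kubernetes CronJob every ~5 minutes."""
    from kubernetes import client, config
    from google.cloud import dns

    config.load_incluster_config()  # in-cluster service account
    nodes = [n.to_dict() for n in client.CoreV1Api().list_node().items]

    zone = dns.Client().zone("internal-zone")  # placeholder zone name
    sync_record(zone, "nodes.internal.example.com.",  # placeholder record
                internal_ips(nodes))
```

A CronJob manifest would then invoke `main()` on a schedule; the Cloud DNS `changes.create()` call replaces the whole record set in one step, so a node that disappears is dropped from DNS on the next run.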

3 Answers


In the release notes from GCP, it is stated that:

Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC to access the internal TCP/UDP Load Balancer IP address.

Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".

UPDATE: The Service below works on GKE v1.16.x and newer:

apiVersion: v1
kind: Service
metadata:
  name: ilb-global
  annotations:
    # Required to assign internal IP address
    cloud.google.com/load-balancer-type: "Internal"
    
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP

For GKE v1.15.x and older versions:

Accessing the internal load balancer IP from a VM in a different region will not work out of the box, but the following steps make the internal load balancer globally accessible.

Since an internal load balancer is essentially just a forwarding rule, we can use the gcloud command to enable global access on it.

  1. First, get the internal IP address of the load balancer using kubectl, and note its IP as shown below:

    # COMMAND:
    kubectl get services/ilb-global
    
    # OUTPUT:
    NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    ilb-global     LoadBalancer   10.0.12.12   10.123.4.5    80:32400/TCP   18m
    

    Note the value of EXTERNAL-IP, or run the command below to extract it directly:

    # COMMAND:
    kubectl get service/ilb-global \
      -o jsonpath='{.status.loadBalancer.ingress[].ip}'
    
    # OUTPUT:
    10.123.4.5
    
  2. GCP gives a randomly generated ID to the forwarding rule created for this Load Balancer. If you have multiple forwarding rules, use the following command to figure out which one is the internal load balancer you just created:

    # COMMAND:
    gcloud compute forwarding-rules list | grep 10.123.4.5
    
    # OUTPUT
    NAME                              REGION       IP_ADDRESS      IP_PROTOCOL  TARGET
    a26cmodifiedb3f8252484ed9d0192    asia-south1  10.123.4.5      TCP          asia-south1/backendServices/a26cmodified44904b3f8252484ed9d019
    

    NOTE: If you are not working on Linux or grep is not installed, simply run gcloud compute forwarding-rules list and manually look for the forwarding rule with the IP address we are looking for.

  3. Note the name of the forwarding rule and run the following command to update it with --allow-global-access (remember to add beta, as this was still a beta feature at the time):

    # COMMAND:
    gcloud beta compute forwarding-rules update a26cmodified904b3f8252484ed9d0192 \
    --region asia-south1 --allow-global-access
    
    # OUTPUT:
    Updated [https://www.googleapis.com/compute/beta/projects/PROJECT/regions/REGION/forwardingRules/a26hehemodifiedhehe490252484ed9d0192].
    

And it's done. Now you can access this internal IP (10.123.4.5) from any instance in any region (within the same VPC network).
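To sanity-check the change, one option is to read back the forwarding rule's allowGlobalAccess field and then curl the ILB from a VM in a different region (the rule name, region, and IP below are the example values from the steps above):

```shell
# Confirm the forwarding rule now allows global access:
gcloud compute forwarding-rules describe a26cmodified904b3f8252484ed9d0192 \
  --region asia-south1 --format="value(allowGlobalAccess)"

# Then, from a VM in any other region of the same VPC network:
curl -m 5 http://10.123.4.5/
```

If the describe command prints True and the curl returns your application's response, global access is working.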

Amit Yadav

Another possible approach is to run an Nginx reverse proxy on a Compute Engine instance in the same region as the GKE cluster, and use the internal IP of that instance to reach the GKE services from other regions.
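For instance, a minimal nginx config for such a proxy VM might look like this (the upstream address is a placeholder for the ILB or NodePort address of the GKE service):

```nginx
# /etc/nginx/conf.d/gke-proxy.conf - illustrative sketch
server {
    listen 80;

    location / {
        # Placeholder: internal address of the GKE service (ILB or NodePort)
        proxy_pass http://10.123.4.5:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

VMs in other regions would then target the proxy instance's internal IP, which (unlike a regional ILB forwarding rule) is reachable across regions within the same VPC.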


First of all, note that the only way to connect to any GCP resource (in this case your GKE cluster) from an on-premises location is through either a Cloud Interconnect or a VPN setup, and the two endpoints must be in the same region and VPC to be able to communicate with each other.

Having said that, I see you would rather not do that under the same VPC, so a workaround for your scenario could be:

Galo
    This answer is totally unrelated to what was asked. I was not asking to connect from on-prem location - just from another GCP region. I explicitly asked to create Internal Load Balancer (because we don't want any resource to have public IP). – Meir Tseitlin Apr 29 '19 at 20:34
  • I'm sorry, I thought you meant to connect the ILB of your GKE with your on-prem, since your application is running on GKE Nginx Ingress service. Though, the concept is the same. When you use an ILB (in GKE or not) it's a must to use the same VPC and region, with or without a VPN Tunnel. – Galo May 01 '19 at 23:35
  • That's why I was asking for a workaround for this limitation. I.e. I am able to access ILB from AWS via VPN tunnel without a problem. I am also able to access ILB via NAT. Maybe there is an easy workaround to allow something like that between different GCP regions. – Meir Tseitlin May 05 '19 at 00:11
  • Well, another workaround that occurs to me could be: if they are on the same network but in a different region, you can look into using a proxy VM which can then reach the ILB (just make sure the VM is in the same region as the ILB); if they are in a different region and VPC, you will need to use an external entry point. – Galo May 06 '19 at 22:07