
I have two GKE clusters in the same GCP project and VPC network, but each sitting in its own subnet.

I am attempting to determine why pods in Cluster A have connectivity issues reaching an Internal LB that fronts Cluster B.

I created a firewall rule to allow traffic coming into the VPC network from the alias IP range auto-assigned by GCP at cluster creation, and the firewall logs say the connections have been allowed. However, from temporary pods spun up inside Cluster A with kubectl run -it --rm --restart=Never alpine --image=alpine sh, wget to the LB gives connection-related errors.
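
For reference, this is roughly the setup described above. The rule name, network name, and the 10.32.0.0/14 pod range are placeholders (the actual CIDR is the secondary range GCP assigned to Cluster A), so adjust to the real values:

    # Allow Cluster A's pod alias range to reach the LB ports, with rule logging enabled
    gcloud compute firewall-rules create allow-cluster-a-pods \
        --network=my-vpc \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:5672,tcp:15672 \
        --source-ranges=10.32.0.0/14 \
        --enable-logging

    # Temporary pod in Cluster A used for the test
    kubectl run -it --rm --restart=Never alpine --image=alpine sh

    # Inside the pod (busybox wget), hitting the ILB address from the logs below
    wget -qO- -T 5 http://10.150.12.41:15672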

Here are some snippets of the firewall logs that are generated (Cluster A pod --> Cluster B Internal LB):

connection: {
   dest_ip: "10.150.12.41"    
   dest_port: 15672    
   protocol: 6    
   src_ip: "10.32.0.22"    
   src_port: 46124    
  }
  disposition: "ALLOWED" 
connection: {
   dest_ip: "10.150.12.41"    
   dest_port: 5672    
   protocol: 6    
   src_ip: "10.32.0.21"    
   src_port: 35602    
  }
  disposition: "ALLOWED"   
    If two cluster are in different region, then you might want to try this: https://stackoverflow.com/questions/55777939/accessing-gcp-internal-load-balancer-from-another-region/59658742#59658742 – RammusXu Aug 07 '20 at 07:43
  • @Anew Koo, did you manage to resolve your issue ? – mario Aug 13 '20 at 08:34
  • @mario I haven't yet, an updated question can be found here: https://stackoverflow.com/questions/63384893/gke-pod-to-another-cluster-internal-loadbalancer-communication-via-tcp – Anew Koo Aug 13 '20 at 13:45
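
Building on RammusXu's comment above: if the clusters turn out to be in different regions, the linked answer points at enabling global access on the internal LB. On GKE that is done with a Service annotation; the sketch below is only an assumption of what such a Service could look like (the name, selector, and the port list inferred from the logs are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: internal-lb                # placeholder name
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
        # Only needed when clients sit in a different region than the LB
        networking.gke.io/internal-load-balancer-allow-global-access: "true"
    spec:
      type: LoadBalancer
      selector:
        app: my-app                    # placeholder selector
      ports:
        - name: amqp
          port: 5672
        - name: management
          port: 15672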

0 Answers