I have two GKE clusters in the same GCP project and VPC network, each in its own subnet.
I am trying to determine why pods in Cluster A have connectivity issues reaching an Internal LB that sits in Cluster B.
I created a firewall rule to allow traffic coming into the VPC network from the alias IP range auto-assigned by GCP at cluster creation, and the firewall logs say the connections have been allowed. However, temporary pods spun up in Cluster A via kubectl run -it --rm --restart=Never alpine --image=alpine sh get connection errors when running wget against the LB.
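For reference, the rule and the test looked roughly like the following; the rule name, VPC name, and source range are placeholders rather than my actual values, and the LB IP/ports are taken from the logs below:

# firewall rule allowing traffic from Cluster A's pod (alias) IP range into the VPC
# (allow-cluster-a-pods, my-vpc, and 10.32.0.0/14 are placeholder names/ranges)
gcloud compute firewall-rules create allow-cluster-a-pods \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5672,tcp:15672 \
  --source-ranges=10.32.0.0/14 \
  --enable-logging

# temporary test pod in Cluster A
kubectl run -it --rm --restart=Never alpine --image=alpine sh

# from inside the pod, against the Cluster B internal LB
wget -qO- -T 5 http://10.150.12.41:15672
wget -qO- -T 5 http://10.150.12.41:5672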
Here are snippets of the firewall logs generated for Cluster A pod --> Cluster B Internal LB traffic:
connection: {
  dest_ip: "10.150.12.41"
  dest_port: 15672
  protocol: 6
  src_ip: "10.32.0.22"
  src_port: 46124
}
disposition: "ALLOWED"

connection: {
  dest_ip: "10.150.12.41"
  dest_port: 5672
  protocol: 6
  src_ip: "10.32.0.21"
  src_port: 35602
}
disposition: "ALLOWED"