
The following link https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters describes setting up a private GKE cluster in a separate custom VPC. The Terraform code that creates the cluster and VPCs is available at https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf. Cluster creation completed, and I wanted to run some kubectl commands from the Google Cloud Shell. I used the following commands:

$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
$ gcloud container clusters update mservice-dev-cluster \
>     --region europe-west2 \
>     --enable-master-authorized-networks \
>     --master-authorized-networks "35.241.216.229/32"
Updating mservice-dev-cluster...done.
ERROR: (gcloud.container.clusters.update) Operation [<Operation
clusterConditions: []
detail: u'Patch failed'

$ gcloud container clusters update mservice-dev-cluster \
>     --region europe-west2 \
>     --enable-master-authorized-networks \
>     --master-authorized-networks "172.17.0.2/32"
Updating mservice-dev-cluster...done.
Updated [https://container.googleapis.com/v1/projects/protean-XXXX/zones/europe-west2/clusters/mservice-dev-cluster].
To inspect the contents of your cluster, go to:
https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west2/mservice-dev-cluster?project=protean-XXXX

$ kubectl config current-context
gke_protean-XXXX_europe-west2_mservice-dev-cluster

$ kubectl get services
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout

When I give the public IP of the Cloud Shell, the update fails with the error message shown above, saying that the public IP is not allowed. If I give the internal IP of the Cloud Shell, starting with 172, the connection times out as well. Any thoughts? Appreciate the help.
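For reference, the 172.16.0.2 address kubectl is dialling comes straight from the kubeconfig that get-credentials wrote; checking that address and the Cloud Shell's own IPs looks roughly like this (just a sketch, output will differ):

# API server address written into the kubeconfig by get-credentials
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Cloud Shell's internal address (the 172.17.x.x one) and its public egress IP
hostname -I
curl -s ifconfig.me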

2 Answers


Google suggests creating a VM within the same network as the cluster, SSHing into it from the Cloud Shell, and running kubectl commands from there: https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
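A rough sketch of that approach, assuming a VM called kubectl-proxy and placeholder network/subnet names (substitute whatever the Terraform code actually created):

# Create a small VM in the cluster's VPC/subnet (names here are hypothetical)
gcloud compute instances create kubectl-proxy \
    --zone europe-west2-a \
    --network mservice-vpc \
    --subnet mservice-subnet

# SSH into the VM from Cloud Shell
gcloud compute ssh kubectl-proxy --zone europe-west2-a

# On the VM: install kubectl, fetch credentials (the private endpoint is used
# since the cluster exposes no public endpoint), and run kubectl
sudo apt-get install -y kubectl
gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
kubectl get services

Note that the VM's subnet range may still need to be included in --master-authorized-networks for the API server to accept the connection.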

NeonBoxx

Try performing the following:

gcloud container clusters get-credentials [CLUSTER_NAME]

And confirm that kubectl is using the right credentials:

gcloud auth application-default login
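If it helps, the active account and the contexts the credentials were written into can be listed like this (just a sketch):

gcloud auth list
kubectl config get-contexts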
Luke
  • I have the exact same issue happening, and it's nothing to do with being authenticated or expired tokens or credentials – Madu Alikor Mar 19 '19 at 13:21
  • I've noticed that you have `enable_private_endpoint = true` configured. I suggest changing it to false; then you should be able to access the cluster with the public IP of the Cloud Shell. – Adrian nieto macias Mar 26 '19 at 11:39
  • I need it private, hence the private endpoint has been enabled. – R Thottuvaikkatumana Mar 28 '19 at 14:39
  • Thanks for the clarification. Since you need the private endpoint enabled, you will only be able to run kubectl commands from machines that are in the [same VPC](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_master) as the private GKE cluster. You are not able to access your cluster because the Cloud Shell is not part of your project's VPC. – Adrian nieto macias Mar 29 '19 at 12:46
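A quick way to confirm that the cluster only exposes a private endpoint, and which networks are currently authorized, is something like this (using the cluster name and region from the question):

gcloud container clusters describe mservice-dev-cluster \
    --region europe-west2 \
    --format="yaml(privateClusterConfig, masterAuthorizedNetworksConfig)"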