
I am currently test-driving Google Container Engine (GKE) and Kubernetes as a possible replacement for AWS/Elastic Beanstalk deployment. My understanding was that, by virtue of my dynamic servers being in the same project as the Cloud SQL instance, they would naturally be covered by that project's firewall rules. However, this appears not to be the case. My app servers and SQL server are in the same availability zone, and I have both IPv4 and IPv6 enabled on the SQL server.

I don't want to statically assign IP addresses to cluster members that are themselves ephemeral, so I'm looking for guidance on how to properly enable SQL access for my Docker-based app hosted inside GKE. As a stopgap, I've added the ephemeral IPs of the container cluster nodes to the instance's authorized networks, and that has let me use Cloud SQL, but I'd really like a more seamless way of handling this in case my nodes ever get new IP addresses.
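For concreteness, the stopgap can be scripted against the Cloud SQL Admin API instead of clicked through the console. Here is a minimal sketch in Go; the project name, instance name, and node IPs are placeholders, and everything else in the instance's settings is left untouched:

```go
package main

import (
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	sqladmin "google.golang.org/api/sqladmin/v1beta4"
)

func main() {
	ctx := context.Background()

	// Application Default Credentials; needs the sqlservice.admin scope.
	client, err := google.DefaultClient(ctx, sqladmin.SqlserviceAdminScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := sqladmin.New(client)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder names and node IPs.
	project, instance := "my-project", "my-sql-instance"
	nodeIPs := []string{"104.154.0.1", "104.154.0.2"}

	// Fetch the current settings so the patch keeps everything else intact.
	db, err := svc.Instances.Get(project, instance).Do()
	if err != nil {
		log.Fatal(err)
	}
	if db.Settings.IpConfiguration == nil {
		db.Settings.IpConfiguration = &sqladmin.IpConfiguration{}
	}

	// Replace the authorized networks with the current node IPs.
	var acls []*sqladmin.AclEntry
	for _, ip := range nodeIPs {
		acls = append(acls, &sqladmin.AclEntry{Value: ip})
	}
	db.Settings.IpConfiguration.AuthorizedNetworks = acls

	if _, err := svc.Instances.Patch(project, instance, db).Do(); err != nil {
		log.Fatal(err)
	}
	log.Println("authorized networks updated")
}
```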

Peter Grace

2 Answers


The current recommendations (SSL or HAProxy) are discussed in [1]. We are working on a client proxy that will use service accounts to authenticate to Cloud SQL.

[1] Is it possible to connect to Google Cloud SQL from a Google Managed VM?

Razvan Musaloiu-E.
  • Thanks, I will keep an eye out for the client proxy. – Peter Grace Sep 30 '15 at 18:21
  • Some documentation about using the proxy from Kubernetes is now available here: https://github.com/GoogleCloudPlatform/cloudsql-proxy/#to-use-from-kubernetes – dlorenc Mar 14 '16 at 18:41

Sadly, this is currently the only way to do this. A better option would be to write a controller that dynamically examines the managed instance group created by GKE and automatically updates the IP addresses via the Cloud SQL API. But I agree the integration should be more seamless.
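For anyone who wants to attempt that before official support exists, here is a rough Go sketch of the polling half of such a controller. The project, zone, and node-name prefix are placeholders (GKE names its nodes after the cluster), pagination and retries are ignored, and the actual update would be the same Instances.Patch call sketched in the question:

```go
package main

import (
	"log"
	"reflect"
	"sort"
	"time"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	compute "google.golang.org/api/compute/v1"
)

// Placeholder names: GKE nodes are GCE instances named after the cluster,
// e.g. gke-my-cluster-<pool>-<id>.
const (
	project    = "my-project"
	zone       = "us-central1-f"
	nodePrefix = "gke-my-cluster"
)

// nodeIPs returns the sorted external NAT IPs of the cluster's nodes.
func nodeIPs(svc *compute.Service) ([]string, error) {
	list, err := svc.Instances.List(project, zone).
		Filter("name eq " + nodePrefix + ".*").Do()
	if err != nil {
		return nil, err
	}
	var ips []string
	for _, inst := range list.Items {
		for _, iface := range inst.NetworkInterfaces {
			for _, ac := range iface.AccessConfigs {
				if ac.NatIP != "" {
					ips = append(ips, ac.NatIP)
				}
			}
		}
	}
	sort.Strings(ips)
	return ips, nil
}

func main() {
	ctx := context.Background()
	client, err := google.DefaultClient(ctx, compute.ComputeReadonlyScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := compute.New(client)
	if err != nil {
		log.Fatal(err)
	}

	var last []string
	for {
		ips, err := nodeIPs(svc)
		if err != nil {
			log.Print(err)
		} else if !reflect.DeepEqual(ips, last) {
			// The node set changed: push the new IPs into the Cloud SQL
			// instance's authorized networks (the Instances.Patch call
			// from the sketch in the question).
			log.Printf("node IPs changed: %v", ips)
			last = ips
		}
		time.Sleep(time.Minute)
	}
}
```

Run in-cluster or on a small management VM with credentials carrying both the compute read-only and Cloud SQL admin scopes, something like this approximates the missing integration until an official mechanism ships.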

Brendan Burns