4

I'm running Kubernetes in a GKE cluster and need to run a DB migration script on every deploy. For staging this is easy: we have a permanent, separate MySQL service with its own volume. For production, however, we use Google Cloud SQL, so the Job ends up with two containers: one for the migration and one for the Cloud SQL Proxy.

Because of this second container, the Job always shows as 1 active when I run kubectl describe jobs/migration, and I'm at a complete loss. I have tried re-ordering the containers to see if it checks one by default, but that made no difference, and I cannot see a way to either a) kill a container or b) check the status of just one container inside the Job.
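Roughly, the Job spec looks like this (the image names, tag, and instance connection string are placeholders, and the proxy's credentials setup is left out):

apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      # Runs the migration and exits
      - name: migrate
        image: application:version
        command: ["migrate", "up"]
      # Sidecar: opens the tunnel to Cloud SQL and never exits on its own
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy", "-instances=PROJECT:REGION:INSTANCE=tcp:3306"]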

Any ideas?

Cœur
  • 37,241
  • 25
  • 195
  • 267
J Young
  • 755
  • 1
  • 10
  • 26
  • Hi - you might want to add Deployment/Pod configuration and the kubectl describe outputs to your question to get better answers. – pagid Feb 08 '17 at 18:52
  • Possible duplicate of [Kubernetes: stop CloudSQL-proxy sidecar container in multi container Pod/Job](https://stackoverflow.com/questions/41679364/kubernetes-stop-cloudsql-proxy-sidecar-container-in-multi-container-pod-job) – purplexa Oct 17 '17 at 18:01

4 Answers

3

I know it's a year too late, but best practice would be to run a single Cloud SQL Proxy service for all of the app's DB access, and then configure DB access in the app's image to use that service as the DB hostname.

This way you will not need to put a Cloud SQL Proxy container into every pod that uses the DB.
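A minimal sketch of such a shared proxy Deployment and Service (the apiVersion, image tag, and instance connection name are examples, and the proxy's service-account credentials are omitted); note the proxy must listen on 0.0.0.0 rather than its default 127.0.0.1 so that other pods can reach it through the Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        # Listen on all interfaces so the Service can route traffic to the proxy
        command: ["/cloud_sql_proxy", "-instances=PROJECT:REGION:INSTANCE=tcp:0.0.0.0:3306"]
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector:
    app: cloudsql-proxy
  ports:
  - port: 3306
    targetPort: 3306

Apps (and the migration Job) then simply use cloudsql-proxy as the DB hostname.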

  • Never too late to give the right answer. Moving the `cloudsql-proxy` to its own deployment/service solved the problem for me. – Overbryd Mar 22 '18 at 11:04
  • 1
    this is not recommended setup on GKE, as Google Support says. – bartimar Oct 22 '18 at 08:40
  • One of the reasons why you should think twice about this is because traffic between your application and the Cloud SQL proxy is not encrypted, so it's iffy security-wise if you send data over the network. You could, however, set up a service mesh that provides encryption. In that case my guess would be that things should work quite well (assuming you won't run in scalability issues) – FrederikDS Feb 01 '22 at 14:20
1

The reason is the container/process never terminates.

One possible workaround is to move the cloud-sql-proxy to its own Deployment and put a Service in front of it. Your Job is then no longer responsible for running the long-running cloud-sql-proxy and will terminate / complete.
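A rough sketch of the resulting Job, assuming the proxy is reachable through a Service named cloudsql-proxy and that the migration tool reads the DB host and port from environment variables (all of these names are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: application:version
        command: ["migrate", "up"]
        env:
        - name: DB_HOST
          value: cloudsql-proxy   # the Service in front of the proxy Deployment
        - name: DB_PORT
          value: "3306"

With only the short-lived migration container in the Pod, the Job is marked Completed as soon as the migration exits.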

Chris Stryczynski
  • 30,145
  • 48
  • 175
  • 286
  • As a result, I created only a Service for the proxy and linked its port to an existing deployment container running cloud-sql-proxy on the same node. – Verter Mar 30 '23 at 10:28
0

Each Pod can be configured with an init container, which seems to be a good fit for your issue. So instead of having a Pod with two containers that have to run permanently, you could define an init container to do your migration upfront. E.g. like this:

apiVersion: v1
kind: Pod
metadata:
  name: init-container
spec:
  # The init container runs to completion before the main container starts
  initContainers:
  - name: migrate
    image: application:version
    command: ["migrate", "up"]
  containers:
  - name: application
    image: application:version
    ports:
    - containerPort: 80
pagid
  • 13,559
  • 11
  • 78
  • 104
  • no, this doesn't address the issue at all. In order for the migration to run, the second container needs to exist. The second container makes the database connection, the first runs the migration code. This fact prevents the init container from being used where otherwise it would make sense. – Ben Sep 20 '19 at 20:42
0

You haven't posted enough details about your specific problem. But I'm taking a guess based on experience.

TL;DR: Move your containers into separate jobs if they are independent.

--

Kubernetes Jobs keep restarting their Pod until the Job succeeds, and a Kubernetes Job succeeds only if every container inside the Pod succeeds.

This means your containers should behave in a restart-proof way: once a container has run successfully, it should report success even if it is run again. Otherwise, say container1 succeeds and container2 fails. The Job restarts, and now container1 fails (because its work has already been done). Hence the Job keeps restarting.
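As a sketch of what "restart-proof" means in practice (the migrate command is a stand-in for whatever migration tool you use; most such tools are no-ops when the schema is already up to date):

apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: migrate
        image: application:version
        # Safe to re-run: should exit 0 again if the migrations were already applied
        command: ["migrate", "up"]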

iamnat
  • 4,056
  • 1
  • 23
  • 36
  • Fair point! I wasn't entirely sure how to fully describe the problem other than that the migration container runs fine, but the Cloud SQL Proxy container is long-lived: it never fails or succeeds. The migration relies on the proxy container to communicate with the DB, but it may indeed work to split them into separate jobs; they should still be able to communicate at the service level. – J Young Feb 09 '17 at 06:39