I'm seeing odd behavior from the Cloud SQL Proxy when I run it as a sidecar in a deployment on my Kubernetes cluster. In short, it keeps closing the client's connection and immediately opening a new one, without ever raising a fatal exception!
My Deployment
I have a Deployment with two containers: (1) a Spring Boot app and (2) the Cloud SQL Proxy. I use the proxy to reach a database in a different GCP project (I have my reasons). All requests to the services exposed by this deployment work fine, but the logs keep showing the proxy closing the connection and then establishing it again!
My deployment YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
        log_forwarding: "true"
    spec:
      imagePullSecrets:
        - name: artifactory-secret
      nodeSelector:
        apps: run
      containers:
        - name: db-proxy
          image: my-artifactory/cloudsql-docker/gce-proxy:1.17
          command:
            - "/cloud_sql_proxy"
            - "-instances=project:europe-north1:slm-preview=tcp:5432"
            - "-credential_file=/secrets/service_account.json"
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: sql-proxy-sa-secret
              mountPath: /secrets/
              readOnly: true
        - image: my-artifactory/my-app/app:dev-c3235e9bf3473e61cb3c496e4fb2a69f4f54b07f
          imagePullPolicy: Always
          name: my-app
          securityContext:
            runAsNonRoot: true
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: gcp_dev
            - name: SPRING_CONFIG_LOCATION
              value: file:/config-repo/application.yml,file:/config-repo/core-service.yml
          envFrom:
            - secretRef:
                name: db-sercret
          ports:
            - containerPort: 8001
              protocol: TCP
          resources:
            limits:
              ephemeral-storage: "1Gi"
              memory: 1Gi
            requests:
              ephemeral-storage: "1Gi"
              memory: 1Gi
          livenessProbe:
            failureThreshold: 20
            httpGet:
              path: /actuator/info
              port: 8001
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 2
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /actuator/health
              port: 8001
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 30
          # terminationMessagePath: /dev/termination-log
          # terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /config-repo
              name: config-repo-volume
      volumes:
        - name: sql-proxy-sa-secret
          secret:
            secretName: sa-sql-user
        - configMap:
            defaultMode: 420
            name: my-app-config
          name: config-repo-volume
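For completeness: the app's datasource pool is just the Spring Boot default (HikariCP); I haven't tuned it. Here's a sketch of what the effective settings look like (the URL and values below are illustrative placeholders, not copied from my config-repo):

```yaml
# Illustrative sketch of the effective datasource config (placeholder values).
# The app reaches Postgres through the proxy sidecar on localhost:5432.
spring:
  datasource:
    url: jdbc:postgresql://127.0.0.1:5432/mydb   # hypothetical DB name
    hikari:
      maximum-pool-size: 10    # HikariCP default
      max-lifetime: 1800000    # HikariCP default (30 min); expired connections are closed and replaced
```

If HikariCP retiring connections at `max-lifetime` is what the proxy is logging, that could explain a close/reopen pattern with no exception, but I haven't confirmed this.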
What I'm looking for
I'm trying to stop the connection from being torn down and re-established thousands of times a day. I've researched whether I can force the proxy to keep the connection alive instead of resetting it, but found nothing!
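To put a number on the churn, I count the proxy's "new connection" messages. Here is a sketch that runs the same filter I use on the real sidecar log against sample lines (the sample lines are paraphrased from memory, not an exact capture; the real command is in the comment):

```shell
# Count reconnects in a sample of proxy-style log lines (paraphrased format).
# Against the real sidecar I run:
#   kubectl logs -n my-namespace deploy/my-app -c db-proxy --since=24h | grep -c "New connection"
printf '%s\n' \
  'New connection for "project:europe-north1:slm-preview"' \
  'Client closed local connection on 127.0.0.1:5432' \
  'New connection for "project:europe-north1:slm-preview"' \
  | grep -c "New connection"
# → prints 2
```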
I'd appreciate any help!