
As a stepping stone to a more complicated problem, I have been following this example step by step: https://blog.gopheracademy.com/advent-2017/kubernetes-ready-service/. The next step I am trying to learn is deploying the Golang service with a Helm chart instead of a Makefile. Specifically, I am trying to convert this deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
      - name: {{ .ServiceName }}
        image: docker.io/<my Dockerhub name>/{{ .ServiceName }}:{{ .Release }}
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8000
        resources:
          limits:
            cpu: 10m
            memory: 30Mi
          requests:
            cpu: 10m
            memory: 30Mi
      terminationGracePeriodSeconds: 30

to a Helm deployment.yaml that looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8000
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8000
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
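
For reference, a values.yaml that satisfies this template would look roughly like the following (the image repository, tag, and resource numbers here are illustrative placeholders rather than my exact values):

replicaCount: 3

image:
  repository: docker.io/<my Dockerhub name>/myservice   # illustrative name
  pullPolicy: Always
  tag: ""        # empty means the template falls back to .Chart.AppVersion

imagePullSecrets: []
podAnnotations: {}
podSecurityContext: {}
securityContext: {}

serviceAccount:
  create: true   # assumed to be what the mychart.serviceAccountName helper reads
  name: ""

autoscaling:
  enabled: false

resources:
  limits:
    cpu: 10m
    memory: 30Mi
  requests:
    cpu: 10m
    memory: 30Mi

nodeSelector: {}
tolerations: []
affinity: {}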

However, when I run the Helm chart, the probes (which work perfectly fine when not using Helm) fail. Specifically, when describing the pod I get the error "Warning Unhealthy 16s (x3 over 24s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503". I have obviously set up the probes wrong in the Helm chart. How do I convert these probes from one system to the other?

Johnney
  • are you using same image? with same port? – Adiii Oct 12 '22 at 16:34
  • This template looks fine to me. The error means the health check did hit a web server that responded, but it responded with a 503. Check your application's error logs. – jordanm Oct 12 '22 at 17:25
  • Yes, I was using same image and port, and did get rid of the other service that was running on the ports – Johnney Oct 12 '22 at 18:12

1 Answer


The solution I found was that the probes in the Helm chart needed initial time delays. When I replaced

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000

with

livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

Because the probes were running before the container had fully started, they failed immediately; the initial delay gives the service time to start up before the first check.
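
As a further refinement, the delay does not have to be hard-coded in the template; the whole probe block can be read from values.yaml instead (a sketch, assuming you add livenessProbe and readinessProbe keys to your values, similar to what recent helm create scaffolds generate):

# values.yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

and in templates/deployment.yaml, inside the container spec:

          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}

This keeps the delays tunable per environment (for example via --set or an override file) without editing the template.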

Johnney