I have a Kubernetes cluster deployed with Helm 3.
- I need to access a Kubernetes Job (created by Helm from a YAML template) from another pod while the Job is running.
The kubectl version:
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.6", GitCommit:"d921bc6d1810da51177fbd0ed61dc811c5228097", GitTreeState:"clean", BuildDate:"2021-10-27T17:50:34Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.6", GitCommit:"d921bc6d1810da51177fbd0ed61dc811c5228097", GitTreeState:"clean", BuildDate:"2021-10-27T17:44:26Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
Helm version:
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}
I followed this link: DNS concept.
It works fine for a Pod, but not for a Job.
As explained there, you set hostname and subdomain in the Pod's YAML and add a Service whose name matches the subdomain...
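For reference, the pattern from the linked docs that works for a plain Pod looks roughly like this (a simplified sketch from memory; the names are placeholders, and the docs use a headless Service whose name equals the Pod's subdomain field):

apiVersion: v1
kind: Service
metadata:
  name: pod-subdomain            # must match the Pod's subdomain
spec:
  selector:
    name: my-pod
  clusterIP: None                # headless, as in the docs example
  ports:
    - name: api
      port: 8081
      targetPort: 8081
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    name: my-pod                 # must match the Service selector
spec:
  hostname: pod-hostname
  subdomain: pod-subdomain
  containers:
    - name: main
      image: busybox:1.28
      command: ["sleep", "3600"]

With that in place the Pod resolves as pod-hostname.pod-subdomain.<namespace>.svc.cluster.local.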
- I also need to check whether the Job is running.
For a Pod this is the ready condition:
kubectl wait pod/pod-name --for=condition=ready ...
For a Job there is no ready condition (even while the pod behind it is already running).
How can I check the state of the pod behind the Job (i.e. that the Job is running), and how can I use hostname + subdomain for Jobs?
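For the Job object itself, the only conditions I know of are complete and failed, e.g. (my-job is a placeholder):

kubectl wait job/my-job --for=condition=complete --timeout=120s

but that only succeeds once the Job has finished, not while the pod behind it is merely running.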
My code is below (I removed some security-related tags, but it is otherwise the same; note that it may look a bit complicated).
I create a listener Job that runs and listens, plus a tester Job that needs to run a curl command against it; that only works if the tester can reach the pod behind the listener Job.
Listener (the pod behind the following Job):
What I added is the hostname and subdomain (which work for a Pod, but not for a Job); if the same spec were on a plain Pod there would be no problem.
I also noticed that the name of the Pod created by the Job gets an automatic hash suffix.
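As far as I understand, that pod is named <job-name>-<random-suffix>, and the Job controller also puts a job-name=<job-name> label on it, so the pod should be selectable without knowing the hash, something like (not verified):

kubectl get pods -l job-name=my-project-listener
kubectl wait pod -l job-name=my-project-listener --for=condition=ready --timeout=120s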
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "my-project.fullname" . }}-listener
  namespace: {{ .Release.Namespace }}
  labels:
    name: {{ include "my-project.fullname" . }}-listener
    app: {{ include "my-project.fullname" . }}-listener
    component: {{ .Chart.Name }}
    subcomponent: {{ .Chart.Name }}-listener
  annotations:
    "prometheus.io/scrape": {{ .Values.prometheus.scrape | quote }}
    "prometheus.io/path": {{ .Values.prometheus.path }}
    "prometheus.io/port": {{ .Values.ports.api.container | quote }}
spec:
  template: #PodTemplateSpec (Core/V1)
    spec: #PodSpec (core/v1)
      hostname: {{ include "my-project.fullname" . }}-listener
      subdomain: {{ include "my-project.fullname" . }}-listener-dmn
      initContainers:
        # used twice - could be added in helpers.tpl
        - name: wait-mysql-exist-pod
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_POD_NAME
              value: {{ .Release.Name }}-mysql
            - name: COMPONENT_NAME
              value: {{ .Values.global.mysql.database.name }}
          command:
            - /bin/sh
          args:
            - -c
            - |-
              while [ "$(kubectl get pod $MYSQL_POD_NAME 2>/dev/null | grep $MYSQL_POD_NAME | awk '{print $1;}')" != "$MYSQL_POD_NAME" ]; do
                echo 'Waiting for mysql pod to exist...';
                sleep 5;
              done
        - name: wait-mysql-ready
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_POD_NAME
              value: {{ .Release.Name }}-mysql
          command:
            - kubectl
          args:
            - wait
            - pod/$(MYSQL_POD_NAME)
            - --for=condition=ready
            - --timeout=120s
        - name: wait-mysql-has-db
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          env:
            {{- include "k8s.db.env" . | nindent 12 }}
            - name: MYSQL_POD_NAME
              value: {{ .Release.Name }}-mysql
          command:
            - /bin/sh
          args:
            - -c
            - |-
              while [ "$(kubectl exec $MYSQL_POD_NAME -- mysql -uroot -p$MYSQL_ROOT_PASSWORD -e 'show databases' 2>/dev/null | grep $MYSQL_DATABASE | awk '{print $1;}')" != "$MYSQL_DATABASE" ]; do
                echo 'Waiting for mysql database to be up...';
                sleep 5;
              done
      containers:
        - name: {{ include "my-project.fullname" . }}-listener
          image: {{ .Values.global.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default "latest" }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- include "k8s.db.env" . | nindent 12 }}
            - name: SCHEDULER_DB
              value: $(CONNECTION_STRING)
          command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
          args: # some args ...
          ports:
            - name: api
              containerPort: 8081
          resources:
            limits:
              cpu: 1
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 50Mi
          readinessProbe:
            httpGet:
              path: /api/scheduler/healthcheck
              port: api
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 1
          livenessProbe:
            tcpSocket:
              port: api
            initialDelaySeconds: 120
            periodSeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: {{ include "my-project.fullname" . }}-volume
              mountPath: /etc/test/scheduler.yaml
              subPath: scheduler.yaml
              readOnly: true
      volumes:
        - name: {{ include "my-project.fullname" . }}-volume
          configMap:
            name: {{ include "my-project.fullname" . }}-config
      restartPolicy: Never
The service (for the subdomain):
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-project.fullname" . }}-listener-dmn
spec:
  selector:
    name: {{ include "my-project.fullname" . }}-listener
  ports:
    - name: api
      port: 8081
      targetPort: 8081
  type: ClusterIP
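If the hostname/subdomain mechanism worked for the Job's pod the same way it does for a plain Pod, I would expect it to be reachable at something like (assuming the default cluster.local domain):

my-project-listener.my-project-listener-dmn.<namespace>.svc.cluster.local

or just my-project-listener.my-project-listener-dmn from within the same namespace.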
Role + RoleBinding (to enable access for the kubectl and curl commands):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "my-project.fullname" . }}-role
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list", "update"]
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods/exec"]
    verbs: ["create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"]
  - apiGroups: ["", "app", "batch"] # "" indicates the core API group
    resources: ["jobs"]
    verbs: ["get", "watch", "list"]
Role-Binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "go-scheduler.fullname" . }}-rolebinding
subjects:
  - kind: ServiceAccount
    name: default
roleRef:
  kind: Role
  name: {{ include "go-scheduler.fullname" . }}-role
  apiGroup: rbac.authorization.k8s.io
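To double-check that this binding grants what the init containers need, something along these lines should work (the namespace is a placeholder; the Jobs run under the default ServiceAccount):

kubectl auth can-i get pods --as=system:serviceaccount:<namespace>:default -n <namespace>
kubectl auth can-i create pods/exec --as=system:serviceaccount:<namespace>:default -n <namespace>
kubectl auth can-i get jobs --as=system:serviceaccount:<namespace>:default -n <namespace>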
And finally, a tester Job that runs the curl command (for checking, I put tail -f instead so that I can exec into the pod):
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "my-project.fullname" . }}-test
  namespace: {{ .Release.Namespace }}
  labels:
    name: {{ include "my-project.fullname" . }}-test
    app: {{ include "my-project.fullname" . }}-test
  annotations:
    "prometheus.io/scrape": {{ .Values.prometheus.scrape | quote }}
    "prometheus.io/path": {{ .Values.prometheus.path }}
    "prometheus.io/port": {{ .Values.ports.api.container | quote }}
spec:
  template: #PodTemplateSpec (Core/V1)
    spec: #PodSpec (core/v1)
      initContainers:
        # used twice - could be added in helpers.tpl
        - name: wait-sched-listener-exists
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_NAME
              value: {{ include "my-project.fullname" . }}-listener
          command:
            - /bin/sh
          args:
            - -c
            - |-
              while [ "$(kubectl get job $POD_NAME 2>/dev/null | grep $POD_NAME | awk '{print $1;}')" != "$POD_NAME" ]; do
                echo 'Waiting for scheduler pod to exist ...';
                sleep 5;
              done
        - name: wait-listener-running
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          env:
            - name: POD_NAME
              value: {{ include "my-project.fullname" . }}-listener
          command:
            - /bin/sh
          args:
            - -c
            - |-
              while [ "$(kubectl get pods 2>/dev/null | grep $POD_NAME | awk '{print $3;}')" != "Running" ]; do
                echo 'Waiting for scheduler pod to run ...';
                sleep 5;
              done
      containers:
        - name: {{ include "my-project.fullname" . }}-test
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            - /bin/sh
          args:
            - -c
            - "tail -f"
          # instead of the above, this can be the curl command: "curl -H 'Accept: application/json' -X get my-project-listener.my-project-listener-dmn:8081/api/scheduler/jobs"
      restartPolicy: Never
I exec into the test pod:
kubectl exec -it my-tester-<hash> -- /bin/sh
... and run the command:
ping my-project-listener.my-project-listener-dmn
I get:
ping: bad address 'my-project-listener.my-project-listener-dmn'
When I do the same for a plain Pod (with hostname and subdomain), it works:
PING pod-hostname.pod-subdomain (): ... data bytes
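A possible follow-up check (assuming nslookup is available in the tester image) would be to see whether the Service name itself resolves, to narrow the problem down to the pod-level record:

nslookup my-project-listener-dmn
nslookup my-project-listener.my-project-listener-dmn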