
I have a microservice that I developed and tested using docker-compose. Now I would like to deploy it to Kubernetes.

Part of my docker-compose file looks like this:

  tasksdb:
    container_name: tasks-db
    image: mongo:4.4.1
    restart: always
    ports:
      - '6004:27017'
    volumes:
      - ./tasks_service/tasks_db:/data/db
    networks:
      - backend

  tasks-service:
    container_name: tasks-service
    build: ./tasks_service
    restart: always
    ports:
      - "5004:3000"
    volumes:
      - ./tasks_service/logs:/usr/src/app/logs
      - ./tasks_service/tasks_attachments/:/usr/src/app/tasks_attachments
    depends_on:
      - tasksdb
    networks:
      - backend

I used mongoose to connect to the database and it worked fine:

const connection = "mongodb://tasks-db:27017/tasks";

const connectDb = () => {
  // Connect once, passing the options, and return the promise.
  return mongoose.connect(connection, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false,
  });
};

Using Kompose, I created a deployment file; however, I had to modify the persistent volume and persistent volume claim accordingly.
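
For reference, the conversion command (it is also recorded in the kompose annotations in the manifests below) was:

    kompose convert -f docker-compose.yml -o k8manifest.yml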

I have something like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: tasks-volume
  labels:
    type: local
spec:
  storageClassName: manual
  volumeMode: Filesystem
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.60.50
    path: /tasks_db

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tasksdb-claim0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

I changed the Mongo connection URL like this:

const connection = "mongodb://tasksdb.default.svc.cluster.local:27017/tasks";

My deployment looks like this:

apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasks-service
      name: tasks-service
    spec:
      ports:
        - name: "5004"
          port: 5004
          targetPort: 3000
      selector:
        io.kompose.service: tasks-service
    status:
      loadBalancer: {}
  - apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasksdb
      name: tasksdb
    spec:
      ports:
        - name: "6004"
          port: 6004
          targetPort: 27017
      selector:
        io.kompose.service: tasksdb
    status:
      loadBalancer: {}
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasks-service
      name: tasks-service
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: tasks-service
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
            kompose.version: 1.22.0 (955b78124)
          creationTimestamp: null
          labels:
            io.kompose.service: tasks-service
        spec:
          containers:
            - image: 192.168.60.50:5000/blascal_tasks-service
              name: tasks-service
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 3000
          restartPolicy: Always
    status: {}
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: tasksdb
      name: tasksdb
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: tasksdb
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
            kompose.version: 1.22.0 (955b78124)
          creationTimestamp: null
          labels:
            io.kompose.service: tasksdb
        spec:
          containers:
            - image: mongo:4.4.1
              name: tasks-db
              ports:
                - containerPort: 27017 
              resources: {}
              volumeMounts:
                - mountPath: /data/db
                  name: tasksdb-claim0
          restartPolicy: Always
          volumes:
            - name: tasksdb-claim0
              persistentVolumeClaim:
                claimName: tasksdb-claim0
    status: {}
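
To apply the converted manifest and check that everything comes up, the standard commands are:

    kubectl apply -f k8manifest.yml
    kubectl get pods,svc,pvc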

Since I have several services, I added an Ingress resource for routing:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: tasks-service
          servicePort: 5004
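
Note that networking.k8s.io/v1beta1 is deprecated (and removed in Kubernetes 1.22); on clusters running 1.19 or later, the same Ingress can be written in the v1 form, sketched here with the same paths and backend:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tasks-service
            port:
              number: 5004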

The deployment itself seems to run fine.

However, I have three issues:

  1. Although I can hit my default path, which just returns "tasks service is up", I cannot access the routes that use the database, such as /api/task/raise; those requests fail with "...buffering timed out". I guess the path does not link up to the database service? The tasks-service pod reports the same timeout error.

  2. Whenever there is a power surge and my machine goes off, bringing the db deployment back up fails until I delete the config files from the persistent volume. How do I prevent this corruption of files?

  3. I have been researching how to change the master IP of my cluster, as I intend to move the cluster to a different network. Any guidance, please?

    kubectl logs --namespace=kube-system -l k8s-app=kube-dns

The above command reports errors in the kube-dns logs.
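
One way to debug in-cluster DNS, following the official guide that is also linked in the comments below, is to run the dnsutils pod from the Kubernetes examples and query from inside it:

    kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
    kubectl exec -i -t dnsutils -- nslookup kubernetes.default
    kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

If even kubernetes.default does not resolve, the problem is the cluster DNS itself rather than the tasksdb Service.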

Denn
  • Have you configured the DNS service (coredns or kube-dns) correctly? – menya Jan 25 '21 at 07:48
  • Helpful container networking guide at https://www.digitalocean.com/community/tutorials/how-to-inspect-kubernetes-networking. Check the nameserver status with `nslookup tasksdb.default.svc.cluster.local`. – menya Jan 25 '21 at 07:55
  • So nslookup gives the reply ** server can't find tasksdb.default.svc.cluster: NXDOMAIN. How do I configure it? I didn't know I had to. – Denn Jan 25 '21 at 08:01
  • Looks like your k8s DNS Service is broken. Which DNS service are you using, coredns or kube-dns? – menya Jan 25 '21 at 08:06
  • I am using kube-dns – Denn Jan 25 '21 at 08:07
  • It's hard to pinpoint your kube-dns problem without more detail. Following the official guide may be helpful (https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#check-the-local-dns-configuration-first). – menya Jan 25 '21 at 08:13
  • I have attached a screenshot of the logs – Denn Jan 25 '21 at 08:43
  • I decided to reboot and the logs have cleared. I don't know why, but nslookup tasksdb.default.svc.cluster.local still says ** server can't find tasksdb.default.svc.cluster: NXDOMAIN – Denn Jan 25 '21 at 09:05
  • Maybe you forgot to deploy some yaml files (named `***clusterrole***.yaml` or `***rbac***.yaml`). – menya Jan 25 '21 at 10:37
  • I have reset kubeadm and restarted a couple of times but am not getting any breakthrough. The logs have cleared but no results. Kindly advise. – Denn Jan 25 '21 at 15:39
  • Is there like a step by step tutorial on installing a dns resource? I am on Ubuntu 20.04. – Denn Jan 26 '21 at 08:34
  • Hi @Denn. You can check out one of my answers [here](https://stackoverflow.com/a/65810701/11560878). It shows how to debug and (re)install DNS for k8s. – Wytrzymały Wiktor Jan 26 '21 at 09:26
  • @WytrzymałyWiktor Well, I don't know what exactly I did wrong the first time, but after a fresh install it's now working. Thank you. Now, on the two remaining issues: I was thinking maybe my mongo data PV is corrupt because I set the access mode to ReadWriteMany .. what do you think? Also, about transferring to another network: since I am using ingress, I was thinking of setting it to LoadBalancer and assigning it an IP from the new network. I'll post an answer if it goes fine. Thank you – Denn Jan 28 '21 at 10:25
  • @WytrzymałyWiktor So I am running my cluster on Ubuntu and it keeps crashing; I have to install afresh every time. I am aware there is a known issue with Ubuntu (the resolv.conf file), but I am finding it hard to understand how to solve it. Could you kindly advise me on how to use the right resolv.conf file? – Denn Mar 17 '21 at 14:56

2 Answers


Your tasksdb Service exposes port 6004, not 27017. Try using the following URL:

const connection = "mongodb://tasksdb.default.svc.cluster.local:6004/tasks";
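
You can confirm the Service's port mapping with:

    kubectl describe svc tasksdb

which should list Port 6004 and TargetPort 27017.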

Changing your network depends on which CNI networking plugin you are using; every plugin has different steps. For Calico, please see https://docs.projectcalico.org/networking/migrate-pools

Vasili Angapov
  • So I have tried several times with port 6004 with no breakthrough. Same "..buffering timed out" error. – Denn Jan 24 '21 at 15:34
  • I would also recommend running MongoDB as a StatefulSet https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/ (a minimal sketch follows after these comments) – hdhruna Jan 24 '21 at 15:36
  • I have updated the error. Let me try implementing mongo deployment as a statefulset. – Denn Jan 24 '21 at 16:17
  • It can also be an application code issue. Can you check your connection with a mongo client from the CLI? – Vasili Angapov Jan 24 '21 at 16:34
  • Well I can access the mongodb although it does not have the tasks db which should be created. I don't know what the problem could be – Denn Jan 24 '21 at 17:21
  • I have added the error from the tasks-service pod. – Denn Jan 25 '21 at 06:28
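
For reference, a minimal sketch of the StatefulSet approach suggested in the comments above, assuming the existing io.kompose.service labels, a headless tasksdb Service, and the manual storage class (all names illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tasksdb
spec:
  serviceName: tasksdb          # assumes a headless Service named tasksdb exists
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: tasksdb
  template:
    metadata:
      labels:
        io.kompose.service: tasksdb
    spec:
      containers:
        - name: tasks-db
          image: mongo:4.4.1
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: tasksdb-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: tasksdb-data
      spec:
        storageClassName: manual
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

Each replica gets its own ReadWriteOnce volume from the claim template, which avoids sharing one ReadWriteMany volume between mongod instances — the situation the asker later suspects caused the data corruption.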

I believe this is your ClusterIP Service for the mongodb instance:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yml -o k8manifest.yml
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: tasksdb
  name: tasksdb
spec:
  ports:
    - name: "6004"
      port: 6004
      targetPort: 27017
  selector:
    io.kompose.service: tasksdb
status:
  loadBalancer: {}

When you create an instance of mongodb inside Kubernetes, it runs inside a pod. To connect to that pod, you go through the ClusterIP Service in front of it, and whenever you connect to a ClusterIP Service you use the name of that Service as the host in the connection URL. In this case your connection URL must be:

mongodb://tasksdb:6004/nameOfDatabase
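
One way to test this from inside the cluster, using a throwaway client pod (the image matches the mongo:4.4.1 used in the deployment; the pod name is illustrative):

    kubectl run -it --rm mongo-client --image=mongo:4.4.1 -- mongo mongodb://tasksdb:6004/tasks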

Yilmaz