
I created a Node.js app and I'm trying to connect it to MongoDB on a Kubernetes cluster. The Node.js and MongoDB apps run as separate pods in my cluster.

Both MongoDB and the app are running when I display the status, and I can connect to the MongoDB pod directly and add data:

NAME                                      READY   STATUS    RESTARTS   AGE
my-backend-core-test-5d7b78c9dc-dt4bg     1/1     Running   0          31m
my-frontend-test-6868f7c7dd-b2qtm         1/1     Running   0          40h
my-mongodb-test-7d55dbff74-2m6cm          1/1     Running   0          34m

But when I try to make the connection with this script:

const urlDB = "my-mongodb-service-test.my-test.svc.cluster.local:27017";
console.log("urlDB :: ", urlDB);

mongoose.connect('mongodb://'+urlDB+'/test', { useNewUrlParser: true }).then(() => {
    console.log("DB connected")
}).catch((err)=> {
    console.log("ERROR")
})

I get the following error in my Node.js app:

> my-core@1.0.0 start /usr/src/app
> node ./src/app.js

urlDB ::  my-mongodb-service-test.my-test.svc.cluster.local:27017
ERROR

As explained in the Kubernetes documentation, I'm supposed to communicate between the different pods using service-name.namespace.svc.cluster.local (my-mongodb-service-test.my-test.svc.cluster.local:27017).
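For reference, that in-cluster DNS address can be assembled from its parts like this (a small sketch; serviceUrl is a hypothetical helper, and cluster.local is the default cluster domain, which can differ per cluster):

```javascript
// Hypothetical helper: build the in-cluster DNS address of a Service.
// Assumes the default "cluster.local" cluster domain.
function serviceUrl(service, namespace, port, clusterDomain = "cluster.local") {
  return `${service}.${namespace}.svc.${clusterDomain}:${port}`;
}

console.log(serviceUrl("my-mongodb-service-test", "my-test", 27017));
// my-mongodb-service-test.my-test.svc.cluster.local:27017
```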

The mongo logs show a different host, corresponding to my pod and not to the service. How can I configure the host in my YAML file?

MongoDB logs:


2019-05-24T10:57:02.367+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-05-24T10:57:02.374+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=my-mongodb-test-7d55dbff74-2m6cm
2019-05-24T10:57:02.374+0000 I CONTROL  [initandlisten] db version v4.0.9
2019-05-24T10:57:02.374+0000 I CONTROL  [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765
2019-05-24T10:57:02.374+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-05-24T10:57:02.375+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-05-24T10:57:02.375+0000 I CONTROL  [initandlisten] modules: none
2019-05-24T10:57:02.375+0000 I CONTROL  [initandlisten] build environment:
2019-05-24T10:57:02.375+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-05-24T10:57:02.375+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-05-24T10:57:02.375+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-05-24T10:57:02.375+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "0.0.0.0" } }
2019-05-24T10:57:02.376+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-05-24T10:57:02.377+0000 I STORAGE  [initandlisten]
2019-05-24T10:57:02.377+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-05-24T10:57:02.377+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-05-24T10:57:02.377+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=485M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-05-24T10:57:03.521+0000 I STORAGE  [initandlisten] WiredTiger message [1558695423:521941][1:0x7f2d2eeb0a80], txn-recover: Main recovery loop: starting at 2/140416 to 3/256
2019-05-24T10:57:03.719+0000 I STORAGE  [initandlisten] WiredTiger message [1558695423:719280][1:0x7f2d2eeb0a80], txn-recover: Recovering log 2 through 3
2019-05-24T10:57:03.836+0000 I STORAGE  [initandlisten] WiredTiger message [1558695423:836203][1:0x7f2d2eeb0a80], txn-recover: Recovering log 3 through 3
2019-05-24T10:57:03.896+0000 I STORAGE  [initandlisten] WiredTiger message [1558695423:896185][1:0x7f2d2eeb0a80], txn-recover: Set global recovery timestamp: 0
2019-05-24T10:57:03.924+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten]
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten]
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten]
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-05-24T10:57:03.947+0000 I CONTROL  [initandlisten]
2019-05-24T10:57:03.984+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-05-24T10:57:03.986+0000 I NETWORK  [initandlisten] waiting for connections on port 27017

MongoDB YAML:

apiVersion: v1
kind: Service
metadata:
  name: my-mongodb-service-test
  namespace: my-test 
spec:
  selector:
    app: my-mongodb
    env: test
  ports:
  - port: 27017
    targetPort: 27017 
    protocol: TCP


--- 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mongodb-test
  namespace: my-test    
  labels:
    app: my-mongodb
    env: test
spec: 
  selector:
    matchLabels:
      app: my-mongodb-test
  replicas: 1
  template:  
    metadata:
      labels:
        app: my-mongodb-test
    spec:
      containers: 
        - name: mongo
          image: mongo:4.0.9
          command:
            - mongod
            - "--bind_ip"
            - "0.0.0.0"
          imagePullPolicy: Always
          ports:
            - containerPort: 27017
              name: mongo
              hostPort: 27017
              protocol: TCP
          volumeMounts:
              - mountPath: /data/db
                name: mongodb-volume
      volumes:
        - name: mongodb-volume
          hostPath:
            path: /home/debian/mongodb 
Lionel Piroche
  • have you tried using this as the URL = "my-mongodb-service-test:27017"; – Anshul Jindal May 24 '19 at 13:08
  • Also add label in the metadata of the mongo service : labels: app: my-mongodb-service-test – Anshul Jindal May 24 '19 at 13:10
  • could you please print the stack trace instead of "ERROR" ? It would give more idea about the issue. – hariK May 24 '19 at 14:21
  • No, my-mongodb-service-test:27017 doesn't work; I tried many solutions before posting. I see two possibilities for my problem: either the MongoDB configuration prevents the connection because of the wrong hostname (see the logs), and I don't know how to configure that on Kubernetes; or my mongo pod is not reachable from the Node.js pod, and I don't know how to check that — I'm just starting with Kubernetes – Lionel Piroche May 25 '19 at 19:54
  • About the stack trace: the error is just err => true :( weird – Lionel Piroche May 25 '19 at 20:09

1 Answer


Your service selector doesn't match the pod labels, so the service's endpoints list is empty (you can check this with kubectl describe svc/my-mongodb-service-test -n my-test) and Kubernetes cannot reach the pod through the service.

The correct service selector is:

apiVersion: v1
kind: Service
metadata:
  name: my-mongodb-service-test
  namespace: my-test
spec:
  selector:
    app: my-mongodb-test
...

This must match the pod labels specified by spec.template.metadata.labels in the Deployment YAML.
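Alternatively (a sketch, not part of the original answer), you could keep the Service selector as it is and instead relabel the Deployment's pods so they carry the labels the Service already selects on; the Deployment's matchLabels must then match the template labels as well:

```yaml
# Alternative fix: make the pod labels match the existing Service
# selector (app: my-mongodb, env: test) instead of changing the selector.
spec:
  selector:
    matchLabels:
      app: my-mongodb
      env: test
  template:
    metadata:
      labels:
        app: my-mongodb
        env: test
```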

menya
  • Yes, I just saw my (beginner's) error on the dashboard. I restarted the Node.js and mongo pods and now it works – Lionel Piroche May 25 '19 at 21:31
  • Excuse me, I have a problem connecting to MongoDB in Kubernetes: https://stackoverflow.com/questions/65870380/connect-to-mongodb-with-mongoose-both-in-kubernetes . Kindly assist – Denn Jan 24 '21 at 17:36