
I hope someone can help me with this issue. I'm testing a containerized microservice on a Kubernetes cluster made of 2 nodes:

Merry  -> master (and worker)
Pippin -> worker

This is my deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resize
spec:
  selector:
    matchLabels:
      run: resize
  replicas: 1
  template:
    metadata:
      labels:
        run: resize
    spec:
      containers:
      - name: resize
        image: mdndocker/simpleweb
        ports:
        - containerPort: 1337
        resources:
          limits:
            cpu: 200m
          requests:
            cpu: 100m

This is the service:

apiVersion: v1
kind: Service
metadata:
  name: resize
  labels:
    run: resize
spec:
  type: ClusterIP
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 1337
  selector:
    run: resize
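
To double-check which pod IPs the Service routes to, the registered endpoints can be listed (service name as above):

kubectl get endpoints resize

All replica pod IPs should show up there on port 1337.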

I'm using the Calico network plugin.
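
As a sanity check, the Calico pods can be listed per node (assuming the standard calico-node DaemonSet label):

kubectl get pods -n kube-system -l k8s-app=calico-node -o wide

There should be one calico-node pod in Running status on each of the two nodes.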

I scaled the replicas first down to 0 and then up to 8, to have multiple instances of my app on both nodes.
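For reference, the scaling was done with commands along these lines (deployment name as in the manifest above):

kubectl scale deployment resize --replicas=0
kubectl scale deployment resize --replicas=8

This is the resulting placement, from kubectl get pods -o wide: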

NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
locust-77c699c94d-k8ssz   1/1     Running   0          17m   192.168.61.160   pippin   <none>           <none>
resize-d8cd49f6c-2tk62    1/1     Running   0          64m   192.168.61.158   pippin   <none>           <none>
resize-d8cd49f6c-6g2f9    1/1     Running   0          64m   192.168.61.155   pippin   <none>           <none>
resize-d8cd49f6c-7795n    1/1     Running   0          64m   172.17.0.8       merry    <none>           <none>
resize-d8cd49f6c-jvw49    1/1     Running   0          64m   192.168.61.156   pippin   <none>           <none>
resize-d8cd49f6c-mml47    1/1     Running   0          64m   192.168.61.157   pippin   <none>           <none>
resize-d8cd49f6c-qpkpk    1/1     Running   0          64m   172.17.0.6       merry    <none>           <none>
resize-d8cd49f6c-t4t8z    1/1     Running   0          64m   172.17.0.5       merry    <none>           <none>
resize-d8cd49f6c-vmpkp    1/1     Running   0          64m   172.17.0.7       merry    <none>           <none>

I got some pods running on Pippin and others on Merry. Unfortunately, the 4 pods scheduled on Merry don't receive any traffic when the load is generated; their CPU usage stays at 0m in kubectl top pods:

NAME                      CPU(cores)   MEMORY(bytes)
locust-77c699c94d-k8ssz   873m         82Mi
resize-d8cd49f6c-2tk62    71m          104Mi
resize-d8cd49f6c-6g2f9    67m          107Mi
resize-d8cd49f6c-7795n    0m           31Mi
resize-d8cd49f6c-jvw49    78m          104Mi
resize-d8cd49f6c-mml47    73m          105Mi
resize-d8cd49f6c-qpkpk    0m           32Mi
resize-d8cd49f6c-t4t8z    0m           31Mi
resize-d8cd49f6c-vmpkp    0m           31Mi 

Do you know why this is happening, and what I can check to solve this issue? And do you know why the pods' IP addresses differ between the nodes even though I used --pod-network-cidr=192.168.0.0/24?
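For what it's worth, this is the kind of direct check I can run against individual pods (IPs taken from the listing above, container port 1337 from the manifest):

# reach a pod scheduled on merry directly
curl -m 2 http://172.17.0.8:1337/

# compare with a pod scheduled on pippin
curl -m 2 http://192.168.61.158:1337/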

Thanks to anyone who can help!

1 Answer


The pods which got deployed on the master node "merry" are in Running status, so there is no issue with the scheduling itself. For your other query about why the master node has different CIDR values: if you have jq installed, run kubectl get node merry -o json | jq '.spec.podCIDR', which will give the CIDR value assigned to that node. Or you can describe the master node.
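A minimal sketch of both checks (node name as in the question):

# pod CIDR assigned to the node, via jq
kubectl get node merry -o json | jq '.spec.podCIDR'

# the same field appears in the node description
kubectl describe node merry | grep -i podcidr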

Nataraj Medayhal