
I tried the solutions in these links but they didn't work: Unable to reach service using NodePort from k8s master

I have set up a Kubernetes cluster on CentOS 7 with a master and worker nodes. I deployed the application with the default Kibana Helm chart and created a NodePort service as below.

Kibana values.yaml (Service part)

service:
  type: NodePort
  loadBalancerIP: ""
  port: 5601
  nodePort: "30666"
  labels: {}
  annotations:
    {}
  loadBalancerSourceRanges:
    []
  httpPortName: http
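
To confirm the chart actually rendered these values, the live Service can be compared against them (a quick check, assuming the release is named kibana in the default namespace as above):

kubectl get svc kibana-kibana -n default -o yaml | grep -E 'type:|nodePort:|port:|targetPort:'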
[root@a199 kibana]# kubectl get endpoints | grep kibana
kibana-kibana                                 10.32.0.4:5601                                             2m16s
[root@a199 kibana]# kubectl get service | grep kibana
kibana-kibana                   NodePort    10.107.134.247   <none>        5601:30666/TCP      2m26s
[root@a199 kibana]# kubectl describe service kibana-kibana
Name:                     kibana-kibana
Namespace:                default
Labels:                   app=kibana
                          app.kubernetes.io/managed-by=Helm
                          heritage=Helm
                          release=kibana
Annotations:              meta.helm.sh/release-name: kibana
                          meta.helm.sh/release-namespace: default
Selector:                 app=kibana,release=kibana
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.134.247
IPs:                      10.107.134.247
Port:                     http  5601/TCP
TargetPort:               5601/TCP
NodePort:                 http  30666/TCP
Endpoints:                10.32.0.4:5601
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

I am able to get the right response from all worker nodes through curl.

In a normal situation, curling Kibana's service gives a blank response, so the empty outputs below mean the request succeeded.

Master

[root@a199 kibana]# curl -u username:password http://10.10.1.200:30666
[root@a199 kibana]# curl -u username:password http://10.10.1.201:30666
[root@a199 kibana]# curl -u username:password http://10.10.1.202:30666

[root@a199 kibana]# curl -u username:password http://10.10.1.199:30666
curl: (7) Failed connect to 10.10.1.199:30666; No route to host

[root@a199 kibana]# curl -u username:password http://10.107.134.247:5601
curl: (7) Failed connect to 10.107.134.247:5601; No route to host
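
For reference, curl's "No route to host" usually means the connection was actively rejected (ICMP host-prohibited) rather than timing out, which on CentOS 7 often points to firewalld on the node itself. A quick check I would run on the master (assuming firewalld is the active firewall there):

firewall-cmd --state
firewall-cmd --list-all          # look for 30666/tcp among the allowed ports
iptables -nvL INPUT | grep -i reject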

Worker

[root@a200 ~]# curl -u username:password http://10.107.134.247:5601
[root@a200 ~]# curl -u username:password http://10.32.0.4:5601

[root@a201 ~]# curl -u username:password http://10.107.134.247:5601
[root@a201 ~]# curl -u username:password http://10.32.0.4:5601

[root@a202 ~]# curl -u username:password http://10.107.134.247:5601
[root@a202 ~]# curl -u username:password http://10.32.0.4:5601

describe Weave pod on Master

[root@a199 kibana]# kdp weave-net-9tlbl -n kube-system
Name:                 weave-net-9tlbl
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      weave-net
Node:                 a199/10.10.1.199
Start Time:           Fri, 23 Sep 2022 11:28:18 +0800
Labels:               controller-revision-hash=5c5b66db5f
                      name=weave-net
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   10.10.1.199
IPs:
  IP:           10.10.1.199
Controlled By:  DaemonSet/weave-net
Init Containers:
  weave-init:
    Container ID:  containerd://db1d71e0b3d67f1de52c6f9b22c2b766f8b57d530a4a60f3815123a7af00c916
    Image:         ghcr.io/weaveworks/launcher/weave-kube:2.8.1
    Image ID:      ghcr.io/weaveworks/launcher/weave-kube@sha256:d797338e7beb17222e10757b71400d8471bdbd9be13b5da38ce2ebf597fb4e63
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/init.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 30 Sep 2022 11:43:32 +0800
      Finished:     Fri, 30 Sep 2022 11:43:33 +0800
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wg9t9 (ro)
Containers:
  weave:
    Container ID:  containerd://ef0c4b4891cb69dec5f39c6a30f40b2f49d7acd08a3630ee9f3169c5e24c3777
    Image:         ghcr.io/weaveworks/launcher/weave-kube:2.8.1
    Image ID:      ghcr.io/weaveworks/launcher/weave-kube@sha256:d797338e7beb17222e10757b71400d8471bdbd9be13b5da38ce2ebf597fb4e63
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Running
      Started:      Fri, 30 Sep 2022 11:43:34 +0800
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Fri, 23 Sep 2022 11:28:46 +0800
      Finished:     Fri, 30 Sep 2022 11:43:02 +0800
    Ready:          True
    Restart Count:  2
    Requests:
      cpu:      50m
      memory:   100Mi
    Readiness:  http-get http://127.0.0.1:6784/status delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:         (v1:spec.nodeName)
      INIT_CONTAINER:  true
    Mounts:
      /host/etc/machine-id from machine-id (ro)
      /host/var/lib/dbus from dbus (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wg9t9 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   containerd://4c691d20936557832cf998933ed9b32e0c58ee2e8510683be3c9e8f9445672dd
    Image:          ghcr.io/weaveworks/launcher/weave-npc:2.8.1
    Image ID:       ghcr.io/weaveworks/launcher/weave-npc@sha256:38d3e30a97a2260558f8deb0fc4c079442f7347f27c86660dbfc8ca91674f14c
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 30 Sep 2022 11:43:36 +0800
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Fri, 23 Sep 2022 11:28:45 +0800
      Finished:     Fri, 30 Sep 2022 11:43:02 +0800
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:     50m
      memory:  100Mi
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wg9t9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
...
...
Events:
  Type     Reason          Age                From     Message
  ----     ------          ----               ----     -------
  Normal   SandboxChanged  31m                kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          31m                kubelet  Container image "ghcr.io/weaveworks/launcher/weave-kube:2.8.1" already present on machine
  Normal   Created         31m                kubelet  Created container weave-init
  Normal   Started         31m                kubelet  Started container weave-init
  Normal   Pulled          31m                kubelet  Container image "ghcr.io/weaveworks/launcher/weave-kube:2.8.1" already present on machine
  Normal   Created         31m                kubelet  Created container weave
  Normal   Started         31m                kubelet  Started container weave
  Normal   Pulled          31m                kubelet  Container image "ghcr.io/weaveworks/launcher/weave-npc:2.8.1" already present on machine
  Normal   Created         31m                kubelet  Created container weave-npc
  Normal   Started         31m                kubelet  Started container weave-npc
  Warning  Unhealthy       31m (x2 over 31m)  kubelet  Readiness probe failed: Get "http://127.0.0.1:6784/status": dial tcp 127.0.0.1:6784: connect: connection refused
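
The only warning is a transient readiness-probe failure right after the restart, and the pod is Ready now. Since that probe polls http://127.0.0.1:6784/status (see above), the same endpoint can be queried directly on the master to see Weave's current status and peer connections:

curl -s http://127.0.0.1:6784/status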

describe kube-proxy pod on Master

[root@a199 kibana]# kdp kube-proxy-wxq8n -n kube-system
Name:                 kube-proxy-wxq8n
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 a199/10.10.1.199
Start Time:           Fri, 30 Sep 2022 11:40:41 +0800
Labels:               controller-revision-hash=7ccf78d585
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   10.10.1.199
IPs:
  IP:           10.10.1.199
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  containerd://70d2ac2e135c570b3d263cd62cf8204179b21731f36b010b676a6a079d56ef16
    Image:         registry.k8s.io/kube-proxy:v1.25.2
    Image ID:      registry.k8s.io/kube-proxy@sha256:ddde7d23d168496d321ef9175a8bf964a54a982b026fb207c306d853cbbd4f77
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Fri, 30 Sep 2022 11:43:33 +0800
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Fri, 30 Sep 2022 11:40:41 +0800
      Finished:     Fri, 30 Sep 2022 11:43:01 +0800
    Ready:          True
    Restart Count:  1
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mh5qk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
....
....
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       37m                default-scheduler  Successfully assigned kube-system/kube-proxy-wxq8n to a199
  Normal   Pulled          37m                kubelet            Container image "registry.k8s.io/kube-proxy:v1.25.2" already present on machine
  Normal   Created         37m                kubelet            Created container kube-proxy
  Normal   Started         37m                kubelet            Started container kube-proxy
  Normal   SandboxChanged  34m                kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  Failed          34m                kubelet            Error: services have not yet been read at least once, cannot construct envvars
  Normal   Pulled          34m (x2 over 34m)  kubelet            Container image "registry.k8s.io/kube-proxy:v1.25.2" already present on machine
  Normal   Created         34m                kubelet            Created container kube-proxy
  Normal   Started         34m                kubelet            Started container kube-proxy

logs for kube-proxy pod on Master

[root@a199 kibana]# k logs -f kube-proxy-wxq8n -n kube-system
I0930 03:43:34.157573       1 node.go:163] Successfully retrieved node IP: 10.10.1.199
I0930 03:43:34.157668       1 server_others.go:138] "Detected node IP" address="10.10.1.199"
I0930 03:43:34.157705       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0930 03:43:34.852164       1 server_others.go:206] "Using iptables Proxier"
I0930 03:43:34.852212       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0930 03:43:34.852226       1 server_others.go:214] "Creating dualStackProxier for iptables"
I0930 03:43:34.852252       1 server_others.go:485] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined"
I0930 03:43:34.852265       1 server_others.go:541] "Defaulting to no-op detect-local" detect-local-mode="ClusterCIDR"
I0930 03:43:34.852311       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0930 03:43:34.852469       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0930 03:43:34.852763       1 server.go:661] "Version info" version="v1.25.2"
I0930 03:43:34.852783       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0930 03:43:34.859065       1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=131072
I0930 03:43:34.859196       1 conntrack.go:100] "Set sysctl" entry="net/netfilter/nf_conntrack_tcp_timeout_close_wait" value=3600
I0930 03:43:34.862292       1 config.go:444] "Starting node config controller"
I0930 03:43:34.862314       1 shared_informer.go:255] Waiting for caches to sync for node config
I0930 03:43:34.862576       1 config.go:317] "Starting service config controller"
I0930 03:43:34.862591       1 shared_informer.go:255] Waiting for caches to sync for service config
I0930 03:43:34.862834       1 config.go:226] "Starting endpoint slice config controller"
I0930 03:43:34.862848       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0930 03:43:34.963269       1 shared_informer.go:262] Caches are synced for endpoint slice config
I0930 03:43:34.963337       1 shared_informer.go:262] Caches are synced for service config
I0930 03:43:34.963460       1 shared_informer.go:262] Caches are synced for node config
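
One line in those logs that might matter is "Detect-local-mode set to ClusterCIDR, but no cluster CIDR defined". Assuming a kubeadm-style setup where kube-proxy reads /var/lib/kube-proxy/config.conf from the kube-proxy ConfigMap (as the describe output above suggests), the configured mode and clusterCIDR can be checked with:

kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E 'mode:|clusterCIDR:'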

iptables-save | grep kibana

[root@a199 kibana]# iptables-save | grep kibana
-A KUBE-EXT-5UZQ22B3WVFH6YRQ -m comment --comment "masquerade traffic for default/kibana-kibana:http external destinations" -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/kibana-kibana:http" -m tcp --dport 30666 -j KUBE-EXT-5UZQ22B3WVFH6YRQ
-A KUBE-SEP-7Z7G7JXEDAIUHWZM -s 10.32.0.4/32 -m comment --comment "default/kibana-kibana:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-7Z7G7JXEDAIUHWZM -p tcp -m comment --comment "default/kibana-kibana:http" -m tcp -j DNAT --to-destination 10.32.0.4:5601
-A KUBE-SERVICES -d 10.107.134.247/32 -p tcp -m comment --comment "default/kibana-kibana:http cluster IP" -m tcp --dport 5601 -j KUBE-SVC-5UZQ22B3WVFH6YRQ
-A KUBE-SVC-5UZQ22B3WVFH6YRQ -m comment --comment "default/kibana-kibana:http -> 10.32.0.4:5601" -j KUBE-SEP-7Z7G7JXEDAIUHWZM
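
The DNAT rules look correct, so the question is whether traffic arriving on the master ever reaches them. The packet counters on the nat-table NodePort chain show whether the 30666 rule is being hit while the curl from the master runs:

iptables -t nat -L KUBE-NODEPORTS -nv | grep 30666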

I also checked DNS following https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ (see the sketch below).
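
For completeness, the check from that page is roughly:

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default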

Taints I have set on the master node (NoSchedule):

Taints:             key=value:NoSchedule
                    node-role.kubernetes.io/control-plane:NoSchedule
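
(For context, these were added with kubectl taint; removing them, if I wanted pods schedulable on the master again, would be the same commands with a trailing minus:)

kubectl taint nodes a199 key=value:NoSchedule-
kubectl taint nodes a199 node-role.kubernetes.io/control-plane:NoSchedule-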

Is there any solution to this problem?

I want to reach the service through the master: curl -u username:password http://10.10.1.199:30666

I have another question: assuming my service does not have a pod scheduled on the master, can I still get the right response through curl -u username:password http://master's ip:nodeport?
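
My understanding is that with External Traffic Policy: Cluster (shown in the describe output above), kube-proxy on any node, including the master, should DNAT NodePort traffic to an endpoint on another node, so a local pod should not be required; only externalTrafficPolicy: Local restricts answers to nodes that actually run a kibana pod. A way to check (or change) that field, as a sketch:

kubectl get svc kibana-kibana -o jsonpath='{.spec.externalTrafficPolicy}'
# switching it (not something I have done here) would be:
# kubectl patch svc kibana-kibana -p '{"spec":{"externalTrafficPolicy":"Local"}}'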

  • I misunderstood; it turns out that there is no way to curl the service through the master unless there is an ingress setting: https://stackoverflow.com/questions/71637344/i-cant-access-service-via-k8s-master-node – 陳奕豪Paul Chen Sep 30 '22 at 09:02

0 Answers