
I have a Kubernetes cluster on GCP made of two nodes. I have a pod mycha-deploy with service mycha-svc, and a pod nginx-controller with service nginx-svc. When I try to curl the pod or service IPs I keep getting: port 80 connection refused. When I browse to the master IP I don't get anything. Is there something I am missing in the configuration? Thank you.

# mycha-deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
        - name: mycha-container
          image: us.gcr.io/########/mycha-frontend_kubernetes_rrk8s
          ports:
          - containerPort: 80

#mycha-svc
apiVersion: v1
kind: Service
metadata:
  name: mycha-svc
  labels: 
    app: mycha-app
spec:
  selector:
    app: mycha-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http

#nginx-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.27.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom: 
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443

#nginx-svc
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports: 
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress


#nginx-resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
        - path: /
          backend:
            serviceName: mycha-svc
            servicePort: 80

-----

kubectl describe svc nginx-ingress
Name:                     nginx-ingress
Namespace:                default
Labels:                   app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
Annotations:              <none>
Selector:                 name=nginx-ingress
Type:                     NodePort
IP:                       10.107.186.83
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  32606/TCP
Endpoints:                10.244.1.3:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  31481/TCP
Endpoints:                10.244.1.3:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

-------

kubectl get pods,svc
NAME                                    READY   STATUS    RESTARTS   AGE
pod/mycha-deploy-5f9b6f5c46-jjdhq       1/1     Running   0          76m
pod/nginx-controller-5c45cf6d5c-dpp44   1/1     Running   0          60m

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP                      100m
service/mycha-svc       ClusterIP   10.103.188.25   <none>        80/TCP                       68m
service/nginx-ingress   NodePort    10.107.186.83   <none>        80:32606/TCP,443:31481/TCP   51m



------

sudo lsof -i -P -n | grep LISTEN
systemd-r   890 systemd-resolve   13u  IPv4     16536      0t0  TCP 127.0.0.53:53 (LISTEN)
splunkd    1111            root    4u  IPv4     25377      0t0  TCP *:8089 (LISTEN)
sshd       1842            root    3u  IPv4     23916      0t0  TCP *:22 (LISTEN)
sshd       1842            root    4u  IPv6     23931      0t0  TCP *:22 (LISTEN)
kube-cont 22737            root    5u  IPv6 116157110      0t0  TCP *:10252 (LISTEN)
kube-cont 22737            root    6u  IPv4 116157116      0t0  TCP 127.0.0.1:10257 (LISTEN)
kube-prox 23291            root    8u  IPv6 116256894      0t0  TCP *:31481 (LISTEN)
kube-prox 23291            root   11u  IPv6 116256895      0t0  TCP *:32606 (LISTEN)
kube-prox 23291            root   16u  IPv6 116164057      0t0  TCP *:10256 (LISTEN)
kube-prox 23291            root   17u  IPv4 116164061      0t0  TCP 127.0.0.1:10249 (LISTEN)
etcd      23380            root    3u  IPv4 116158620      0t0  TCP 10.242.6.2:2380 (LISTEN)
etcd      23380            root    5u  IPv4 116158624      0t0  TCP 10.242.6.2:2379 (LISTEN)
etcd      23380            root    6u  IPv4 116158625      0t0  TCP 127.0.0.1:2379 (LISTEN)
etcd      23380            root   11u  IPv4 116157996      0t0  TCP 127.0.0.1:2381 (LISTEN)
kube-sche 23803            root    5u  IPv6 116159474      0t0  TCP *:10251 (LISTEN)
kube-sche 23803            root    6u  IPv4 116159480      0t0  TCP 127.0.0.1:10259 (LISTEN)
kube-apis 24180            root    5u  IPv6 116163385      0t0  TCP *:6443 (LISTEN)
node      27844     robertorios   20u  IPv4 116024875      0t0  TCP 127.0.0.1:38509 (LISTEN)
kubelet   30601            root   10u  IPv4 116038855      0t0  TCP 127.0.0.1:33119 (LISTEN)
kubelet   30601            root   17u  IPv6 116038993      0t0  TCP *:10250 (LISTEN)
kubelet   30601            root   31u  IPv4 116038997      0t0  TCP 127.0.0.1:10248 (LISTEN)

Thank you.

Roberto Rios
  • check if port 80 is listening on the worker node where the pod is deployed... the pod will not be deployed on the master node, so you are looking at the wrong node – Arghya Sadhu Jan 20 '20 at 13:55
  • It looks like you're installing an ingress controller; do you have a matching Ingress resource? What URL are you trying to connect to, from where, and what's the specific error you're getting? – David Maze Jan 20 '20 at 13:56
  • @DavidMaze I just edited my post with my ingress resource and I get the same error. I want to see my site (the front-end pod mycha-app) when I type my server IP in the browser. Then I would like to point my load balancer to this server IP. The error I get when I try to curl the ingress-controller IP is: ~/my_chatest$ curl http://10.244.1.3 curl: (7) Failed to connect to 10.244.1.3 port 80: Connection refused – Roberto Rios Jan 20 '20 at 14:56
  • @ArghyaSadhu on the worker node, ports 80 and 443 are being used by dockerd – Roberto Rios Jan 20 '20 at 14:59

4 Answers


On GKE you can use two types of Ingress. One is the Nginx Ingress, which you probably wanted to use based on the annotation kubernetes.io/ingress.class: "nginx". The second is the GKE Ingress.

1. GKE Ingress

When you want to use the GKE Ingress, you need to expose your service as NodePort and apply an Ingress. I've reproduced this based on your YAMLs.

Since you used your own image, I've used the nginx image instead.

apiVersion: apps/v1
kind:  Deployment
metadata:
  name: mycha-deploy
  labels:
    app: mycha-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mycha-app
  template:
    metadata:
      labels:
        app: mycha-app
    spec:
      containers:
        - name: mycha-container
          image: nginx
          ports:
          - containerPort: 80

---
#added type: NodePort
apiVersion: v1
kind: Service
metadata:
  name: mycha-svc
  labels: 
    app: mycha-app
spec:
  type: NodePort 
  selector:
    app: mycha-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http

---

#removed annotation, as here we are using GKE Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: mycha-ingress
spec:
  rules:
    - http:
        paths:
        - path: /
          backend:
            serviceName: mycha-svc
            servicePort: 80


deployment.apps/mycha-deploy created
service/mycha-svc created
ingress.extensions/mycha-ingress created

You should be able to see output like below:

$ kubectl get pods,svc,ing
NAME                                READY   STATUS    RESTARTS   AGE
pod/mycha-deploy-685f894996-xbbnv   1/1     Running   0          38s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.99.0.1     <none>        443/TCP        33d
service/mycha-svc    NodePort    10.99.13.51   <none>        80:30808/TCP   39s

NAME                               HOSTS   ADDRESS        PORTS   AGE
ingress.extensions/mycha-ingress   *       34.107.251.59  80      3m3s

Now you should be able to curl your svc.

$ curl 34.107.251.59
...
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

As you are using the GKE Ingress, your Ingress will automatically receive an EXTERNAL-IP. If you keep the service as ClusterIP, it won't receive any address:

$ kubectl get ing
NAME            HOSTS   ADDRESS   PORTS   AGE
mycha-ingress   *                 80      34m

In the Service manifest, notice that the type is NodePort. This is the required type for an Ingress that is used to configure an HTTP(S) load balancer. More detailed information can be found here.

2. Nginx Ingress on GKE

When you are using Nginx Ingress you can specify your service as ClusterIP or NodePort.

To do that you need to deploy a proper Nginx Ingress. A good tutorial can be found here; however, it's a bit outdated. I am posting updated steps below:

  • Install Helm v3. This version doesn't require Tiller.
  • Add the proper repository for Helm 3. Details can be found here.

Adding and updating repo:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
  • Apply your deployment and service (NodePort or ClusterIP; with the Nginx Ingress both types will work).
  • Deploy the Nginx Ingress using $ helm install ingress-nginx ingress-nginx/ingress-nginx. It will create 2 deployments and 2 services. One of the services will be created as a LoadBalancer.
  • Deploy the Ingress

With the annotation kubernetes.io/ingress.class: "nginx":

apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: mycha-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
        - path: /
          backend:
            serviceName: mycha-svc
            servicePort: 80

You should have output like:

$ kubectl get pods,svc,ing
NAME                                                READY   STATUS    RESTARTS   AGE
pod/mycha-deploy-c469dc58b-mdp6d                    1/1     Running   0          2m41s
pod/nginx-ingress-controller-5d47f75dfc-d6xnl       1/1     Running   0          7m18s
pod/nginx-ingress-default-backend-f5b888f7d-rf5cx   1/1     Running   0          7m18s

NAME                                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
service/kubernetes                      ClusterIP      10.99.0.1      <none>          443/TCP                      33d
service/mycha-svc                       ClusterIP      10.99.8.140    <none>          80/TCP                       2m12s
service/nginx-ingress-controller        LoadBalancer   10.99.11.177   34.90.172.116   80:31593/TCP,443:30104/TCP   7m19s
service/nginx-ingress-default-backend   ClusterIP      10.99.7.106    <none>          80/TCP                       7m19s

NAME                               HOSTS   ADDRESS   PORTS   AGE
ingress.extensions/mycha-ingress   *                 80      17s

Your Ingress will not receive an address here, as service/nginx-ingress-controller acts as the LoadBalancer.

Now you can check if everything works using curl.

$ curl 34.90.172.116
<!DOCTYPE html>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
PjoterS

Way 1

After deploying the Nginx ingress controller you have to set up an Ingress, which will divert your traffic to the service.

The flow looks like this: ingress rule > service > deployment > pod.

For more details you can check this tutorial from DigitalOcean: Click here

Way 2

If you want to expose the service directly, you can update the service type to LoadBalancer and use that IP address to access the service directly.

To expose a service as a LoadBalancer you can check this out: Click here
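As a sketch of this LoadBalancer approach, the mycha-svc from the question could be switched like this (names taken from the question's manifests; on GCP the service is then assigned an external IP automatically):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mycha-svc
  labels:
    app: mycha-app
spec:
  type: LoadBalancer   # was ClusterIP (the default); GCP provisions an external IP
  selector:
    app: mycha-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
```

After applying it, kubectl get svc mycha-svc will show an EXTERNAL-IP (pending at first) that you can curl directly.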

Harsh Manvar
  • I have deployed the resource but I still get the same error when trying to curl the pod IP, cluster IP or service IP. Am I missing something on my master node? I do not see anything running on port 80. – Roberto Rios Jan 21 '20 at 21:06
  • I found my problem: the service I was running is a Node service listening on port 3000. I had to add containerPort: 3000 to the deployment and targetPort: 3000 to the service. Thank you all for your help – Roberto Rios Jan 23 '20 at 07:50

You should be able to access the service via the NodePort and the public IP of the worker node where the nginx ingress controller is deployed. This is because you have deployed the nginx ingress service as type NodePort.

curl http://public-ip-of-worker-node:32606 
Arghya Sadhu
  • I had to restart my nodes so the ports have changed: service/nginx-ingress NodePort 10.99.127.116 80:30341/TCP,443:30992/TCP 6h54m. curl http://public-master-node-where-nginx-controller-is-running:30341 returns curl: (7) Failed to connect to ######## port 10256: Connection timed out – Roberto Rios Jan 20 '20 at 21:38
  • sorry, I curled on ######:30341 and 30992; both of them time out. – Roberto Rios Jan 20 '20 at 21:45
  • in GCP you probably need to open firewalls/security groups to allow traffic – Arghya Sadhu Jan 21 '20 at 02:52
  • traffic is allowed on ports 80, 443 and 31000 and still the same problem. On my master node I do not have anything running on port 80... should I have a webserver running on it, or is there a Kubernetes service for a webserver? – Roberto Rios Jan 21 '20 at 21:02
  • The master node will not have your pod... it will have the Kubernetes API server... the pod will be on the worker nodes... can you allow traffic for the node ports to your worker node VMs – Arghya Sadhu Jan 22 '20 at 04:10

I found my problem: the service I was running is a Node service listening on port 3000. I had to add containerPort: 3000 to the deployment and targetPort: 3000 to the service. Thank you all for your help.
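In manifest terms, the fix amounts to something like this (a sketch based on the question's mycha YAMLs, with the container port corrected to 3000):

```yaml
# Deployment (fragment): the container must expose the port the app actually listens on
    spec:
      containers:
        - name: mycha-container
          image: us.gcr.io/########/mycha-frontend_kubernetes_rrk8s
          ports:
          - containerPort: 3000   # the Node service listens on 3000, not 80
---
# Service: keep serving on port 80, but target the container's port 3000
apiVersion: v1
kind: Service
metadata:
  name: mycha-svc
  labels:
    app: mycha-app
spec:
  selector:
    app: mycha-app
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
```

With this, traffic hitting the service on port 80 is forwarded to the container's port 3000, so the earlier "port 80 connection refused" errors go away.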

Roberto Rios