
I have a pair of kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally in docker-compose and it worked fine, however after deploying to kubernetes there seems to be no communication between the two. The end result is that I get a 502 Bad Gateway error when trying to reach my location.

So my question is not really about what is wrong with my setup, but rather about what tools I can use to debug this scenario. Is there a test client for uwsgi? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know whether uwsgi even has a log.

How can I debug this?

For reference, here is my nginx location:

location / {
        # Trick to avoid nginx aborting at startup (set server in variable)
        set $upstream_server ${APP_SERVER};

        include            uwsgi_params;
        uwsgi_pass         $upstream_server;
        uwsgi_read_timeout 300;
        uwsgi_intercept_errors on;
}

Here is my wsgi.ini:

[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true

uid = www-data
gid = www-data

Here is the kubernetes deployment.yaml for nginx:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: nginx
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: nginx
        image: <custom image url>
        imagePullPolicy: Always
        env:
          - name: APP_SERVER
            valueFrom:
              secretKeyRef:
                name: my-environment-config
                key: APP_SERVER
          - name: FK_SERVER_NAME
            valueFrom:
              secretKeyRef:
                name: my-environment-config
                key: SERVER_NAME
        ports:
        - containerPort: 80
        - containerPort: 10443
        - containerPort: 10090
        resources:
          requests:
            cpu: 1m
            memory: 200Mi
        volumeMounts:
        - mountPath: /etc/letsencrypt
          name: my-storage
          subPath: nginx
        - mountPath: /dev/shm
          name: dshm
      restartPolicy: Always
      volumes:
      - name: my-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-nginx
      - name: dshm
        emptyDir:
          medium: Memory

Here is the kubernetes service.yaml for nginx:

apiVersion: v1
kind: Service
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - name: "nginx-port-80"
    port: 80
    targetPort: 80
    protocol: TCP
  - name: "nginx-port-443"
    port: 443
    targetPort: 10443
    protocol: TCP
  - name: "nginx-port-10090"
    port: 10090
    targetPort: 10090
    protocol: TCP
  selector:
    service: nginx

Here is the kubernetes deployment.yaml for python flask:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: my-app
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: my-app
        image: <custom image url>
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        resources:
          requests:
            cpu: 1m
            memory: 100Mi
        volumeMounts:
        - name: merchbot-storage
          mountPath: /app/data
          subPath: my-app
        - name: dshm
          mountPath: /dev/shm
        - name: local-config
          mountPath: /app/secrets/local_config.json
          subPath: merchbot-local-config-test.json
      restartPolicy: Always
      volumes:
      - name: merchbot-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-app
      - name: dshm
        emptyDir:
          medium: Memory
      - name: local-config
        secret:
          secretName: my-app-local-config

Here is the kubernetes service.yaml for python flask:

apiVersion: v1
kind: Service
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  ports:
  - name: "my-app-port-5000"
    port: 5000
    targetPort: 5000
  selector:
    service: my-app
Mr. Developerdude
  • You can `kubectl exec` into the nginx pod and see if you can ping the Python app and use netcat or curl to test that the app is accessible. – bcoughlan Feb 24 '20 at 22:59
  • Can you include the kubernetes deployment yaml? – Matt Feb 24 '20 at 22:59
  • @Matt I added my k8s yamls. I use kustomization but I guess that does not matter – Mr. Developerdude Feb 24 '20 at 23:58
  • @bcoughlan I did exec in there to ping the container name, and when I ping it, the name correctly resolves to an IP address. However the question is really about how to debug the uwsgi protocol. If I netcat port 5000, what do I send to elicit a useful response? – Mr. Developerdude Feb 24 '20 at 23:59

1 Answer


Debugging in kubernetes is not very different from debugging outside it; there are just some concepts that need to be overlaid for the kubernetes world.

A Pod in kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod sees the other containers' services on localhost. Beyond that, traffic from a Pod to anything else involves a network connection (even if the endpoint is local to the node). So start testing with services on localhost and work your way out through pod IP, service IP, and service name.
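To collect the addresses for each of those layers, the usual kubectl queries look something like this (the label and service names are taken from the manifests above):

```shell
# Pod IP is shown in the IP column
kubectl get pods -l service=my-app -o wide

# Service cluster IP and port
kubectl get service my-app

# Confirm the service's selector actually matches the pod:
# an empty ENDPOINTS column means the label selector is wrong
kubectl get endpoints my-app
```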

Some complexity comes from having the debug tools available in the containers. Containers are generally built slim and don't have everything available, so you either need to install tools into a running container (if you can) or build a special "debug" container you can deploy on demand in the same environment. You can always fall back to testing from the cluster nodes, which also have access.
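On newer clusters, one way to get a throwaway toolbox is an ephemeral debug container; a sketch, assuming `kubectl debug` is available in your cluster version and using the third-party `nicolaka/netshoot` image as an example:

```shell
# Attach a tool-rich ephemeral container to the running nginx pod,
# sharing its namespaces so DNS and networking behave the same
kubectl debug -it nginx-XXXX-XXXX --image=nicolaka/netshoot --target=nginx

# Or run a standalone throwaway debug pod in the same namespace
kubectl run debug --rm -it --image=nicolaka/netshoot -- sh
```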

Where you have python available you can test with uwsgi_curl:

pip install uwsgi-tools
uwsgi_curl hostname:port /path

Otherwise nc/curl will suffice, to a point: nc can confirm the TCP port is open and reachable, but the socket speaks the binary uwsgi protocol rather than HTTP, so curl won't get a meaningful reply from it.
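For illustration, this is roughly what uwsgi_curl puts on the wire: a 4-byte header (modifier1, 16-bit little-endian payload size, modifier2) followed by length-prefixed key/value WSGI vars. A minimal sketch; the exact variable set and the target host/port are assumptions:

```shell
#!/bin/sh
# Hand-build a minimal uwsgi binary packet, the same framing uwsgi_curl uses.
set -e

le16() {  # emit a 16-bit integer as two raw little-endian bytes
  printf "\\$(printf '%03o' $(( $1 % 256 )))\\$(printf '%03o' $(( $1 / 256 )))"
}

var() {   # emit one WSGI var: LE key length, key, LE value length, value
  le16 ${#1}; printf '%s' "$1"
  le16 ${#2}; printf '%s' "$2"
}

body=$(mktemp); pkt=$(mktemp)
{
  var REQUEST_METHOD GET
  var PATH_INFO /
  var SERVER_PROTOCOL HTTP/1.1
} > "$body"

# Header: modifier1 = 0, payload size (little-endian), modifier2 = 0
size=$(wc -c < "$body")
{ printf '\000'; le16 "$size"; printf '\000'; cat "$body"; } > "$pkt"

echo "payload bytes: $size, packet bytes: $(wc -c < "$pkt")"

# To probe the app container, pipe the packet into the socket, e.g.:
#   nc my-app 5000 < "$pkt"
```

If the backend is a healthy uwsgi server, piping this packet in should come back with a raw HTTP response, which is exactly the check that plain nc cannot do.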

Pod to localhost

The first step is to make sure the container itself is responding. In this case you are likely to have python/pip available to use uwsgi_curl:

kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path

Pod to Pod/Service

Next, include the kubernetes networking. Start with IPs and finish with names.

Python, or even nc, is less likely to be available here, but I think testing the environment variables is important at this step:

kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000

echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or 
uwsgi_curl $APP_SERVER:5000 /path

Debug Pod to Pod/Service

If you do need to use a debug pod, try and mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.

In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
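A sketch of such a debug pod, pulling APP_SERVER out of the same secret the nginx deployment references so the test sees the same value (the `python:3-slim` image choice is an assumption):

```shell
# Read the value nginx would see from the my-environment-config secret
APP_SERVER=$(kubectl get secret my-environment-config \
  -o jsonpath='{.data.APP_SERVER}' | base64 -d)

# Start a throwaway python pod with the same variable set
kubectl run uwsgi-debug --rm -it --image=python:3-slim \
  --env="APP_SERVER=$APP_SERVER" -- sh

# Inside the pod:
#   pip install uwsgi-tools
#   uwsgi_curl "$APP_SERVER:5000" /path
```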

Node to Pod/Service

Pods/Services will be reachable from the cluster nodes (if you are not using restrictive network policies), so usually the quick test is to check that Pods/Services are working from there:

nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service_dns> <service_port>

In this case:

nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.<namespace>.svc.cluster.local 5000
Matt
  • The pip install uwsgi-tools && uwsgi_curl host:port /path was exactly what I needed. I now realized that nginx config is the problem and I can start figuring out what is going on there. Thanks! – Mr. Developerdude Feb 25 '20 at 01:46
  • @LennartRolland this is an excellent answer. If you need to attach a real debugger like VSCode to your pod, I wrote [an open source project](https://docs.robusta.dev/master/catalog/actions/python-troubleshooting.html#python-debugger) that lets you do so in one command even if your pod doesn't have the relevant debug tools. – Natan Yellin Dec 30 '21 at 14:49