I have a Kubernetes cluster with kube-proxy in ipvs mode and a database cluster outside of Kubernetes. To get access to the DB cluster I created the following Service and Endpoints resources:
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: database
subsets:
  - addresses:
      - ip: 192.168.255.9
      - ip: 192.168.189.76
    ports:
      - port: 3306
        protocol: TCP
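To confirm that the Service is actually bound to the manually created endpoints, a quick sanity check (assuming everything is applied in the default namespace) is:

kubectl get endpoints database
kubectl describe service database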
Then I run a pod with a MySQL client and try to connect to this service:
mysql -u root -p password -h database
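For reference, a throwaway client pod can be started with something like the following (the mysql:8.0 image and the flags are just an illustration, not the exact pod spec I use):

kubectl run -it --rm mysql-client --image=mysql:8.0 --restart=Never -- \
  mysql -u root -p -h database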
In the network dump I see a successful TCP handshake and a successful MySQL connection. On the node where the pod is running (hereinafter, the worker node) I see the following established connection:
sudo netstat-nat -n | grep 3306
tcp 10.0.198.178:52642 192.168.189.76:3306 ESTABLISHED
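The connection can also be inspected from the IPVS side on the worker node; the IPVS connection table shows the chosen real server and the remaining expiration time, and the configured TCP timeouts can be printed as well:

# IPVS connection table entries for the database port
sudo ipvsadm -Lnc | grep 3306
# configured IPVS timeouts (tcp / tcpfin / udp)
sudo ipvsadm -l --timeout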
Then I send some test queries from the pod within the open MySQL session. They are all sent to the same database node, which is the expected behavior.
Then I keep monitoring established connections on the worker node. After about 5 minutes, the established connection to the database node disappears.
However, in the network dump I see that no TCP finalization packets (FIN/RST) are sent from the worker node to the database node. As a result, I get a leaked connection on the database node.
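One way to double-check that no teardown packets leave the worker node is to filter the capture for FIN/RST flags only (192.168.189.76 is the endpoint the session was pinned to):

# capture only FIN/RST segments to or from the database node
sudo tcpdump -ni any 'host 192.168.189.76 and port 3306 and tcp[tcpflags] & (tcp-fin|tcp-rst) != 0'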
How does IPVS decide to drop an established connection? If IPVS drops a connection, why doesn't it finalize the TCP connection properly? Is this a bug, or am I misunderstanding something about the ipvs mode in kube-proxy?