I have a Kubernetes v1.4 cluster running in AWS with auto-scaling nodes. I also have a MongoDB replica set with SSL-only connections (the certificate common names are the FQDNs) and public DNS entries:
- node1.mongo.example.com -> 1.1.1.1
- node2.mongo.example.com -> 1.1.1.2
- node3.mongo.example.com -> 1.1.1.3
The Kubernetes nodes are part of a security group that allows access to the mongo cluster, but only via their private IPs.
Is there a way of creating A records in the Kubernetes DNS with the private IPs when the public FQDN is queried?
The first thing I tried was a script & ConfigMap combination to update /etc/hosts on startup (see Is it a way to add arbitrary record to kube-dns?), but that is problematic because other Kubernetes services may also update the hosts file at different times.
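For reference, a minimal sketch of that startup-script approach (the private IPs and the output path are hypothetical; for illustration it writes the entries to a local file, which the startup step would then append to /etc/hosts):

```shell
#!/bin/sh
# Render /etc/hosts entries mapping the public Mongo FQDNs to their
# private IPs (hypothetical addresses). Mounted from a ConfigMap and
# run at container start, the output would be appended to /etc/hosts:
#   sh mongo-hosts.sh && cat ./mongo-hosts >> /etc/hosts
OUT="${OUT:-./mongo-hosts}"
cat > "$OUT" <<'EOF'
192.168.0.1 node1.mongo.example.com
192.168.0.2 node2.mongo.example.com
192.168.0.3 node3.mongo.example.com
EOF
```

The race described above is exactly why this is fragile: anything else that rewrites /etc/hosts after this step silently drops the entries.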
I also tried a Service & Endpoints configuration:
---
apiVersion: v1
kind: Service
metadata:
  name: node1.mongo.example.com
spec:
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: node1.mongo.example.com
subsets:
- addresses:
  - ip: 192.168.0.1
  ports:
  - port: 27017
But this fails because a Service name must be a valid DNS label, not an FQDN...
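For comparison, the same pair with a valid DNS label is accepted by the API, but the record it creates lives under the cluster domain rather than the public FQDN, so it does not help with certificate verification (names here are illustrative):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-node1          # valid label, but resolves as
spec:                        # mongo-node1.<namespace>.svc.cluster.local,
  ports:                     # which won't match the SSL common name
  - protocol: TCP
    port: 27017
    targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongo-node1
subsets:
- addresses:
  - ip: 192.168.0.1
  ports:
  - port: 27017
```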