I have a RabbitMQ cluster running in Kubernetes. It works as expected without Istio sidecar injection. Unfortunately, the cluster crashes when istio-injection is enabled, because peer discovery fails during startup. I get the following error in the pod logs:

2022-06-23 12:39:59.907 [error] <0.249.0> CRASH REPORT Process <0.249.0> with 0 neighbours exited with reason: no match of right hand value {error,eacces} in rabbit_peer_discovery_k8s:make_request/0 line 110 in application_master:init/4 line 138
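
For context, the crashing call is in the rabbitmq_peer_discovery_k8s plugin, which queries the Kubernetes API over HTTPS at startup to find its peer nodes. The relevant part of the broker configuration looks roughly like this (a sketch reconstructed from the rabbitmq-ha chart defaults; the exact hostname suffix and port are assumptions on my part):

# rabbitmq.conf (sketch, assuming rabbitmq-ha chart defaults)
# rabbit_peer_discovery_k8s:make_request/0 sends an HTTPS request to this
# API server endpoint; the {error,eacces} above means that request failed.
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
cluster_formation.k8s.port = 443
cluster_formation.k8s.address_type = hostname
cluster_formation.k8s.hostname_suffix = .rabbitmq-discovery.rabbitns.svc.cluster.local

With the Envoy sidecar injected, this request goes through the proxy, which is what makes the failure Istio-specific.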

I created a ServiceEntry for the per-pod hostnames, exposing all the ports that are needed, to tell Istio that they are inside the service mesh. Unfortunately, I am still getting the same errors. The ServiceEntry for RabbitMQ looks like this:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  labels:
    app: rabbitmq-ha
    chart: rabbitmq-ha-1.12.1
    heritage: Tiller
    release: rabbitmq
  name: rabbitmq
  namespace: rabbitns
spec:
  hosts:
  - rabbitmq-0.rabbitmq-discovery.rabbitns.svc.cluster.local
  - rabbitmq-1.rabbitmq-discovery.rabbitns.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 15672
    protocol: TCP
  - name: amqp
    number: 5672
    protocol: TCP
  - name: epmd
    number: 4369
    protocol: TCP
  - name: amqps
    number: 5671
    protocol: TCP
  - name: exporter
    number: 9419
    protocol: TCP
  - name: inter-node
    number: 25672
    protocol: TCP
  resolution: NONE
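
For reference, the per-pod hostnames used above come from the chart's headless discovery Service, which gives every StatefulSet pod its own DNS record. A minimal sketch of that Service, assuming the standard rabbitmq-ha labels and selector (reconstructed from the chart, not copied from my cluster):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: rabbitmq-ha
  name: rabbitmq-discovery
  namespace: rabbitns
spec:
  clusterIP: None                  # headless: one DNS record per pod
  publishNotReadyAddresses: true   # peers must resolve before pods turn ready
  selector:
    app: rabbitmq-ha
  ports:
  - name: epmd
    port: 4369
  - name: amqp
    port: 5672
  - name: inter-node
    port: 25672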

Versions:

$ istioctl version
client version: 1.14.0
control plane version: 1.14.0
data plane version: 1.14.0 (3 proxies)
$ kubectl version --short
Client Version: v1.21.5
Server Version: v1.22.4

Additional information: the analysis report produced by the command istioctl bug-report:

Error [IST0106] (ServiceEntry rabbitns/rabbitmq) Schema validation error: multiple hosts provided with non-HTTP, non-TLS ports

Apparently Istio handles regular Services well (a single DNS name fronts all the pods, and requests are load-balanced across them round-robin), but it is not familiar with per-pod DNS names, so it rejects the ServiceEntry. The issue is not related to mTLS, since I got the same result with mTLS enabled and disabled.
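
For illustration, a layout that sidesteps the multiple-hosts condition in IST0106 would be one ServiceEntry per pod hostname, e.g. for rabbitmq-0 (a sketch only; whether this actually fixes the eacces failure during peer discovery is exactly my open question):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: rabbitmq-0
  namespace: rabbitns
spec:
  hosts:
  - rabbitmq-0.rabbitmq-discovery.rabbitns.svc.cluster.local   # single host per entry
  location: MESH_INTERNAL
  ports:
  - name: epmd
    number: 4369
    protocol: TCP
  - name: amqp
    number: 5672
    protocol: TCP
  - name: inter-node
    number: 25672
    protocol: TCP
  resolution: NONE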

Could you please help me solve this issue?
