
We are new to the world of Istio and use managed Anthos Service Mesh on our GKE cluster. We have deployed a service called pgbouncer, a connection pooler for PostgreSQL; a few internal applications connect to the pgbouncer service (pgbouncer.pgbouncer.svc.cluster.local) to reach the PostgreSQL DB.
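For context, the Service is declared roughly like the sketch below. This is a simplified illustration, not the exact manifest: the labels and the port name are assumptions. We include it because Istio uses the port name prefix (e.g. tcp-) for protocol detection, which may be relevant here.

```yaml
# Hypothetical sketch of the pgbouncer Service; names and labels are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: pgbouncer
  namespace: pgbouncer
spec:
  selector:
    app: pgbouncer
  ports:
    - name: tcp-pgbouncer   # "tcp-" prefix tells Istio to treat this port as raw TCP
      port: 5432
      targetPort: 5432
```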

Istio-proxy logs on the pgbouncer pod:

[2023-02-02T17:30:11.633Z] "- - -" 0 - - - "-" 1649 1970 7 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:58765 10.243.34.74:5432 10.243.36.173:59516 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.654Z] "- - -" 0 - - - "-" 1645 1968 8 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:56153 10.243.34.74:5432 10.243.38.39:56404 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.674Z] "- - -" 0 - - - "-" 1647 1970 7 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:38471 10.243.34.74:5432 10.243.38.39:56414 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.696Z] "- - -" 0 - - - "-" 1647 1968 7 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:35135 10.243.34.74:5432 10.243.33.184:52074 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.716Z] "- - -" 0 - - - "-" 1646 1970 8 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:45277 10.243.34.74:5432 10.243.32.36:47044 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.738Z] "- - -" 0 - - - "-" 1644 1968 7 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:43099 10.243.34.74:5432 10.243.36.99:33514 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.757Z] "- - -" 0 - - - "-" 1649 1970 7 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:54943 10.243.34.74:5432 10.243.36.173:59530 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.777Z] "- - -" 0 - - - "-" 1644 1968 9 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:49555 10.243.34.74:5432 10.243.36.99:33524 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -
[2023-02-02T17:30:11.800Z] "- - -" 0 - - - "-" 1646 1970 8 - "-" "-" "-" "-" "10.243.34.74:5432" inbound|5432|| 127.0.0.6:51239 10.243.34.74:5432 10.243.32.36:47056 outbound_.5432_._.pgbouncer.pgbouncer.svc.cluster.local -

10.243.34.74 --> pgbouncer pod IP
10.243.32.36 --> ingress gateway pod IP (not sure why the gateway appears here, as the internal apps hit pgbouncer.pgbouncer.svc.cluster.local directly)

The logs clearly show inbound requests from the internal apps.

But when we visualise the Kiali-like topology view provided by GCP, the source for the pgbouncer service shows up as unknown.

We were under the impression that the sources would be the list of internal apps hitting pgbouncer, and that they would appear in the connected graph for the pgbouncer service.

We also checked the PromQL query istio_requests_total{app_kubernetes_io_instance="pgbouncer"} to get the number of requests and their sources.


istio_requests_total{app_kubernetes_io_instance="pgbouncer", app_kubernetes_io_name="pgbouncer", cluster="gcp-np-001", connection_security_policy="none", destination_app="unknown", destination_canonical_revision="latest", destination_canonical_service="pgbouncer", destination_cluster="cn-g-asia-southeast1-g-gke-non-prod-001", destination_principal="unknown", destination_service="pgbouncer", destination_service_name="InboundPassthroughClusterIpv4", destination_service_namespace="pgbouncer", destination_version="unknown", destination_workload="pgbouncer", destination_workload_namespace="pgbouncer", instance="10.243.34.74:15020", job="kubernetes-pods", kubernetes_namespace="pgbouncer", kubernetes_pod_name="pgbouncer-86f5448f69-qgpll", pod_template_hash="86f5448f69", reporter="destination", request_protocol="http", response_code="200", response_flags="-", security_istio_io_tlsMode="istio", service_istio_io_canonical_name="pgbouncer", service_istio_io_canonical_revision="latest", source_app="unknown", source_canonical_revision="latest", source_canonical_service="unknown", source_cluster="unknown", source_principal="unknown", source_version="unknown", source_workload="unknown", source_workload_namespace="unknown"}

Here the source is again unknown, even though many requests come in from the internal apps; none of this is reflected in the PromQL result or the Kiali-like view. We are also not sure why destination_service_name="InboundPassthroughClusterIpv4" is reported as passthrough. Any insights are appreciated!
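Since PostgreSQL traffic is plain TCP rather than HTTP, should we be looking at Istio's TCP-level metrics instead of istio_requests_total? For example, something like the query below (assuming the standard Istio TCP metrics are being collected; the label values are taken from our setup):

```promql
# Connections opened toward the pgbouncer workload, broken down by source workload
sum by (source_workload, source_workload_namespace) (
  istio_tcp_connections_opened_total{destination_workload="pgbouncer",
                                     destination_workload_namespace="pgbouncer"}
)
```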

Sanjay M. P.