
I am attempting to scrape metrics from a single pod that runs my web application alongside a sidecar Envoy proxy as part of an AWS App Mesh implementation. I would like to retrieve metrics from both containers for greater visibility, but so far I've only been able to scrape one or the other.

I'm looking for something like this, which is obviously invalid:

podAnnotations: {
  prometheus.io/scrape: "true",
  prometheus.io/path1: "/metrics",
  prometheus.io/port1: "3000",
  prometheus.io/path2: "/stats/prometheus",
  prometheus.io/port2: "9901"
}

Below is my scrape config, which currently picks up only the application metrics and fails to pick up the Envoy proxy metrics:

  scrape_configs:
  - job_name: otel-agent
    scrape_interval: 10s
    static_configs:
    - targets:
      - $${K8S_POD_IP}:8889
  - job_name: 'kubernetes-pod-appmesh-envoy'
    sample_limit: 10000
    metrics_path: /stats/prometheus
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - action: keep
      regex: true
      source_labels:
      - __meta_kubernetes_pod_annotation_prometheus_io_scrape_envoy
    - source_labels: [__meta_kubernetes_pod_container_name]
      action: keep
      regex: '^envoy$'
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: ${1}:9901
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - action: replace
      source_labels:
      - __meta_kubernetes_namespace
      target_label: Namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: pod_name
    - action: replace
      source_labels:
      - __meta_kubernetes_pod_container_name
      target_label: container_name
    - action: replace
      source_labels:
      - __meta_kubernetes_pod_controller_name
      target_label: pod_controller_name
    - action: replace
      source_labels:
      - __meta_kubernetes_pod_controller_kind
      target_label: pod_controller_kind
    - action: replace
      source_labels:
      - __meta_kubernetes_pod_phase
      target_label: pod_phase
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - action: keep
      regex: true
      source_labels:
      - __meta_kubernetes_pod_annotation_prometheus_io_scrape
    - source_labels: [__address__]
      regex: '.*9901.*'
      action: drop
    - action: replace
      regex: (.+)
      source_labels:
      - __meta_kubernetes_pod_annotation_prometheus_io_path
      target_label: __metrics_path__
    - action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      source_labels:
      - __address__
      - __meta_kubernetes_pod_annotation_prometheus_io_port
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - action: replace
      source_labels:
      - __meta_kubernetes_namespace
      target_label: kubernetes_namespace
    - action: replace
      source_labels:
      - __meta_kubernetes_pod_name
      target_label: kubernetes_pod_name

Istio provides a metrics merging feature by default that publishes all metrics on a single endpoint, but of course App Mesh hasn't bothered with that yet. Is there a way to leverage the scrape config so that it picks up both the Envoy and application metrics, without having to go through the process of creating my own merger?

Please note, this question is distinctly different from questions like this one in that my two endpoints differ in both metric port and metric path.

    It's actually not that difficult to make `prometheus.io/path1` and so on. You just need to duplicate that `job_name: kubernetes-pods` and fix `source_labels` so that they use `__meta_kubernetes_pod_annotation_prometheus_io_path1` instead of `__meta_kubernetes_pod_annotation_prometheus_io_path` (same for other options like port etc). – anemyte Mar 02 '23 at 09:15
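A sketch of what that comment describes, under the assumption that the pod carries hypothetical `prometheus.io/path1`/`prometheus.io/port1` (and `path2`/`port2`) annotations as in the question: duplicate the `kubernetes-pods` job once per endpoint, with each copy reading the numbered annotation. Prometheus lowercases and underscores annotation names in the `__meta_kubernetes_pod_annotation_*` meta labels, so `prometheus.io/path1` becomes `..._prometheus_io_path1`.

```yaml
  # Sketch only: one job per numbered annotation pair. A second copy
  # (e.g. kubernetes-pods-endpoint-2) would use path2/port2 the same way.
  - job_name: kubernetes-pods-endpoint-1
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - action: keep
      regex: true
      source_labels:
      - __meta_kubernetes_pod_annotation_prometheus_io_scrape
    - action: replace
      regex: (.+)
      source_labels:
      - __meta_kubernetes_pod_annotation_prometheus_io_path1  # path1 instead of path
      target_label: __metrics_path__
    - action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      source_labels:
      - __address__
      - __meta_kubernetes_pod_annotation_prometheus_io_port1  # port1 instead of port
      target_label: __address__
```

Pods without the numbered annotations would still match the `keep` rule but fail the path/port rewrites, so in practice you may also want a `keep` on `..._prometheus_io_port1` itself.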

0 Answers