
I have a Prometheus configuration with many jobs where I am scraping metrics over HTTP, but I have one job where I need to scrape the metrics over HTTPS.

When I access:

https://ip-address:port/metrics

I can see the metrics. The job that I have added in the prometheus.yml configuration is:

  - job_name: 'test-jvm-metrics'
    scheme: https
    static_configs:
      - targets: ['ip:port']

When I restart Prometheus, I see an error on my target that says:

context deadline exceeded

I have read that the scrape_timeout might be the problem, but I have set it to 50 seconds and I still have the same problem.

What can cause this problem, and how do I fix it? Thank you!
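For reference, a per-job scrape_timeout sits alongside the scheme in the job itself (a sketch reusing the job and the 50-second value mentioned above):

scrape_configs:
  - job_name: 'test-jvm-metrics'
    scheme: https
    scrape_timeout: 50s       # per-job value; it can also be set once under the global: section
    static_configs:
      - targets: ['ip:port']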

11 Answers

Probably the default scrape_timeout value is too short for you:

[ scrape_timeout: <duration> | default = 10s ]

Set a bigger value for scrape_timeout.

scrape_configs:
  - job_name: 'prometheus'

    scrape_interval: 5m
    scrape_timeout: 1m

Take a look here: https://github.com/prometheus/prometheus/issues/1438

I had the same problem in the past. In my case the problem was with the certificates, and I fixed it by adding:

    tls_config:
      insecure_skip_verify: true

You can try it, maybe it will work.
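In the context of the HTTPS job from the question, that would look roughly like this (a sketch; note that insecure_skip_verify disables certificate verification, so only use it if you accept that risk):

  - job_name: 'test-jvm-metrics'
    scheme: https
    tls_config:
      insecure_skip_verify: true
    static_configs:
      - targets: ['ip:port']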

  • It's not working for me. I have tried adding the tls_config tag; however, the problem is still the same :( – Danilo Caetano Sep 28 '18 at 17:15
  • My problem was the exact opposite: insecure_skip_verify was causing issues in the _redis_ plugin, although there `insecure_skip_verify` was a top-level config option, not a child under `tls_config`. – Hamed Nemati Nov 24 '21 at 14:53

I had a similar problem, so I tried to extend my scrape_timeout, but it didn't do anything. Running promtool, however, explained the problem.

My problematic job looked like this:

- job_name: 'slow_fella'
  scrape_interval: 10s
  scrape_timeout: 90s
  static_configs:
  - targets: ['192.168.1.152:9100']
    labels:
      alias: sloooow    

Check your config in the /etc/prometheus directory by typing:

promtool check config prometheus.yml

Result explains the problem and indicates how to solve it:

Checking prometheus.yml
  FAILED: parsing YAML file prometheus.yml: scrape timeout greater than scrape interval for scrape config with job name "slow_fella"

Just ensure that your scrape_interval is at least as long as your scrape_timeout; the timeout must not exceed the interval.
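One way to make the example above pass the check is to keep the timeout within the interval, for instance (a sketch; choose values that fit how slowly your target actually responds):

- job_name: 'slow_fella'
  scrape_interval: 120s
  scrape_timeout: 90s
  static_configs:
  - targets: ['192.168.1.152:9100']
    labels:
      alias: sloooow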

  • I did this with `scrape_interval: 5m scrape_timeout: 1m`, but the problem is the same. Checking with promtool says `SUCCESS: prometheus.yml is valid prometheus config file syntax`, and the metrics are visible with curl (`ip:port/metrics`). – Thusitha Sumanadasa Sep 27 '22 at 08:36

This can happen when the Prometheus server can't reach the scraping endpoints, for example because of firewall deny rules. Just try hitting the URL in a browser with <url>:9100 (here 9100 is the port the node_exporter service runs on) and check whether you can still access it.
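If a browser is not handy, a quick check from the Prometheus host works as well (a sketch; substitute your target's address and port):

curl -v http://<ip-address>:9100/metrics
curl -vk https://<ip-address>:<port>/metrics    # -k skips certificate verification, for the HTTPS case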

I was facing this issue because the database's maximum number of connections had been reached. I increased the max_connections parameter in the database and released some connections, and then Prometheus was able to scrape metrics again.
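The answer does not say which database was involved; as a rough sketch, assuming PostgreSQL, the limit can be checked and raised like this (the parameter name and commands are PostgreSQL-specific, and the new value only takes effect after a restart):

psql -c 'SHOW max_connections;'
psql -c 'ALTER SYSTEM SET max_connections = 200;'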

In my case it was an issue with IPv6. I had blocked IPv6 with ip6tables, but that also blocked Prometheus traffic. Correcting the IPv6 settings solved the issue for me.
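A quick way to check whether this applies to you (a sketch; replace the hostname with your scrape target): see whether the target resolves to an IPv6 address and whether ip6tables has rules dropping the traffic.

getent ahosts my-target-host    # does the name resolve to an IPv6 address?
ip6tables -L -n -v              # any DROP/REJECT rules with growing packet counters?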

In my case, I had accidentally put a different port in my Kubernetes Deployment manifest than the one defined in the Service associated with it and in the Prometheus target.
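The thing to verify is that the containerPort in the Deployment, the targetPort in the Service, and the port Prometheus scrapes all line up. A sketch with placeholder names and port numbers:

# Deployment (excerpt)
    containers:
      - name: my-app
        ports:
          - containerPort: 8080
---
# Service (excerpt)
  ports:
    - port: 8080
      targetPort: 8080    # must match the containerPort above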

Increasing the timeout to 1m helped me fix a similar issue.

We started facing a similar issue when we re-configured the istio-system namespace and its Istio components. We also had Prometheus installed via prometheus-operator in the monitoring namespace, where istio-injection was enabled.

Restarting the Prometheus components of the monitoring (istio-injection enabled) namespace resolved the issue.
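If Prometheus was installed via prometheus-operator, one way to restart its components is a rollout restart in that namespace (a sketch; the resource names depend on your installation):

kubectl -n monitoring rollout restart statefulset prometheus-<your-prometheus-name>
kubectl -n monitoring rollout restart deployment <your-prometheus-operator-deployment>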

On AWS, opening the port (for Prometheus) in the security group worked for me.
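With the AWS CLI that could look roughly like this (a sketch; the security group ID, port, and CIDR are placeholders for your own values):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9100 \
  --cidr 10.0.0.0/16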

For me the problem was that I was running the exporter inside an EC2 instance and forgot to allow TCP connections to the listen port in the security group (also check the routing of your subnets), so the Prometheus container could not connect to the listen port of my exporter's machine.

Inside the Prometheus container you can run wget exporterIp:listenPort; if it does not return anything or cannot connect, there is probably a network issue.
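For example, depending on how the Prometheus container is running (a sketch; the container/pod name, exporter IP, and port are placeholders):

docker exec -it prometheus wget -qO- http://<exporterIp>:<listenPort>/metrics
kubectl exec -it <prometheus-pod> -- wget -qO- http://<exporterIp>:<listenPort>/metrics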
