Context deadline exceeded

I am getting a status of DOWN for the target and an error like: Get "http://10.138.0.3:8091/actuator/prometheus": context deadline exceeded

That means that the scrape didn't complete before the configured scrape timeout.

Usually it is due to some form of networking/firewall issue, but it can occasionally be due to an exporter taking a long time to respond.

The best thing to do is to use curl/wget from the same location as Prometheus (e.g. the same VM or pod) to see if the request works, and then look at firewall logs, etc. to figure out the cause.
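For example, a quick check from the Prometheus host could look something like this (assuming the target URL from your error and the default 10s scrape_timeout):

```
# Time the request to the scrape endpoint; -m 10 caps it at 10 seconds,
# matching Prometheus' default scrape_timeout.
curl -o /dev/null -s -w 'HTTP %{http_code} in %{time_total}s\n' -m 10 \
  http://10.138.0.3:8091/actuator/prometheus
```

If that returns within a second or two, the problem is more likely on the path Prometheus itself takes (firewall, routing, DNS); if it hangs until the timeout, the exporter itself is slow. You can also look at the scrape_duration_seconds metric for that target to see how long scrapes are actually taking.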


@stuart I am getting metrics through curl on the VM, and we have opened the firewall …
Could you please tell me how I can get the firewall logs …

Are you running curl with exactly the same URL that Prometheus is using, and from the same location?

When it is run, how long does it take to return data?

Yes, I am using the same URL, I am running curl from the Prometheus server, and I am getting metrics.

I have the same issue in Prometheus, but the strange thing is it only happens when the Main PC is off.
I have an SMB share from the Main PC mounted on the Unraid server.
Fixed it by increasing scrape_timeout to 15s from the default (see the config sketch below).
But why does the scrape take longer than 10s when the Main PC is off?
Maybe the exporter is trying to get info for the SMB share from the Main PC?
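For reference, a minimal sketch of where that setting goes in prometheus.yml (the job name and target here are placeholders, not from this thread):

```yaml
scrape_configs:
  - job_name: "node"        # hypothetical job name
    scrape_interval: 30s    # how often to scrape
    scrape_timeout: 15s     # raised from the 10s default so slow scrapes still succeed
    static_configs:
      - targets: ["<exporter-host>:<port>"]   # placeholder exporter address
```

Note that scrape_timeout must not be larger than scrape_interval for the job.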

YOU SAVED ME.

It was a firewall issue.