Alerts are not sent from Prometheus to Alertmanager

Hi Team,

We have set up a log analyzer that extracts 5xx, 4xx, and 2xx response codes in our environment. The extracted metrics are sent to Prometheus, which forwards alerts to Alertmanager, and we receive them on Slack.

Below is the alerting rule YAML that we have set up on the Prometheus server:

groups:
- name: microservices_http
  rules:
  - alert: microservice_5xx
    annotations:
      description: '{{ $labels.service }} has been triggering http code: {{$labels.code}},
        at least 5 times in 2 minutes.'
    expr: rate(yaqoot_response_code_5xx_total[2m]) > 5
    labels:
      severity: page
  - alert: semati-ser

We can see that Prometheus received the data from the log analyzer, but did it send the alerts to Alertmanager or not? The 5xx condition was firing for almost 3 hours, yet we received an alert on Slack only once.

We suspect there is an issue between Alertmanager and the Prometheus server.

We need help understanding how to check the logs of the Prometheus server and Alertmanager.
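For context, our Alertmanager routing section looks roughly like the sketch below (reconstructed from memory; the receiver and channel names are placeholders, so the real file may differ):

# Rough sketch of our Alertmanager route; names and values are placeholders.
route:
  receiver: slack-default
  group_by: ['alertname', 'service']
  group_wait: 30s
  group_interval: 5m
  # How often a still-firing alert is re-notified; Alertmanager's default is 4h.
  repeat_interval: 4h

receivers:
- name: slack-default
  slack_configs:
  - channel: '#alerts'
    send_resolved: true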

Your alert expression and description don't match. You are using rate(), which calculates the per-second rate of increase (so errors per second), while your description talks about the overall increase in errors over the period. Maybe you want the increase() function?
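For example, a rule closer to what the description says would look something like the sketch below (untested, keeping your metric name and annotation; the sum by (service, code) aggregation is an assumption about your label set):

  # Sketch: fire when a service has returned a 5xx code more than 5 times
  # over the last 2 minutes, rather than more than 5 errors per second.
  - alert: microservice_5xx
    expr: sum by (service, code) (increase(yaqoot_response_code_5xx_total[2m])) > 5
    annotations:
      description: '{{ $labels.service }} has been triggering http code: {{$labels.code}},
        at least 5 times in 2 minutes.'
    labels:
      severity: page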

Thank you, Stuart, for replying.

We have multiple response-code alerts set up, such as for 200, 300, and 400, and we have never faced any issue with those alerts. This is the first time we have received more than 1000 alerts in a few hours.

The alert expression and description match our environment; however, if you need any config files, let me know and I can share them with you.