Simple metrics from prometheus_client_php all messed up when querying in Grafana

Hello,

I just set up a /metrics endpoint for one of my services in my k8s cluster. Short info: everything is set up via Terraform, there are multiple applications running in the cluster, and the monitoring stack consists of:

  • Grafana
  • Grafana Agent
  • Prometheus Operator CRDs (for PodMonitor)
  • Mimir

The application exposing the metrics endpoint is written in PHP (Laravel Octane with Swoole).
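For context, the endpoint is built with prometheus_client_php. Below is a simplified sketch of roughly how the metrics are registered and rendered; the InMemory adapter, the registration call, and the route closure are placeholders for my actual setup, not the exact code:

<?php

use Illuminate\Support\Facades\Route;
use Prometheus\CollectorRegistry;
use Prometheus\RenderTextFormat;
use Prometheus\Storage\InMemory;

// Registry backed by a storage adapter (InMemory shown here only as a placeholder).
$registry = new CollectorRegistry(new InMemory());

// A middleware observes every handled request into this histogram.
$histogram = $registry->getOrRegisterHistogram(
    '',                               // namespace
    'http_request_duration_seconds',  // metric name
    'Duration of handled HTTP requests',
    ['path', 'status_code']
);
$histogram->observe(0.012, ['GET /health', '200']);

// The /metrics route renders the text exposition format shown below.
Route::get('/metrics', function () use ($registry) {
    $renderer = new RenderTextFormat();

    return response($renderer->render($registry->getMetricFamilySamples()), 200)
        ->header('Content-Type', RenderTextFormat::MIME_TYPE);
});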
Everything works fine for all services but one (there is a PodMonitor on top of each application, and the PodMonitors are scraped by the Grafana Agents). When I curl the affected service's /metrics endpoint manually, everything looks normal, e.g.:

http_request_duration_seconds_count{path="GET /health",status_code="200"} 187

http_request_duration_seconds_count{path="GET /metrics",status_code="200"} 1

http_request_duration_seconds_count{path="GET /test",status_code="200"} 2

Headers:

HTTP/1.1 200 OK
Cache-Control: no-cache, private
Date: Fri, 24 Nov 2023 07:37:53 GMT
Content-Type: text/plain; version=0.0.4; charset=UTF-8
Server: swoole-http-server
Connection: keep-alive
Content-Length: 3045

The /health counter increases because of the probes, and the /metrics counter because of my own request. However, when I go to Grafana and query

http_request_duration_seconds_count{container="test-app"}

where "test-app" is the name of the application, I get values that don't match what the pod actually exposes, e.g.

http_request_duration_seconds_count{container="test-app", endpoint="port-15000", …, path="GET /health", status_code="200"} → Value = 12

http_request_duration_seconds_count{container="test-app", endpoint="port-15000", …, path="GET /metrics", status_code="200"} → Value = 154

I get the same results when querying Mimir directly. The "GET /metrics" value in Grafana keeps increasing over time on its own, just as the /health counter (as expected) does in the metrics exposed by the application itself.

There’s exactly one pod running for this application.
The /test route is missing from the query results entirely.

The pod names in the discovered labels returned from Mimir/Grafana point to the correct pod, and checking the Grafana Agent config shows the target is correct. Scraping also works fine for all the other services (NodeJS & NestJS).

I’m totally lost now.

Because this is not happening to any other service (all written in NodeJS & NestJS), I don't think it's a misconfiguration of the Grafana Agent or Mimir.

Any idea what's causing this issue, or how to debug it further?

Thanks in advance!