Non-existent metrics

Prometheus v2.41.0, Prometheus-Operator v0.62.0.

Prometheus is configured to remote write to VictoriaMetrics.
To reproduce, create a test pod:

apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: default
spec:
  containers:
    - name: test
      image: ubuntu
      imagePullPolicy: Always
      stdin: true
      tty: true
      securityContext:
        privileged: true
      command:
        - /bin/bash            # interactive shell; with stdin/tty it keeps the container running
      livenessProbe:
        exec:
          command:
          - "false"            # always exits non-zero, so the probe never succeeds
        periodSeconds: 5
        failureThreshold: 3    # the container is restarted after three failed checks

[Screenshots: the top graph uses the Prometheus data source, the lower graph uses the VictoriaMetrics data source.]

Why do metrics keep coming in when the pod is gone?

The pod was killed at 14:56, but the Prometheus metrics kept coming for another 4 minutes, until 15:00.

Can you tell me why this might happen?

When you ask Prometheus for metrics at a particular timestamp, in general nothing will exist at that precise time: scrapes happen periodically, and their exact timing isn't known in advance. So Prometheus actually looks for the most recent scraped sample before the requested timestamp. By default it won't look backwards forever; it only looks back a maximum of 5 minutes. As a result, if a metric stops being collected it can still appear on graphs for a short period of time.
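For reference, the lookback window described above corresponds to Prometheus's --query.lookback-delta command-line flag, which defaults to 5m. A minimal sketch of where it would appear in a plain Prometheus container spec (the image tag and config path here are only illustrative):

containers:
  - name: prometheus
    image: prom/prometheus:v2.41.0
    args:
      - --config.file=/etc/prometheus/prometheus.yml
      - --query.lookback-delta=5m   # the default; how far back queries look for the latest sample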


Thanks for the answer!
Do you know if it is possible to adjust this time?

It is, but isn’t generally recommended. Why would you want to change it?

5 minutes is too long.
It leads to incorrect data being displayed on the dashboards.

Reducing it is likely to break things. For example, the maximum usable scrape interval would shrink (it is generally around 2 minutes with the 5-minute staleness value).
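To make that constraint concrete, here is an illustrative global scrape configuration (the interval values are just examples); the comments restate the relationship described above:

global:
  scrape_interval: 30s   # illustrative; how often targets are scraped
  scrape_timeout: 10s
# With the default 5m lookback, scrape intervals up to roughly 2 minutes still leave
# headroom for a missed or late scrape before series vanish from graphs.
# Shrinking the lookback window shrinks that headroom by the same proportion.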


I'm just curious now.
I did not find a setting in the documentation for changing this parameter.
Can it only be changed in the source code?
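As a hedged pointer rather than a definitive answer: this is not a source-code change but the --query.lookback-delta command-line flag mentioned above. With the Prometheus Operator it can be expressed through the Prometheus custom resource; a minimal sketch, assuming the spec.query.lookbackDelta field of the CRD (the name and namespace below are hypothetical, and the field should be verified against the API reference for operator v0.62.0):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s            # hypothetical resource name
  namespace: monitoring
spec:
  # assumption: the operator's QuerySpec exposes the query lookback window
  query:
    lookbackDelta: 5m  # shown at the default value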