That suggests that no metrics of that type are being recorded for the other cluster. What does the Targets page look like? Are any errors shown, and does it have the scrape config for that metric source? How are you configuring your scrape jobs: static config, Prometheus Operator, Kubernetes service discovery with relabeling, something else?
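If it helps, a couple of quick checks in the expression browser can narrow this down (a rough sketch, run one query at a time; the job and metrics_path label names assume the kube-prometheus-stack defaults):

  # Is the kubelet/cAdvisor endpoint being scraped at all on the affected cluster?
  up{job="kubelet"}

  # Which jobs and paths, if any, currently deliver the metric in question?
  count by (job, metrics_path) (container_memory_working_set_bytes)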
I use the Prometheus Operator (Quay).
Automation ensures that the same version is always present on each cluster.
The PromQL query I mentioned has also worked in the past and displayed data on each cluster.
Unfortunately, I can’t find much about this online or in the communities, which is usually a sign that you’ve gotten yourself into a real problem.
I also don’t know where to start looking: is it related to the switch from dockerd to containerd, or to some RPM update in the Red Hat environment on the individual nodes?
The metric/value {id="/"} is also not entirely clear to me; what exactly does it represent?
Yes, I see several "kubelet" jobs, all without errors.
It’s not that the metrics don’t deliver any values; they all work. It’s just that one cluster unfortunately delivers values that differ from the other clusters.
I just want to know why that is.
It would be interesting to me how container_memory_working_set_bytes{id="/"} behaves for others.
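For reference, these are roughly the queries I am comparing. As far as I understand, id="/" selects the root cgroup, i.e. the memory that cAdvisor reports for the node as a whole (a sketch; label names assume the kube-prometheus-stack defaults):

  # Working-set memory of the root cgroup (the whole node), as exposed
  # by cAdvisor via the kubelet scrape:
  container_memory_working_set_bytes{id="/"}

  # The same thing aggregated per scraped instance, easier to compare across clusters:
  sum by (instance) (container_memory_working_set_bytes{id="/"})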
The bug has been found: newer node exporter versions no longer deliver what I expect.
My "kube-prometheus-stack" is currently at version 50.3.1, which includes node exporter quay.io/prometheus/node-exporter:v1.5.0.
With this version, the Prometheus GUI unfortunately shows an "Empty query result" for the PromQL query container_memory_working_set_bytes{id="/"}.
Previously, the "kube-prometheus-stack" was used in version 34.9.0, which included node exporter quay.io/prometheus/node-exporter:v1.3.1. I restored this version.
With this "kube-prometheus-stack", the PromQL query above displays a correct result, or at least the one I expect.
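For anyone comparing versions, two quick checks in the expression browser (a sketch, run individually; node_exporter_build_info is exposed by the node exporter itself):

  # Shows which node exporter build each target is actually running:
  node_exporter_build_info

  # Confirms whether the root-cgroup series is delivered again after the rollback:
  count(container_memory_working_set_bytes{id="/"})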