PromQL shows different results in two k8s clusters

I’ve been using Grafana dashboards for some time to monitor Kubernetes clusters.
For this I use parts of the dashboard “Kubernetes cluster monitoring (via Prometheus)” from the Grafana dashboards page.

But I’m currently seeing different behavior on different K8s clusters; essentially it comes down to this PromQL query:

`sum (container_memory_working_set_bytes{id="/",kubernetes_io_hostname=~"^$Node$"}) / sum (machine_memory_bytes{kubernetes_io_hostname=~"^$Node$"}) * 100`

On one cluster I get a result; the other reports “N/A”.

In the Prometheus GUI I have already found out that the problem is essentially related to the metric, or more precisely the label selector `{id="/"}`.

Here’s a test with the shortened PromQL:

`container_memory_working_set_bytes{id="/"}`

On one K8s cluster this PromQL shows me all nodes belonging to the cluster; on the other cluster it returns only an “Empty query result”.

Does anyone have any ideas what is going wrong here?

That suggests that no metrics of that type are being recorded for the other cluster. What does the targets page look like? Are any errors shown, and does it have the scrape config for that metric source? How are you configuring your scrape jobs - static config, Prometheus Operator, Kubernetes service discovery with relabeling, something else?
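
If it helps, a quick generic check from the Prometheus query page (nothing specific to your setup) is to compare target health per scrape job on both clusters:

```
# Number of healthy targets per scrape job - compare this between the two clusters
count by (job) (up == 1)

# Any targets that are currently failing to be scraped
up == 0
```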

I use the Prometheus Operator (Quay).
Automation ensures that the same version is always present on each cluster.

The PromQL I mentioned has also worked in the past and displayed data on every cluster.
Unfortunately, I can’t find much about this online or in the communities, which is usually an indicator that you’ve gotten yourself into an unusual problem.

I also don’t know where to start looking: whether it’s related to the switch from dockerd to containerd, or to some RPM update in the Red Hat environment on the individual nodes.
It’s also not really clear to me what the label selector `{id="/"}` actually does.
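
In case it helps with debugging, a generic query like the following (not specific to the dashboard, just a way to inspect the `id` label) should show what each cluster actually exposes:

```
# How many root-cgroup series exist (roughly one per node when cAdvisor exposes id="/")
count(container_memory_working_set_bytes{id="/"})

# List the distinct id label values that do exist (can be a long list on a busy cluster)
count by (id) (container_memory_working_set_bytes)
```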

As mentioned, take a look at the Prometheus targets page. Look for any errors and check whether that job exists.

I had already done that and found no errors.

And the job that would generate these metrics was listed?

Oh, did I look for errors in the wrong place?
I checked for errors here: http://nodenameXYZ:9100/metrics

Basically, the current issue is only about the result of this PromQL on two different clusters; I’m attaching screenshots here for cluster A and cluster B.

You want to be looking at the /targets page in Prometheus. Specifically, look at the “kubelet” job.
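
If the job is there, you can also check it from the query side. Something like this (assuming the job is literally named “kubelet”, as in a default kube-prometheus-stack install) shows whether its scrapes are succeeding:

```
# 1 = scrape succeeded, 0 = scrape failed, per kubelet target
up{job="kubelet"}

# kube-prometheus-stack adds a metrics_path label; this narrows it to the
# cAdvisor endpoint, which is what serves the container_* metrics
up{job="kubelet", metrics_path="/metrics/cadvisor"}
```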

Ah, thanks, got it. No errors here either.

So you see a job “kubelet” listed with no errors? What targets does it have?

Yes, I see several “kubelet” jobs listed, all without errors.

It’s not that the metrics don’t deliver any values; they all work. It’s just that one cluster unfortunately delivers results that differ from the other clusters.
I just want to know why that is.

It would be interesting for me to see how `container_memory_working_set_bytes{id="/"}` behaves on other clusters.

The bug has been found.
Newer node exporters no longer show what is expected.

My “kube-prometheus-stack” is currently at version 50.3.1, including node exporter quay.io/prometheus/node-exporter:v1.5.0.
Here the Prometheus GUI unfortunately shows an “Empty query result” for the PromQL `container_memory_working_set_bytes{id="/"}`.

Previously, “kube-prometheus-stack” version 34.9.0 was used, including the node exporter quay.io/prometheus/node-exporter:v1.3.1; I restored this version.
With this “kube-prometheus-stack”, the PromQL specified above returns a correct result, or at least the result I want.
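
For anyone comparing the two versions, a generic sanity check (not specific to this chart version) is to ask Prometheus which scrape job actually produces the series in question:

```
# Which job (and, where present, metrics_path) exposes the root-cgroup series,
# and how many nodes report it
count by (job, metrics_path) (container_memory_working_set_bytes{id="/"})
```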