We use remote_write and node_exporter to forward a host's metrics to another system. The data scraped directly from node_exporter is correct, but the filesystem data stored in Prometheus is wrong: the values and mountpoint labels are mixed up. We upgraded Prometheus from v2.17.1 to v2.28.1, but the problem persists.
The problem occurs with both single-node and multi-node collection.
Our prometheus.yml:

```yaml
global:
  scrape_interval: 30s     # Set the scrape interval to every 30 seconds. Default is every 1 minute.
  evaluation_interval: 30s # Evaluate rules every 30 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

remote_write:
  - url: "http://ip:port/prometheusWrite?db=prometheus_metrics"

remote_read:
  #- url: "http:///api/v1/prom/read?db=prometheus"

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    scrape_interval: 60s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter_job'
    scrape_interval: 30s
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/node_exportor_target.yml
        refresh_interval: 1m
```
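For context, the file_sd target file referenced by the node_exporter job follows Prometheus's standard file-based service-discovery format. A minimal sketch is below; the hostnames, port, and label are placeholders for illustration, not our actual targets:

```yaml
# /etc/prometheus/targets/node_exportor_target.yml (hypothetical contents)
- targets:
    - 'node1:9100'   # placeholder host; node_exporter's default port is 9100
    - 'node2:9100'   # placeholder host
  labels:
    env: 'example'   # hypothetical extra label attached to all targets in this group
```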
Does anyone else have this problem? Thank you for your help.