Node_exporter metrics collection duration

I have never seen any figures for how long it should take node_exporter, running as a Docker container, to collect metrics on a regular 16 GB, 2 CPU machine. Right now I am seeing values like 48 seconds, and sometimes more than 1 minute. Is this normal, or do I need to change something?

In this link: Link

there is a comment like:

“Strictly speaking, this isn’t a bug. We’ve added a number of features to the systemd collector, which means it’s just going to take longer due to how slow systemd dbus requests are.”

So is this normal? Do I need to worry? Below are my collector duration seconds.

node_scrape_collector_duration_seconds{collector="arp"} 0.022299501
node_scrape_collector_duration_seconds{collector="bcache"} 2.6546e-05
node_scrape_collector_duration_seconds{collector="bonding"} 1.407e-05
node_scrape_collector_duration_seconds{collector="btrfs"} 0.021858464
node_scrape_collector_duration_seconds{collector="conntrack"} 6.0172e-05
node_scrape_collector_duration_seconds{collector="cpu"} 0.016586098
node_scrape_collector_duration_seconds{collector="cpufreq"} 7.3239e-05
node_scrape_collector_duration_seconds{collector="diskstats"} 0.000292871
node_scrape_collector_duration_seconds{collector="dmi"} 2.4215e-05
node_scrape_collector_duration_seconds{collector="edac"} 2.31e-05
node_scrape_collector_duration_seconds{collector="entropy"} 0.022390953
node_scrape_collector_duration_seconds{collector="fibrechannel"} 1.9641e-05
node_scrape_collector_duration_seconds{collector="filefd"} 4.3265e-05
node_scrape_collector_duration_seconds{collector="filesystem"} 0.020858399
node_scrape_collector_duration_seconds{collector="hwmon"} 0.000245236
node_scrape_collector_duration_seconds{collector="infiniband"} 1.2587e-05
node_scrape_collector_duration_seconds{collector="ipvs"} 0.016111786
node_scrape_collector_duration_seconds{collector="loadavg"} 0.02341834
node_scrape_collector_duration_seconds{collector="mdadm"} 3.1899e-05
node_scrape_collector_duration_seconds{collector="meminfo"} 0.00017979
node_scrape_collector_duration_seconds{collector="netclass"} 0.05821121
node_scrape_collector_duration_seconds{collector="netdev"} 0.019214861
node_scrape_collector_duration_seconds{collector="netstat"} 0.021276428
node_scrape_collector_duration_seconds{collector="nfs"} 0.016097618
node_scrape_collector_duration_seconds{collector="nfsd"} 0.016128294
node_scrape_collector_duration_seconds{collector="nvme"} 0.022412158
node_scrape_collector_duration_seconds{collector="os"} 0.02067779
node_scrape_collector_duration_seconds{collector="powersupplyclass"} 0.022216052
node_scrape_collector_duration_seconds{collector="pressure"} 9.6221e-05
node_scrape_collector_duration_seconds{collector="rapl"} 1.5078e-05
node_scrape_collector_duration_seconds{collector="schedstat"} 0.020904082
node_scrape_collector_duration_seconds{collector="selinux"} 0.021288755
node_scrape_collector_duration_seconds{collector="sockstat"} 0.000119152
node_scrape_collector_duration_seconds{collector="softnet"} 0.018876854
node_scrape_collector_duration_seconds{collector="stat"} 0.000131606
node_scrape_collector_duration_seconds{collector="tapestats"} 0.016121241
node_scrape_collector_duration_seconds{collector="textfile"} 0.021268253
node_scrape_collector_duration_seconds{collector="thermal_zone"} 0.000232663
node_scrape_collector_duration_seconds{collector="time"} 8.5808e-05
node_scrape_collector_duration_seconds{collector="timex"} 0.018890733
node_scrape_collector_duration_seconds{collector="udp_queues"} 0.000232812
node_scrape_collector_duration_seconds{collector="uname"} 1.0764e-05
node_scrape_collector_duration_seconds{collector="vmstat"} 0.020804387
node_scrape_collector_duration_seconds{collector="xfs"} 0.022317828
node_scrape_collector_duration_seconds{collector="zfs"} 0.021259882
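
For reference, one way to total these per-collector values directly from the exporter is a quick shell one-liner (a sketch; it assumes the exporter listens on its default port 9100):

# sum of all per-collector durations, scraped from the exporter's own endpoint
curl -s http://localhost:9100/metrics \
  | awk '/^node_scrape_collector_duration_seconds/ {sum += $NF} END {print sum}'

On the Prometheus side, scrape_duration_seconds for the node_exporter target shows the overall scrape time, which is the number to compare against the 48 seconds mentioned above.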

This is likely caused by a bug in recent kernel versions that makes concurrent reads of /sys deadlock. The two workarounds we have right now are to downgrade your kernel to avoid the kernel bug, or to add GOMAXPROCS=1 to the environment variables of your node_exporter.

See: https://github.com/prometheus/node_exporter/issues/2500
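
Since you are running node_exporter as a Docker container, setting the variable could look something like this (a sketch based on the usual Docker invocation from the node_exporter README; adjust the image tag and mounts to your own setup):

# GOMAXPROCS=1 is the workaround for the concurrent /sys read deadlock
docker run -d \
  --net="host" \
  --pid="host" \
  -e GOMAXPROCS=1 \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host

The only change from the standard invocation is the -e GOMAXPROCS=1 environment variable.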


Thank you. GOMAXPROCS=1 helped!