Unable to access `metrics` information on CentOS 9 Stream

Host operating system: output of uname -a

  • I’m actually running Node Exporter inside Docker, so the output below was captured on the host, outside the container.
Linux node1 5.14.0-39.el9.x86_64 #1 SMP PREEMPT Fri Dec 24 04:03:40 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

node_exporter version: output of node_exporter --version

node_exporter, version 1.3.1 (branch: HEAD, revision: a2321e7b940ddcff26873612bccdf7cd4c42b6b6)
  build user:       root@243aafa5525c
  build date:       20211205-11:09:49
  go version:       go1.17.3
  platform:         linux/amd64

node_exporter command line flags

ts=2022-04-17T12:50:01.180Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)"
ts=2022-04-17T12:50:01.180Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)"
ts=2022-04-17T12:50:01.182Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/)
ts=2022-04-17T12:50:01.182Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:108 level=info msg="Enabled collectors"
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=arp
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=bcache
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=bonding
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=btrfs
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=conntrack
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=cpu
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=cpufreq
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=diskstats
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=dmi
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=edac
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=entropy
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=fibrechannel
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=filefd
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=filesystem
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=hwmon
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=infiniband
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=ipvs
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=loadavg
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=mdadm
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=meminfo
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=netclass
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=netdev
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=netstat
ts=2022-04-17T12:50:01.184Z caller=node_exporter.go:115 level=info collector=nfs
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=nfsd
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=nvme
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=os
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=powersupplyclass
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=pressure
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=rapl
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=schedstat
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=sockstat
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=softnet
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=stat
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=tapestats
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=textfile
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=thermal_zone
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=time
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=timex
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=udp_queues
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=uname
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=vmstat
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=xfs
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:115 level=info collector=zfs
ts=2022-04-17T12:50:01.185Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100
ts=2022-04-17T12:50:01.186Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false

Are you running node_exporter in Docker?

  • Yes.

What did you do that produced an error?

  • I ran `busybox wget http://localhost:9100/metrics` inside the container. My docker-compose.yml is attached below.
version: "3.8"

services:
  node-exporter:
    image: prom/node-exporter:v1.3.1
    hostname: node-exporter.lingh.com
    command:
      - '--path.rootfs=/host'
    network_mode: host
    pid: host
    restart: unless-stopped
    ports:
      - "9100:9100"
    volumes:
      - '/:/host:ro,rslave'

What did you expect to see?

  • I expected to see the metrics output, something like:
go_gc_duration_seconds{quantile="0"} 7.7777e-05
go_gc_duration_seconds{quantile="0.25"} 0.000113756
go_gc_duration_seconds{quantile="0.5"} 0.000127199
go_gc_duration_seconds{quantile="0.75"} 0.000147778
go_gc_duration_seconds{quantile="1"} 0.000371894
go_gc_duration_seconds_sum 0.292994058
go_gc_duration_seconds_count 2029

What did you see instead?

  • An error message from wget saying I don’t have permission to access `metrics`:
/ $ busybox wget http://localhost:9100/metrics
Connecting to localhost:9100 (127.0.0.1:9100)
wget: can't open 'metrics': Permission denied

Sorry, I misunderstood the meaning of `network_mode: host` in docker-compose.yml. After removing `network_mode: host`, the data now flows to otel/opentelemetry-collector normally. I have opened a follow-up issue suggesting the docs cover this: "Deployment section of the documentation for Docker Compose should indicate more information about `network_mode`" · Issue #2355 · prometheus/node_exporter · GitHub.
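
For reference, a minimal sketch of the adjusted service with `network_mode: host` removed (image, hostname, command, and volumes as in my original file). Note that Docker Compose ignores the `ports:` mapping while `network_mode: host` is set, so the publish only takes effect once it is removed; keep in mind the official deployment example uses host networking so the network collectors report the host’s interfaces rather than the container’s.

version: "3.8"

services:
  node-exporter:
    image: prom/node-exporter:v1.3.1
    hostname: node-exporter.lingh.com
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: unless-stopped
    ports:
      - "9100:9100"   # effective now that network_mode: host is removed
    volumes:
      - '/:/host:ro,rslave'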

You’re trying to use wget inside a container. The error occurs because wget cannot write out its output file, named “metrics”: the filesystem in the container is read-only.

This has nothing to do with network mode or opentelemetry.
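
As a quick check, writing the response to stdout instead of a file sidesteps the read-only filesystem; busybox wget accepts `-O -` for exactly that (same container and busybox binary as in the session above):

/ $ busybox wget -q -O - http://localhost:9100/metrics

If the metrics print, the exporter is serving correctly and the only problem was the output file.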

@SuperQ I verified the network_mode problem on the node_exporter -> opentelemetry-collector -> SkyWalking OAP Server -> SkyWalking UI pipeline. This is not difficult; I can provide a verified docker-compose.yml in the issue or here if necessary.