Prometheus not working after configuring a Docker named volume

I have the following docker-compose file for deploying my monitoring stack:

version: '3'

services:

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus/prometheus.yaml:/etc/prometheus/prometheus.yaml
      # - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yaml'
      # - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=24w'
    ports:
      - 9090:9090

  node-exporter:
    image: prom/node-exporter
    ports:
      - 9100:9100

  cadvisor:
    image: google/cadvisor:latest
    ports:
      - 8081:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro

  grafana:
    image: grafana/grafana
    user: "1000"
    volumes:
      - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
      - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
    depends_on:
      - prometheus
    ports:
      - 3032:3000
    
# volumes:
#   prometheus_data: 

It works perfectly fine.

I also have some Grafana panels embedded in my web app, and they render correctly as well.

I wanted to persist the monitoring data, so I needed to add a named Docker volume. To do this, I un-commented the commented lines in the docker-compose file above. This made cAdvisor unavailable to Prometheus.

The embedded panels contained information from both node-exporter (CPU, RAM, etc. for the host machine) and cAdvisor (CPU, RAM, etc. for each Docker container). After un-commenting those lines, all of these panels started showing no data. I expected only the container-related metrics to become unavailable, since only cAdvisor was down, but even the host-machine panels stopped working. However, I can still access all the metrics endpoints from a browser, and they return a huge list of metrics:

  1. Prometheus endpoint: http://192.168.0.104:9090/metrics
  2. cAdvisor endpoint: http://192.168.0.104:8081/metrics
  3. node-exporter endpoint: http://192.168.0.104:9100/metrics
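
For completeness, those browser checks can be scripted. A minimal sketch (Python is just for illustration; the host IP and ports are the ones from the list above):

```python
import urllib.request
import urllib.error

def probe(url, timeout=3):
    """Return True if the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Host IP and ports taken from the question; all three endpoints
    # answer from the host even while the targets show as down.
    for url in ("http://192.168.0.104:9090/metrics",
                "http://192.168.0.104:9100/metrics",
                "http://192.168.0.104:8081/metrics"):
        print(url, probe(url))
```

Note that this only proves the endpoints are reachable *from the host* via the published ports; Prometheus scrapes them over the compose network, which is a different path.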

What mistake am I making here?

PS: My prometheus.yaml looks like this:

global:
  scrape_interval: 5s
  external_labels:
    monitor: 'Monitoring'
scrape_configs:
  - job_name: 'prometheus' 
    static_configs: 
      - targets: ['prometheus:9090']
  - job_name: 'node-exporter' 
    static_configs: 
      - targets: ['node-exporter:9100'] 
  - job_name: 'cAdvisor' 
    static_configs:
      - targets: ['cadvisor:8080']
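
Prometheus's own view of these scrape jobs can be queried from its HTTP API (`/api/v1/targets`); the `lastError` field usually says why a target is down. A small sketch, assuming the same host IP as above:

```python
import json
import urllib.request

def parse_targets(payload):
    """Reduce a /api/v1/targets payload to (job, health, lastError) tuples."""
    return [(t["labels"]["job"], t["health"], t.get("lastError", ""))
            for t in payload["data"]["activeTargets"]]

def target_health(api_base):
    """Ask Prometheus why each scrape target is up or down; lastError
    explains failures even when the /metrics pages load fine in a browser."""
    with urllib.request.urlopen(f"{api_base}/api/v1/targets") as resp:
        return parse_targets(json.load(resp))

# Example (host IP from the question):
# for job, health, err in target_health("http://192.168.0.104:9090"):
#     print(job, health, err)
```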