Prometheus targets show as Down

Hi,

I have two Linux servers running Ubuntu 20.04 (DigitalOcean droplets); let's call them server A and server B.
On server A I have installed Prometheus and Node Exporter.
On server B I have Grafana and a Laravel welcome page running as Docker containers. The issue is with the Prometheus targets on server A for Grafana and Laravel: Grafana shows Up, but Laravel shows Down. I don't understand why Prometheus shows the Laravel target as Down - on server B both containers, Grafana and Laravel, run fine. See the attached images.
So why does this happen, and where is the problem? I don't see any difference between the two.
If any more info is needed, please feel free to ask!

Could you break it down a bit more?

I'm going to assume, based on what you've described, that ip:8000 is Laravel.
Following the typical IT script:
Have you tried curl from server B against the Laravel endpoint? I don't know Laravel, but a 404 is pretty clear. It seems like either a path issue or a setting that isn't turned on.
If it's a path problem, then you'll need to add another scrape configuration.
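For example, a minimal sketch of such a scrape configuration, assuming the Laravel container exposes its metrics at some non-default path (the path below is a placeholder - use whatever path actually returns metrics when you curl it):

```yaml
scrape_configs:
  - job_name: 'laravel'
    # metrics_path defaults to '/metrics'; override it if the app
    # serves metrics elsewhere. This path is an assumption.
    metrics_path: '/prometheus/metrics'
    static_configs:
      - targets: ['161.35.30.24:8000']
```

If curl against that path still returns a 404, then the application itself isn't exposing metrics and no scrape config will fix it.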

Now, for the third target in the node job, I'd be willing to guess that there are a ton of metrics behind it, in which case you may again want a separate scrape configuration: increase the timeout, and potentially increase the scrape interval so it's scraped less often.
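A sketch of what that separate job might look like (the job name is hypothetical; note that Prometheus requires scrape_timeout to be no larger than scrape_interval):

```yaml
scrape_configs:
  - job_name: 'heavy-target'     # hypothetical name for the slow endpoint
    scrape_interval: 60s         # scrape less often than the 15s global default
    scrape_timeout: 30s          # allow more time for a large metrics payload
    static_configs:
      - targets: ['192.168.114.61:8116']
```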

Thanks for the answer. If I want to add a URL for Prometheus to monitor, what config changes do I need to make? I guess the main Prometheus config is prometheus.yml, where I add and edit everything, and that's all?

My prometheus.yml is below:

# Sample config for Prometheus.

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'example'

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets: ['localhost:9093']

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    scrape_timeout: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: node
    # If prometheus-node-exporter is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets:
          - 'localhost:9100'
          - '161.35.30.24:8000'
          - '161.35.30.24:3000'
          - '192.168.114.61:8116'
          - '1.1.1.1:8060'

  - job_name: 'Latvijas radio'
    # If prometheus-node-exporter is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets: ['159.148.56.67:8117']
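Yes - prometheus.yml is the right file: you add a new entry under scrape_configs (or a new target under an existing job) and then reload or restart Prometheus to pick it up. One thing worth noting about the config as posted: the node job mixes the local Node Exporter with application endpoints (Laravel on 8000, Grafana on 3000), which makes the target list hard to reason about. A sketch of how those could be split into separate jobs (the 'apps' job name is my own invention, and the port/role comments are assumptions based on the thread):

```yaml
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']   # Node Exporter only

  - job_name: 'apps'                  # hypothetical job name
    static_configs:
      - targets:
          - '161.35.30.24:8000'       # Laravel (assumed from the thread)
          - '161.35.30.24:3000'       # Grafana
```

Before restarting, you can sanity-check the file with `promtool check config prometheus.yml`.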