Prometheus node shows as Down


I have 2 Linux servers running Ubuntu 20.04 on DigitalOcean droplets; let's call them server A and server B.
On server A I have installed Prometheus and Node Exporter.
On server B I have Grafana and a Laravel welcome page running as Docker containers. The issue is with the Prometheus targets on server A for Grafana and Laravel: Grafana shows Up, but Laravel shows Down. I don't understand why Prometheus shows the Laravel node as Down, because on server B both containers, Grafana and Laravel, run fine. See the images.
So why does this happen, and where is the problem? I don't see any differences.
If any info is needed, please feel free to ask!

Could you break it down a bit more?

I'm going to assume, based on what you've described, that ip:8000 is Laravel.
Following the typical IT script: have you tried curl against Laravel from server B? I don't know Laravel, but a 404 is pretty clear. It looks like either a path issue or a setting that isn't turned on.
If it's a path problem, you'll need to add another scrape configuration.

Now for the 3rd node, I'd be willing to guess that there are a ton of metrics behind it, in which case you may again want a separate scrape configuration, and you may need to increase the timeout and potentially the scrape interval.
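To make that concrete, a separate scrape config for the Laravel target could look something like the sketch below. The job name, IP placeholder, port, and metrics path are all assumptions; adjust them to whatever the app actually exposes.

```yaml
scrape_configs:
  # Hypothetical job for the Laravel container on server B.
  - job_name: 'laravel'
    metrics_path: /metrics   # change this if the app serves metrics on another path (a wrong path gives a 404)
    scrape_interval: 30s     # scrape less often if there are a lot of metrics
    scrape_timeout: 25s      # allow more time for a large metrics payload (must be <= scrape_interval)
    static_configs:
      - targets: ['<server-b-ip>:8000']  # placeholder, not a real address
```

The key point is that `metrics_path` and `scrape_timeout` are per-job settings, so one slow or nonstandard target doesn't force changes on every other job.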

Thanks for the answer! If I want to add a URL for monitoring in Prometheus, what config changes do I need to make? I guess the main Prometheus config is prometheus.yml, where I add and edit all changes, and that's all?

My prometheus.yml is below:

# Sample config for Prometheus.

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'example'

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets: ['localhost:9093']

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    scrape_timeout: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: node
    # If prometheus-node-exporter is installed, grab stats about the local
    # machine by default.
    static_configs:
      - targets: ['localhost:9100']
      - targets: ['']
      - targets: ['']
      - targets: ['']
      - targets: ['']

  - job_name: 'Latvijas radio'
    static_configs:
      - targets: ['']
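Yes, prometheus.yml is the right file: each new URL is just another entry under scrape_configs. A minimal sketch, assuming a plain HTTP endpoint that serves /metrics (the job name and host:port below are placeholders, not your real values):

```yaml
scrape_configs:
  # Hypothetical new job; pick any descriptive name.
  - job_name: 'my-new-target'
    static_configs:
      - targets: ['example.com:9100']  # host:port of the endpoint to monitor
```

Note that a target is a host:port pair, not a full URL; the path comes from metrics_path (default /metrics) and the scheme from scheme (default http). After editing, you can validate the file with promtool (`promtool check config /etc/prometheus/prometheus.yml`, if promtool is installed) and then restart or reload the Prometheus service so the change takes effect.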