Hi all,
I've deployed Prometheus (prometheus1) in agent mode and configured it to scrape the Prometheus in the openshift-monitoring namespace, the agent itself, and a federated Prometheus in the same cluster. I'm also trying to send these scraped metrics to another Prometheus instance (prometheus2) through remote_write. Below is my Prometheus ConfigMap.
```yaml
global:
  scrape_interval: 300s
  #evaluation_interval: 50s
  scrape_timeout: 100s
scrape_configs:
  - job_name: 'prometheus'
    honor_labels: true
    metrics_path: '/metrics'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: openshift-prometheus
    honor_labels: true
    metrics_path: '/federate'
    authorization:
      type: Bearer
      credentials: <credential>
    scheme: https
    tls_config:
      insecure_skip_verify: true
    static_configs:
      - targets: ['<target-url>']
  - job_name: prometheus-ap
    honor_labels: true
    metrics_path: '/metrics'
    basic_auth:
      username: <username>
      password: <password>
    scheme: https
    tls_config:
      insecure_skip_verify: true
    static_configs:
      - targets: ['<target-url>']
remote_write:
  - url: 'http://<svc-name>:9090/api/v1/write'
    # tls_config:
    #   insecure_skip_verify: true
    queue_config:
      capacity: 10              # default = 500
      max_shards: 4             # default = 1000
      min_shards: 1             # default = 1
      max_samples_per_send: 1   # default = 100
      # batch_send_deadline: 5s # default = 5s
      # min_backoff: 30ms       # default = 30ms
      # max_backoff: 100ms      # default = 100ms
```
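One thing I noticed while writing this up: Prometheus's `/federate` endpoint only returns series that match the `match[]` selectors passed as URL parameters, so a federation scrape without any `params` would be expected to return zero samples even when the target is up. A sketch of how the federation job would look with a selector (the `{job!=""}` selector below is just an example, not my actual config):

```yaml
- job_name: openshift-prometheus
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job!=""}'   # example selector; narrow this to the series you actually want
  scheme: https
  static_configs:
    - targets: ['<target-url>']
```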
But in my prometheus2 instance I'm not seeing the metrics scraped from the openshift-monitoring Prometheus. I can see the metric value up=1 for all the targets, and there are no error logs in the Prometheus pod. Below are the results for the queries scrape_samples_scraped and up:
```
scrape_samples_scraped{instance="<url>:443", job="prometheus-ap"}                          731
scrape_samples_scraped{instance="<openshift-prometheus-url>:443", job="openshift-prometheus"}  0
scrape_samples_scraped{instance="localhost:9090", job="prometheus"}                        381

up{instance="<url>:443", job="prometheus-ap"}                            1
up{instance="<openshift-prometheus-url>:443", job="openshift-prometheus"} 1
up{instance="localhost:9090", job="prometheus"}                          1
```
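To rule out remote_write itself, the agent also exposes its own delivery metrics, so these queries against prometheus1 should show whether samples are actually being shipped (metric names are from recent Prometheus 2.x; older releases use `prometheus_remote_storage_succeeded_samples_total` instead of `..._samples_total`):

```
# samples successfully sent to the remote endpoint
rate(prometheus_remote_storage_samples_total[5m])

# samples that failed to send, or were dropped after retries were exhausted
rate(prometheus_remote_storage_samples_failed_total[5m])
rate(prometheus_remote_storage_samples_dropped_total[5m])
```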