Connect Kubernetes metrics to Existing Prometheus

Hi, I have a Prometheus server up and running that gathers metrics from various Linux and Windows servers. I recently created a 3-node Ubuntu K8S cluster and would like to know if there is a way/exporter to get metrics from K8S into my Prometheus/Grafana infra (something like the Linux, Windows, or database exporters).
Whenever I google this, I come across articles, blogs, and K8s docs that all talk about deploying Prometheus, Grafana… inside K8S. Is this the only way to get metrics from a K8S cluster into Prometheus/Grafana? Thanks in advance.

So are you talking about scraping metrics from Kubernetes pods with an existing Prometheus server that’s hosted outside of Kubernetes?

Yes, exactly. We have had our Prometheus/Grafana up and running for a long time and it works fine. The K8S cluster was set up for a POC to validate migrating a monolithic app to a microservices architecture on K8S, to show to business people.

Hi Stuart, any feedback on this? Thanks!

So it is totally possible to have a Prometheus server outside of the cluster. You would however need a few things:

  1. Access to the internal cluster networking, so pods can be accessed for scraping (for example, by joining a service mesh or overlay network you are using)
  2. Use the kubernetes_sd option in Prometheus, which needs access to the Kubernetes API endpoint from your Prometheus server, with an appropriate security token.

Alternatively, it is often easier to run something within the Kubernetes cluster that does the scraping, and then use remote write to send that data to the external Prometheus. You can run full-fat Prometheus within the cluster, or just use “agent mode”. Another useful option is to additionally use the Prometheus Operator, which allows you to more easily set up scrape configurations via ServiceMonitor & PodMonitor custom resources.
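As a very rough sketch (the remote write URL and job name below are just placeholders, and the agent needs to be started with --enable-feature=agent), the in-cluster config might look something like:

```yaml
# Minimal sketch of an in-cluster prometheus.yml for agent mode
# (URL and job name are placeholders, not your actual values).
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod   # running in-cluster, so API access is auto-configured

remote_write:
  # Replace with the address of your external Prometheus
  - url: https://prometheus.example.com/api/v1/write
```

The external Prometheus would need to be started with --web.enable-remote-write-receiver to accept the pushed samples.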

Thanks for all those details. We would prefer to go with the 2nd option as it is the easiest and quickest way for our POC. Any URL or instructions on how to use the “kubernetes_sd option in Prometheus”? This is the 1st time we have heard of this term. Thanks again.

The two options are:

  1. Use kubernetes_sd, which also means being able to access pods from the Prometheus server (e.g. via a service mesh or overlay network)
  2. Run Prometheus/Prometheus agent mode/Prometheus Operator within the cluster and remote write to the external Prometheus.

These are the docs for kubernetes_sd: Configuration | Prometheus

I visited the GitHub repo at
prometheus/prometheus-kubernetes.yml at release-2.42 · prometheus/prometheus (github.com)

but it is not indicated where this file should be located. Should it be somewhere on the K8S master machine, or on the Prometheus server, which is a remote server? Also, you indicated
with an appropriate security token…
but the example only shows an HTTPS connection without any access keys being used.

The example you gave is the scrape configuration containing a few example jobs - these live in the prometheus.yml file on your Prometheus server. However, this example is designed to be run on a Prometheus that is itself hosted within Kubernetes.

You can use that as a basic example, but you’d need to look at adding the api_server, basic_auth, authorization or oauth2 options to configure the connection to the Kubernetes API (if you host Prometheus directly on Kubernetes it can figure those out automatically). You would likely need to set tls_config too.
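As a rough sketch only (the API server address and credential file paths are placeholders, and your cluster’s auth setup may differ), such a job on the external server could look like:

```yaml
# Sketch of a scrape job in prometheus.yml on the external server
# (api_server URL and file paths are placeholder values).
scrape_configs:
  - job_name: k8s-pods
    kubernetes_sd_configs:
      - role: pod
        # An external Prometheus must be told where the API server is;
        # an in-cluster one discovers this automatically.
        api_server: https://k8s-api.example.com:6443
        authorization:
          # Token for a ServiceAccount allowed to list/watch pods
          credentials_file: /etc/prometheus/k8s-sa-token
        tls_config:
          ca_file: /etc/prometheus/k8s-ca.crt
```

Note this only handles discovery; the actual scrapes still go straight to the pod IPs, which is why the network access mentioned earlier matters.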

While it is totally possible to get it all working well (don’t forget you also need the correct network access to be able to reach pods, etc.), I think most people find it easier to just run something within the cluster (e.g. Prometheus in agent mode) to handle the scraping instead of trying to do it from outside - and a fair number of people find using the Prometheus Operator useful for setting up scraping configurations.
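For a flavour of what the Operator approach looks like, a ServiceMonitor is just a small custom resource (the names and labels below are made-up examples):

```yaml
# Hypothetical ServiceMonitor: asks the Operator-managed Prometheus to
# scrape every Service labelled app=my-app on its named "metrics" port.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics     # must match a named port on the Service
      interval: 30s
```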

Stuart, to be frank, I am a little bit confused. OK, I will drop the idea of using our Prometheus and adopt/follow your recommendation:

just run something within the cluster (e.g. Prometheus in agent mode) to handle the scraping… a fair number of people find using the Prometheus Operator useful for setting up scraping configurations

So should I install both Prometheus in agent mode and the Prometheus Operator inside the cluster? Do they run as pods or containers? Any idea of a tutorial or step-by-step guide? Thanks for your help.